Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2): Automation, Implementation, and Experimental Results

  • Lari, Zahra (Department of Geomatics Engineering, University of Calgary) ;
  • Habib, Ayman (Department of Geomatics Engineering, University of Calgary) ;
  • Mazaheri, Mehdi (Department of Geomatics Engineering, University of Calgary) ;
  • Al-Durgham, Kaleel (Department of Geomatics Engineering, University of Calgary)
  • Received : 2014.03.13
  • Accepted : 2014.05.29
  • Published : 2014.06.30

Abstract

Multi-camera systems have been widely used as cost-effective tools for the collection of geospatial data for various applications. In order to fully achieve the potential accuracy of these systems for object space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, multi-camera systems should be frequently calibrated to confirm the stability of the estimated parameters. Therefore, automated techniques are needed to facilitate and speed up the system calibration procedure. The automation of the multi-camera system calibration approach, which was proposed in the first part of this paper, is contingent on the automated detection, localization, and identification of the object space signalized targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling approaches is presented. The introduced automated system calibration procedure is then implemented for a newly-developed multi-camera system while considering the optimum configuration for the data collection. Experimental results from the implemented system calibration procedure are finally presented to verify the feasibility of the proposed automated procedure. Qualitative and quantitative evaluation of the estimated system calibration parameters from two calibration sessions is also presented to confirm the stability of the cameras' interior orientation and system mounting parameters.

1. Introduction

In recent years, multi-camera systems have gained popularity due to their ability to collect geospatial data quickly and economically. These systems consist of multiple integrated low-cost cameras mounted on a kinematic or static platform, and they can be used for different mapping, modelling, and 3D surface reconstruction applications. In order to achieve the desired accuracy of these systems for the object space reconstruction process, an appropriate system calibration procedure should be performed prior to data collection. The calibration of a multi-camera system is accomplished when the involved cameras are calibrated and the mounting parameters relating the different system components are estimated (Habib et al., 2011; Rau et al., 2011). Multi-camera systems usually involve low-cost digital cameras, for which the structural integrity of the components cannot be guaranteed. Moreover, the system mounting parameters might change over time. Therefore, system calibration should be performed frequently to confirm the stability of the cameras' Interior Orientation Parameters (IOPs) and the system mounting parameters before the data collection for a given task. Hence, the photogrammetric community is interested in developing automated techniques for multi-camera system calibration with minimal preparation and user interaction before and during the calibration procedure. The automation of a multi-camera system calibration procedure depends on the automated measurement of the image coordinates of specifically-designed targets. These targets should be efficiently detected and localized in the images regardless of their location, scale, orientation, and contrast.

So far, different types of targets have been designed for automated photogrammetric system calibration, such as crosses (Mikhail and Cantiller, 1985), black dots on a white background (Beyer, 1992), retro-reflective targets (Brown, 1984), laser-light projected targets (Clarke and Katsimbris, 1994), and color-coded targets (Cronk et al., 2006). These targets provide enough contrast with the background and can be easily identified in the images. However, they are usually expensive and require specific data acquisition constraints (e.g., retro-reflective targets) or a specific target set-up (e.g., laser-light projected targets). For color-coded targets, the precision of the target localization may be degraded by chromatic aberration.

In the first part of this paper, a new single-step approach was proposed for the calibration of either directly or indirectly georeferenced multi-camera systems. The proposed system calibration procedure aims at estimating the IOPs of the individual cameras and the mounting parameters relating the system components. The objective of this part of the paper is to investigate the automation of the introduced system calibration procedure. The automation of the proposed multi-camera system calibration approach is contingent on the automated detection, localization, and identification of object space signalized targets in the images. In order to achieve this objective, two sets of signalized targets are designed and utilized. The first set consists of checkerboard targets, which are used for the automation of the proposed multi-camera system calibration approach through a bundle adjustment with self-calibration procedure. In this regard, a new target detection and labelling procedure is introduced for the extraction and identification of instances of the checkerboard targets in the images. The labelling of these targets can only be established if we have reasonable estimates of the ground coordinates of the object space targets as well as the Exterior Orientation Parameters (EOPs) of the images. In this research, object space targets are established on a 2D test field which has been printed to scale from a CAD file. Therefore, the coordinates of the CAD file are used as good estimates for the object space coordinates of the checkerboard targets. In order to estimate the EOPs of the images, a set of corresponding targets in the object and image space is required. Therefore, specifically-designed coded targets – the second set of signalized targets utilized in this research work – are used to facilitate the labelling of the checkerboard targets. The object and image space coordinates of the coded targets are used in a two-step procedure – which has been proposed in the first part of this paper – to estimate the EOPs of the involved images. The most important advantage of the coded targets, when compared to the checkerboard signalized targets, is that they can be easily labelled in the image space – without the need for their object space coordinates – due to their uniqueness. In order to automatically extract and label instances of the coded targets in the images, a sequence of image processing techniques is introduced in this paper. Then, the estimated EOPs using the coded targets, together with the object space coordinates of the checkerboard targets, are used for the labelling/identification of the latter targets. Finally, a bundle adjustment with self-calibration is used for the estimation of the multi-camera system parameters. The introduced steps for the automated detection, extraction, and identification of the different targets are summarized in Fig. 1.

Fig. 1.Outline of the proposed approach for the coded/ checkerboard target detection, extraction, and labelling

This paper begins with an introduction of the utilized coded and checkerboard targets and the proposed image processing techniques for their detection, localization, and labelling. The implementation of the automated multi-camera system calibration approach for a newly-developed multi-camera system is then presented in the next section. Finally, experimental results using real data are provided to verify the feasibility of the automated single-step multi-camera system calibration approach and test the stability of the involved cameras' IOPs.

 

2. Automation of the Multi-camera System Calibration

As mentioned earlier, the automation of the proposed multi-camera system calibration is achieved when the EOPs of the individual images are reasonably approximated and the signalized object space targets are automatically detected, localized, and labelled in the images. In the first part of this section, the designed coded targets for estimating the EOPs of the images – which will be later utilized for labelling the checkerboard targets – will be introduced. The proposed approach for the detection and identification of instances of these targets in the images will also be discussed. In the second part of this section, the proposed approach for the detection, localization, and labelling of instances of the checkerboard targets in the images will be presented. This approach combines the computational efficiency and precise localization capability of two well-known interest operators (the Harris and Förstner operators) in a hybrid corner detector to improve the performance of the target extraction procedure.

2.1 Design, automated detection, and identification of coded targets

The automation of the proposed multi-camera system calibration procedure starts with the detection and identification of instances of coded targets in the images. These targets should be designed to be robustly recognized in the images regardless of their location, scale, and orientation. In this work, twelve coded targets, as shown in Fig. 2, have been used. These targets include eight white circles on a black rectangular background with a white border. Such a colour selection is utilized to maintain the maximum contrast of a target on a complex image background. The white circles are arranged in three parallel rows, where the first row includes four white circles and the second row includes at least two white circles. The arrangement of the four white circles residing on the second and third rows determines the label of the coded target in question.

Fig. 2.The designed coded targets for the retrieval of the EOPs of the images

In order to automatically detect these coded targets, a sequential image processing procedure is implemented in a C# environment. In this procedure, the image is first binarized to simplify the target recognition process. In the next step, a blob detection algorithm is implemented to find circular blobs and rectangular shapes, both of which are included in the designed coded targets, within the binarized image. The possible distortions in the images (lens distortions and the perspective projection of an oblique test field with respect to the image plane) are considered during this blob detection procedure to avoid missing distorted circular and rectangular shapes. Instances of the designed coded targets are then identified as rectangular shapes which include eight circular blobs. In order to label a detected coded target, the four white circular blobs residing on its first row are identified first. Then, four lines are defined, each passing through the center of one of these first-row circular blobs; the orientation of the lines is established as the average orientation of the sides of the rectangular shape neighbouring the first row of circular blobs. The arrangement of the other four circular blobs on the defined lines determines the label of the coded target in question. The coordinates of the intersection point of the rectangular shape's diagonals are finally used as the image coordinates of the identified coded target. Fig. 3 shows how the identification procedure of these coded targets is carried out. The image coordinates of the coded targets, together with their object space coordinates, are then used in the two-step procedure, proposed in the first part of this work, to estimate the EOPs of the captured images. The estimated EOPs are then used for labelling the utilized checkerboard targets.

Fig.3.Automated identification of coded targets: (a) a coded target (type A), (b) detected circular blobs (within the gray border) and rectangular shapes (within the black border), (c) detected four collinear circular blobs (on the horizontal line), and (d) the lines passing through the four collinear points and parallel to rectangular shape sides (vertical lines)
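Since the exact C# implementation is not reproduced here, the following is a minimal sketch of the described candidate search (binarization, blob detection, and the eight-circles-inside-a-rectangle test) using OpenCV in Python. The thresholds, size limits, and circularity test are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def find_coded_target_candidates(gray):
    # Binarize the image to simplify target recognition; inverted so the
    # black rectangular background becomes the foreground blob.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 51, 10)
    # Two-level contour hierarchy: rectangles at the top level, the white
    # circles appearing as holes (children) of their rectangle.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for i, cnt in enumerate(contours):
        # A rectangular outline survives perspective/lens distortion as a
        # (possibly skewed) 4-vertex polygon.
        approx = cv2.approxPolyDP(cnt, 0.03 * cv2.arcLength(cnt, True), True)
        if len(approx) != 4 or cv2.contourArea(cnt) < 400:
            continue
        # Count child blobs that remain roughly circular under distortion.
        n_circles, child = 0, hierarchy[0][i][2]
        while child != -1:
            area = cv2.contourArea(contours[child])
            perim = cv2.arcLength(contours[child], True)
            if perim > 0 and 4.0 * np.pi * area / perim ** 2 > 0.6:
                n_circles += 1
            child = hierarchy[0][child][0]  # next sibling blob
        if n_circles == 8:  # a designed coded target holds eight circles
            # Image coordinates of the target: intersection of the diagonals.
            p = approx.reshape(4, 2).astype(float)
            cross = lambda a, b: a[0] * b[1] - a[1] * b[0]
            d1, d2 = p[2] - p[0], p[3] - p[1]
            t = cross(p[1] - p[0], d2) / cross(d1, d2)
            candidates.append(p[0] + t * d1)
    return candidates
```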

2.2 Automated detection, localization, and labelling of checkerboard targets

The automation of the proposed multi-camera system calibration approach continues by the detection, localization, and labelling of instances of checkerboard targets in the images. In the first step of this target extraction and labelling procedure, the Canny edge detection algorithm (Canny, 1986) is employed to abstract the huge amount of data in the images while preserving the geometric integrity of the edges bounding the targets of interest. The second step deals with the detection and localization of instances of the checkerboard targets using a hybrid corner detection approach. In the final step, the IDs for the automatically-extracted checkerboard targets from the images are assigned using the available object space coordinates of these targets.
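As a point of reference for the first step, a minimal Canny call (OpenCV in Python) is shown below; the hysteresis thresholds are illustrative assumptions, not the authors' settings.

```python
import cv2

gray = cv2.imread("calibration_image.jpg", cv2.IMREAD_GRAYSCALE)
# Abstract the image content to edges while preserving their geometric
# integrity; the hysteresis thresholds (50, 150) are illustrative choices.
edges = cv2.Canny(gray, 50, 150, L2gradient=True)
```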

In order to detect instances of the checkerboard targets in the images, the detection of corner points is beneficial, since corner points are the main constituents of the checkerboard targets. These points are defined as the points where several dominant edges with different orientations exist in their local neighbourhood (Förstner and Gülch, 1987). In order to extract the corner points from the images, image processing algorithms, which are commonly known as interest operators, are employed. In this research, two well-known interest operators (i.e., the Harris and Förstner operators) are briefly reviewed and applied in a hybrid procedure for efficient and precise corner detection and localization. The Harris interest operator was introduced by Harris and Stephens (1988) as an improvement upon the classical Moravec operator (Moravec, 1977). This operator considers the local changes in the gradients of the extracted edges in both row and column directions to find corner points. A corner detector measure is then computed using the estimated gradients to determine whether a given edge pixel represents a corner point or not. The Harris interest operator is generally accepted as an efficient approach for corner detection due to its simplicity and computational efficiency compared to other interest operators. However, it is very sensitive to inherent noise in the image (since it relies on gradient information) and suffers from poor localization (El-Hakim, 2002; Remondino, 2006). The Förstner interest operator, which was proposed by Förstner and Gülch (1987) for identifying corner points and interest regions in the images, is based on the assumption that a corner point is the point that is statistically closest to all edge elements intersecting at that corner. A Least Squares Adjustment (LSA) is devised to estimate the coordinates of the interest points with sub-pixel accuracy. Using the normal equation matrix of the LSA, this operator then evaluates the quality of the corner points by analysing the shape and the size of the error ellipses describing the variance-covariance matrix (i.e., the inverse of the normal equation matrix) associated with the derived corner location. Reliable corner points should have a small, nearly circular error ellipse. Due to its precise localization of corner points, the Förstner operator is widely used for different photogrammetric applications. However, it is computationally inefficient (Zhang et al., 2001; Remondino, 2006). The computational inefficiency is attributed to the fact that evaluating the size and shape of the error ellipses requires moving a scanning window through the whole image to evaluate the error-ellipse quality measures and performing a local non-maxima suppression of these measures.
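A minimal sketch of the Harris corner measure follows (NumPy/SciPy in Python); the smoothing scale and the empirical constant k = 0.04 are standard textbook choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(gray, sigma=1.5, k=0.04):
    gray = gray.astype(float)
    # Gradients in the column (x) and row (y) directions.
    gx, gy = sobel(gray, axis=1), sobel(gray, axis=0)
    # Structure tensor entries, averaged over a local Gaussian window.
    a = gaussian_filter(gx * gx, sigma)
    b = gaussian_filter(gx * gy, sigma)
    c = gaussian_filter(gy * gy, sigma)
    # Corner measure R = det(M) - k * trace(M)^2; strong positive responses
    # mark pixels with dominant gradients in both directions.
    return (a * c - b * b) - k * (a + c) ** 2
```

Candidate corners are then taken as the local maxima of this response above a threshold.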

In order to exploit the computational efficiency and simplicity of the Harris operator and the precise positioning capability of the Förstner interest operator, a hybrid corner detection approach is utilized in this research. In the first step of this approach, the Harris interest operator is applied to find the approximate corner locations in an image. The Förstner interest operator is then employed to precisely localize the detected corner points. Therefore, the approximate corner locations, which have been determined through the Harris corner detection procedure, will be refined by estimating the precise coordinates of the point which is statistically closest to the extracted edges within the defined window – i.e., the new corner coordinates will be estimated by placing the window at the Harris-based interest point. The proposed procedure eliminates the Förstner operator component that evaluates the quality of all possible corners in the image since reliable corner points are already determined using the Harris interest operator. In order to verify the computational efficiency of the proposed hybrid approach, the processing times for the detection of corner points, while using the introduced hybrid approach and the Förstner operator, are listed in Table 1. The comparison of the tabulated processing times shows that the implementation of the proposed hybrid approach considerably improves the efficiency of the corner detection procedure.

Table 1.The required processing time for the detection of corner points in a single image using the Förstner operator and the proposed hybrid approach
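The refinement step of the hybrid detector can be sketched as follows (a sketch under stated assumptions: gx and gy are the gradient images from the previous sketch, and the 11×11 window size is an illustrative choice). For each Harris detection, the Förstner least-squares solution finds the point statistically closest to all edge elements in the window, with no image-wide quality evaluation needed.

```python
import numpy as np

def forstner_refine(gx, gy, row, col, half=5):
    # Normal equations of the LSA: each pixel contributes the line through
    # it with normal along its gradient, i.e. g^T (x - p) = 0.
    N = np.zeros((2, 2))
    b = np.zeros(2)
    for r in range(row - half, row + half + 1):
        for c in range(col - half, col + half + 1):
            g = np.array([gy[r, c], gx[r, c]])
            G = np.outer(g, g)
            N += G
            b += G @ np.array([r, c], dtype=float)
    if abs(np.linalg.det(N)) < 1e-9:
        return float(row), float(col)  # degenerate window: keep Harris location
    # Sub-pixel corner: point statistically closest to all edge elements.
    return tuple(np.linalg.solve(N, b))
```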

2.2.1 Checkerboard target detection

The precise localization of the centers of the checkerboard targets, as well as of many other features, has already been established by the introduced corner detection and localization procedure. Therefore, the remaining issue is the detection of instances of the checkerboard targets among the localized corner points. This detection is carried out by identifying the corner points which correspond to a predefined checkerboard template whose dimensions are smaller than the size of the checkerboard target (Fig. 4(a)). A template matching process is performed at each corner location. A checkerboard target is declared where the correlation value between the local window defined at that corner location and the checkerboard template exceeds a predefined threshold. Since the orientation of the checkerboard targets may differ within the images due to the target setup and/or κ-rotation during image acquisition (Fig. 4(b)), both positive and negative correlation measures are considered for the detection of instances of these targets. Fig. 5 demonstrates the intermediate and final results of the checkerboard target localization and detection procedures.

Fig. 4.(a) The checkerboard template and (b) possible orientation variation of checkerboard targets as a result of κ-rotation during image acquisition

Fig. 5.Checkerboard target localization and detection procedures: (a) original image, (b) detected corners by the Harris Operator (yellow crosses), (c) modified corners by the Förstner operator (red crosses), and (d) the localized and detected center of the checkerboard target after correlation filtering (green cross)
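A minimal sketch of this correlation filtering in Python/OpenCV is given below; the correlation threshold of 0.8 and the per-window use of matchTemplate are illustrative assumptions. Taking the absolute correlation value covers the κ-rotated targets, which correlate negatively with the template.

```python
import cv2

def is_checkerboard_target(gray, row, col, template, threshold=0.8):
    h, w = template.shape
    r0, c0 = int(round(row)) - h // 2, int(round(col)) - w // 2
    window = gray[r0:r0 + h, c0:c0 + w]
    if window.shape != template.shape:
        return False  # corner too close to the image border
    # Normalized cross-correlation of the local window with the template.
    score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)[0, 0]
    # A kappa-rotated target correlates negatively, so both positive and
    # negative correlation measures are accepted.
    return abs(score) >= threshold
```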

2.2.2 Automated target labelling

So far, we have discussed the proposed approaches for the detection and precise localization of the checkerboard targets. In the next step, proper IDs need to be assigned to the automatically-extracted targets. This labelling process is carried out by assigning the IDs of a set of object space checkerboard targets within the calibration test field to the automatically-extracted instances of these targets in the image space. To solve this problem, an automated procedure for the labelling of these targets in different images is introduced in this section. In this procedure, the EOPs of the images – estimated through the linear-based projective transformation and refined through the SPR procedure – together with the nominal IOPs of the utilized camera, are utilized to project the coordinates of the object space targets onto the image space. In order to automatically label the extracted checkerboard targets, they are first organized in a two-dimensional kd-tree structure. This structure is established to optimize the search for the extracted targets that are nearest to the projected object points. A projected target and its nearest extracted target in a given image are considered a match only if the distance between them is not more than half of the average distance between the automatically-extracted target in question and its k neighbouring extracted targets. For a matched pair, the ID of the projected object target is assigned to its nearest automatically-extracted image target. Fig. 6 shows a projected object target (red cross) and its neighbouring automatically-extracted image target (cyan cross).

Fig. 6.Projected object target (red cross) and its neighbouring automatically-extracted target (cyan cross)
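A minimal sketch of this matching rule, assuming SciPy's cKDTree and k = 4 neighbours (the paper does not state its value of k):

```python
import numpy as np
from scipy.spatial import cKDTree

def label_targets(extracted_xy, projected_xy, target_ids, k=4):
    """extracted_xy: (n, 2) automatically-extracted image targets;
    projected_xy: (m, 2) object targets projected via the EOPs/IOPs;
    target_ids: the m object space target IDs."""
    tree = cKDTree(extracted_xy)  # 2D kd-tree over the extracted targets
    labels = {}
    for tid, p in zip(target_ids, projected_xy):
        dist, idx = tree.query(p)  # nearest extracted target
        # Average spacing of that target to its k neighbours (k + 1 queried
        # because the first hit is the point itself).
        ndist, _ = tree.query(extracted_xy[idx], k=k + 1)
        if dist <= 0.5 * ndist[1:].mean():
            labels[idx] = tid  # assign the object target ID to the match
    return labels
```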

 

3. Implementation of the Proposed Multi-camera System Calibration Procedure

In this section, a newly-developed multi-camera system, which has been designed and developed for different metrology applications, is introduced and the implemented procedure for the calibration of this system is presented. This section starts with a brief introduction of the developed multi-camera system architecture. Afterwards, the implemented procedure for the collection of the required data for the system calibration is described.

3.1 System architecture

The introduced multi-camera system includes seven low-cost digital cameras (Canon EOS Rebel T3) mounted on a reinforced arc-shaped aluminium arm (Fig. 7). The utilized cameras have an array dimension of 4272×2848 pixels with a 5.2 μm pixel size and a 30 mm nominal focal length. They are also equipped with an electronic shutter that is suitable for extended operation at a high image acquisition frequency (up to 3 fps). These cameras are aligned to capture convergent images of the object of interest, which is usually 1-1.5 m away from the central camera.

Fig. 7.The designed multi-camera system

For such a multi-camera system, synchronization of the digital cameras is essential for the simultaneous image acquisition of a dynamic object. For synchronized image acquisition, all of the cameras are initially connected to a host computer through a USB hub. The synchronization of the integrated cameras is then carried out using a software application, which is based on the Canon Digital Camera Software Development Kit (CD-SDK). This application is used for initializing the individual camera settings, commanding the cameras to simultaneously capture the images, and downloading the images from the cameras to the host computer.

3.2 Data collection

As mentioned earlier, in order to determine the system calibration parameters, a bundle adjustment procedure using object space information is performed. In this research, object space targets – checkerboard and coded targets – are established on a board test field which has been printed to scale from a CAD file (Fig. 8). Therefore, the coordinates of the CAD file are used as good estimates of the object space coordinates of the checkerboard targets. Six coordinates of three non-collinear targets, together with some measured distances, are utilized to define the minimum datum for the bundle adjustment with self-calibration procedure. The object space coordinates of the checkerboard/coded targets are then approximately derived relative to this datum.

Fig. 8.Sample image of the test field with checkerboard and coded targets

Afterwards, multiple images of this test field need to be acquired. In order to avoid dependencies between the IOPs and EOPs within the system calibration adjustment, convergent images as well as images in portrait and landscape modes should be acquired. These images can be acquired either by rotating the individual cameras with respect to the test field or by rotating the test field with respect to the fixed cameras. In this research, since the cameras have been rigidly fixed on the designed arm, the test field board is rotated during image acquisition. Fig. 9 illustrates the implemented rotations of the test field relative to the multi-camera system for the image capture, and Fig. 10 shows the position and orientation of the images captured by the central camera using the proposed acquisition procedure.

Fig. 9.Image acquisition scheme for multi-camera system calibration

Fig. 10.The distribution (position and orientation) of the captured images (the green, blue, and red lines represent the x, y, and z axes of the camera coordinate system for the different images) and object space targets (black crosses)

 

4. Experimental Results

In this section, experiments using the collected data are conducted to demonstrate the feasibility of the introduced automated single-step procedure for multi-camera system calibration. The first set of experiments is implemented to quantitatively and qualitatively assess the outcome of the utilized automated target extraction procedure for the automation of the multi-camera system calibration. The second set of experiments is conducted to evaluate the performance of the implemented multi-camera system calibration, investigate the stability of the cameras' calibration parameters and system mounting parameters, and analyse the quality of the reconstructed object space from two different calibration sessions. Both experiments are based on two datasets, each consisting of 176 images collected at 31 stations (one dataset per calibration session). In the following subsections, these experiments will be presented and their results will be analysed.

4.1 Performance analysis of the automated target detection procedure

The objective of this experiment is to evaluate the performance of the proposed procedure for the automated extraction of the coded and checkerboard targets in the images. This evaluation is performed by considering the detection rate of different targets and the quality of their localization in the images. Table 2 lists the total number of visible coded and checkerboard targets in the collected images for the two calibration sessions. The detection rate of different target types – the percentage of automatically-extracted targets – and the accuracy of the extracted targets, which has been evaluated through the a-posteriori variance factor of the bundle adjustment with self-calibration procedure, are also reported in Table 2. Based on the reported results in Table 2, one can observe that a higher percentage of coded targets are automatically extracted when compared to the checkerboard targets (i.e., 94% and 96% of the coded targets have been automatically extracted while only 81% and 84% of the checkerboard targets have been automatically extracted). Therefore, it can be concluded that the automated extraction of coded targets is more robust to the imaging conditions when compared to checkerboard targets. The investigation of the estimated accuracy for automatically-extracted targets verifies that these targets have been localized with sub-pixel accuracy (i.e., close to 1/3 of the pixel size).

Table 2.The automated detection rate and localization accuracy of coded and checkerboard targets for collected datasets in two calibration sessions

4.2 Qualitative and quantitative stability analysis of the estimated system calibration parameters

This set of experiments is conducted to investigate the performance of the proposed automated approach for multi-camera system calibration and to evaluate the stability of the estimated system calibration parameters. The calibration is performed using the bundle adjustment with self-calibration, which is introduced in the first part of this paper, through an indirect georeferencing procedure. The utilized distortion model for this self-calibration procedure includes the K1 and K2 radial lens distortion parameters while ignoring the de-centring lens distortion and in-plane distortion coefficients, since they are deemed insignificant for the utilized cameras. Table 3 reports the camera calibration results for the individual cameras within the multi-camera system from the two calibration sessions, where xp, yp, c, K1, and K2 represent the principal point coordinates, the principal distance, and the radial lens distortion coefficients, respectively. One can observe in Table 3 that the IOPs of the individual cameras have been estimated with high precision in both calibration sessions.

Table 3.Camera calibration results using the proposed technique for two calibration sessions
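For reference, the adopted radial distortion terms are commonly written as follows (standard form consistent with the parameters in Table 3; the paper itself does not reproduce the equations):

$$\bar{x} = x - x_p, \qquad \bar{y} = y - y_p, \qquad r^2 = \bar{x}^2 + \bar{y}^2$$

$$\Delta x_{rad} = \bar{x}\,(K_1 r^2 + K_2 r^4), \qquad \Delta y_{rad} = \bar{y}\,(K_1 r^2 + K_2 r^4)$$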

Qualitative evaluation of the IOPs – the individual cameras' calibration parameters – by comparing the derived parameters from the two calibration sessions reveals that the estimated IOPs are stable. Quantitative evaluation of the estimated IOPs from the two calibration sessions is also performed using the camera stability analysis approaches proposed by Habib et al. (2006) to verify the equivalency of the estimated IOPs from the two calibration sessions. The IOP equivalency analysis is based on evaluating the degree of similarity between the reconstructed bundles of light rays from two IOP sets using either the Zero ROTation (ZROT) or ROTation (ROT) approach. The difference between these approaches lies in the constraints imposed on the position and orientation of the two bundles prior to the evaluation of the similarity measures. In the ZROT method, the two bundles are forced to share the same perspective center and optical axis. The ROT method, on the other hand, allows for relative rotation between the two bundles while sharing the same perspective center to achieve the best alignment between the bundles. The ZROT approach is therefore stricter, since it does not allow for any shift or rotation to co-align the reconstructed bundles of light rays. In both approaches, the Root Mean Square Error of the discrepancies between conjugate points following their projection from one image plane onto the other (RMSEoffset) is used as the quantitative measure for evaluating the degree of similarity between the two bundles. The two bundles (IOP sets) are deemed equivalent if the RMSEoffset is within the expected noise level of the image coordinate measurements, which is usually in the range of half a pixel. Table 4 shows the estimated ZROT and ROT similarity measures between the derived IOPs of the involved cameras from the two calibration sessions. Considering the reported similarity measures in Table 4, one can conclude that the estimated IOPs for the individual cameras from the two calibration sessions are quite similar (the RMSEoffset is less than or in the range of half a pixel).

Table 4.Quantitative similarity analysis of the estimated IOPs for the individual cameras from two calibration sessions
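The idea behind the ZROT measure can be sketched as follows (a simplified illustration, not the exact implementation of Habib et al. (2006)). Assumptions: both bundles share the perspective center and optical axis, distortion is limited to the K1/K2 radial terms above, and an IOP set is the tuple (xp, yp, c, K1, K2) in consistent units.

```python
import numpy as np

def radial(xb, yb, k1, k2):
    r2 = xb ** 2 + yb ** 2
    return xb * (k1 * r2 + k2 * r2 ** 2), yb * (k1 * r2 + k2 * r2 ** 2)

def zrot_rmse_offset(iop1, iop2, grid_xy):
    xp1, yp1, c1, k11, k21 = iop1
    xp2, yp2, c2, k12, k22 = iop2
    x, y = grid_xy[:, 0], grid_xy[:, 1]
    # Distortion-free coordinates of a grid of points under IOP set 1.
    xb, yb = x - xp1, y - yp1
    dx, dy = radial(xb, yb, k11, k21)
    xb, yb = xb - dx, yb - dy
    # Project the same bundle of rays onto the second image plane
    # (scaling by c2/c1) and re-apply the distortions of IOP set 2.
    xb2, yb2 = xb * c2 / c1, yb * c2 / c1
    dx2, dy2 = radial(xb2, yb2, k12, k22)
    x2, y2 = xp2 + xb2 + dx2, yp2 + yb2 + dy2
    # RMSE of the discrepancies between conjugate points.
    return np.sqrt(np.mean((x2 - x) ** 2 + (y2 - y) ** 2))
```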

Table 5 reports the estimated mounting parameters (lever-arm components and boresight angles) among the involved cameras using the proposed single-step procedure for the two calibration sessions. In the reported results, camera "4" was taken as the reference camera (i.e., the position and orientation of the platform refer to the position and orientation of camera "4"). One can observe in Table 5 that the mounting parameters among the cameras have been estimated with high precision for both calibration sessions (i.e., the standard deviations of the boresight angles range from ±3.34" to ±23.27", while the lever-arm offsets have been estimated with a precision ranging from ±0.03 to ±0.29 mm). Also, the lever-arm offsets are very close to the physically measured values. The estimated a-posteriori variance factors for both calibration experiments are also reported in Table 5, confirming the validity of the derived system calibration results and the quality of the target localization process. Table 6 presents the quantitative similarity analysis results of the estimated mounting parameters relating the individual cameras to the reference one from the two calibration sessions. The small mean, standard deviation, and RMSE values for the differences between the system mounting parameters estimated through the two calibration sessions verify the stability/similarity of these parameters.

Table 5.Estimated a-posteriori variance factor and system mounting parameters (lever-arm components and boresight angles) w.r.t. camera 4 (reference camera) for the two calibration sessions

Table 6.The similarity analysis of the estimated system mounting parameters from the two calibration sessions
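To make the reported quantities concrete, the sketch below shows how lever-arm and boresight values relating a camera to the reference camera can be derived from the bundle-adjustment EOPs of a single exposure epoch. The rotation convention (R rotating camera-frame vectors into the mapping frame, with an omega-phi-kappa factorization) is an assumption; the paper's exact parameterization may differ.

```python
import numpy as np

def mounting_parameters(R_ref, C_ref, R_cam, C_cam):
    """R_*: 3x3 rotations from camera frame to mapping frame (from the EOPs);
    C_*: perspective centers in the mapping frame."""
    # Lever-arm offset of the camera expressed in the reference-camera frame.
    lever_arm = R_ref.T @ (C_cam - C_ref)
    # Boresight rotation between the two camera frames.
    R_bore = R_ref.T @ R_cam
    # Extract omega/phi/kappa from R(omega, phi, kappa) = R_x R_y R_z.
    phi = np.arcsin(R_bore[0, 2])
    omega = np.arctan2(-R_bore[1, 2], R_bore[2, 2])
    kappa = np.arctan2(-R_bore[0, 1], R_bore[0, 0])
    return lever_arm, np.degrees([omega, phi, kappa])
```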

In order to verify the equivalency of the reconstructed object space using the two calibration sessions, an RMSE analysis of the estimated object space coordinates for sixty checkerboard targets is performed and reported in Table 7 (i.e., the derived coordinates of the checkerboard targets from the two calibration sessions are compared to each other). The small RMSE values confirm that the estimated object space coordinates from the two calibration sessions are quite similar.

Table 7.RMSE analysis of the reconstructed object space (using sixty checkerboard targets) from the two calibration sessions

 

5. Conclusions and Recommendations for Future Research Work

Multi-camera systems have been recognized as fast, light-weight, and cost-effective tools for the derivation of 3D data pertaining to physical surfaces. These systems are being widely used to provide the required information for 3D photogrammetric reconstruction to satisfy the needs of a wide range of applications. The potential accuracy of these systems for object space reconstruction is achieved only when a careful system calibration is performed prior to the data collection procedure. In the first part of this paper, a new multi-camera system calibration approach was introduced which can be applied to either indirectly or directly georeferenced multi-camera systems. In this second part, the automation of the proposed system calibration procedure was presented. The automation of this procedure is carried out through the utilization of specifically-designed coded and checkerboard signalized targets. This paper introduced new approaches for the automated detection, precise localization, and labelling of instances of the utilized targets in the images. The detection, localization, and labelling of the coded targets was performed by considering the number and arrangement of circular shapes bounded by a rectangular shape. A computationally-efficient and precise hybrid corner detection approach was then introduced for the detection and localization of instances of the checkerboard targets in the images. This approach takes advantage of the complementary characteristics of two well-known interest operators (i.e., the Harris and Förstner operators). Then, an automated technique was used to assign the proper IDs to the extracted targets.

Furthermore, a newly-developed multi-camera system was introduced and the implemented procedure for the calibration of the involved cameras and system mounting parameters was described in this paper. Experimental results using real data were finally provided to verify the feasibility of the proposed automated approach for target detection, localization, and labelling, as well as the single-step approach for the estimation of the cameras' calibration parameters and system mounting parameters. This procedure provides a very good target detection rate, sub-pixel localization, and accurate estimates of the camera calibration parameters and the mounting parameters among the cameras due to the explicit enforcement of relative orientation constraints. The comparison of the derived calibration parameters from two different calibration sessions was also carried out to confirm the stability of the cameras' IOPs and the mounting parameters relating the involved cameras to the reference camera.

Future research work will focus on further testing of the performance of the proposed method using real datasets from different multi-camera systems (either directly or indirectly georeferenced). Moreover, the optimum imaging and control configuration for reliable estimation of the system mounting parameters will be investigated. Standard procedures will also be developed to conduct the stability analysis of the system calibration parameters in order to confirm the invariability of these parameters prior to a given mapping task. In addition, the expansion of the proposed approach to multi-laser scanner system calibration and stability analysis will be investigated.

References

  1. Beyer, H. (1992), Geometric and Radiometric Analysis of a CCD Camera Based Photogrammetric Close-range System, Ph.D. dissertation, Institut für Geodäsie und Photogrammetrie, ETH-Hönggerberg, Zurich, Switzerland.
  2. Brown, D.C. (1984), Tools of the Trade, Close-range Photogrammetry and Surveying: State of the Art, The American Society of Photogrammetry, USA, 941p.
  3. Canny, J. (1986), A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, pp. 679-698.
  4. Clarke, T.A. and Katsimbris, A. (1994), The use of diode laser collimators for targeting 3-D objects, International Archives of Photogrammetry and Remote Sensing, Vol. 30, No. 5, pp. 47-54.
  5. Cronk, S., Fraser, C., and Hanley, H. (2006), Automated metric calibration of colour digital cameras, The Photogrammetric Record, Vol. 21, No. 116, pp. 355-372. https://doi.org/10.1111/j.1477-9730.2006.00380.x
  6. El-Hakim, S.F. (2002), Semi-automatic 3D reconstruction of occluded and unmarked surfaces from widely separated views, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 34, No. B5, pp. 143-148.
  7. Förstner, W. and Gülch, E. (1987), A fast operator for detection and precise location of distinct points, corners and centres of circular features, Proceedings of ISPRS Intercommission Workshop, Interlaken, Switzerland, pp. 281-305.
  8. Habib, A., Kersting, A.P., Bang, K.-I., and Rau, J. (2011), A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system, Archives of Photogrammetry, Cartography and Remote Sensing, Vol. 22, pp. 173-185.
  9. Harris, C. and Stephens, M. (1988), A combined corner and edge detector, Proceedings of Fourth Alvey Vision Conference, pp. 147-151.
  10. Jazayeri, I. and Fraser, C.S. (2010), Interest operators for feature-based matching in close range photogrammetry, The Photogrammetric Record, Vol. 25, No. 129, pp. 24-41. https://doi.org/10.1111/j.1477-9730.2009.00559.x
  11. Mikhail, E.M. and Cantiller, D.B. (1985), Geometric effects of digital image processing operations, Proceedings of 51st Annual Meeting of ASPRS, pp. 717-724.
  12. Moravec, H. P. (1977), Towards automatic visual obstacle avoidance, Proceedings of Fifth International Joint Conference on Artificial Intelligence, pp. 584-584.
  13. Rau, J.-Y., Habib, A.F., Kersting, A.P., Chiang, K.-W., Bang, K.-I., Tseng, Y.-H., and Li, Y.-H. (2011), Direct sensor orientation of a land-based mobile mapping system, Sensors, Vol. 11, No. 7, pp. 7243-7261. https://doi.org/10.3390/s110707243
  14. Remondino, F. (2006), Detectors and descriptors for photogrammetric applications, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 36, No. 3, pp. 49-54.
  15. Zhang, Y., Lin, Z., and Zhang, J. (2001), An image mosaicking approach based on image matching and adjustment, Journal of Image and Graphics, Vol. 6(A), No. 4, pp. 338-342.
