3-D Reconstruction and Stereo Self-Calibration for Augmented Reality
The main focus of this work is the development of new methods for the self-calibration of a rigid stereo camera system. Many of the algorithms introduced here have a wider impact, however, particularly in robot hand-eye calibration with all its different areas of application. Stereo self-calibration refers to the computation of the intrinsic and extrinsic parameters of a stereo rig using a priori knowledge of neither the movement of the rig nor the geometry of the observed scene. The stereo parameters obtained by self-calibration, namely the rotation and translation from the left to the right camera, are used to compute depth maps for both images, which in turn allow correctly occluded virtual objects to be rendered into a real scene (Augmented Reality). The proposed methods were evaluated on real and synthetic data and compared to algorithms from the literature. In addition to a stereo rig, an optical tracking system with a camera mounted on an endoscope was calibrated without a calibration pattern using the proposed extended hand-eye calibration algorithm.

The self-calibration methods developed in this work have a number of features that make them easily applicable in practice. They rely on temporal feature tracking only, since monocular tracking through a continuous image sequence is much easier than left-to-right matching while the camera parameters are still unknown. Intrinsic and extrinsic camera parameters are computed during the self-calibration process, i.e., no calibration pattern is required. The proposed stereo self-calibration approach can also be used for extended hand-eye calibration, where the eye poses are obtained by structure-from-motion rather than from a calibration pattern.

An inherent problem of hand-eye calibration is that it requires at least two general movements of the cameras in order to compute the rigid transformation. If the motion is not general enough, only some of the parameters can be recovered, which is not sufficient for computing depth maps. A main part of this work therefore discusses methods for data selection that increase the robustness of hand-eye calibration. Several new approaches are presented, the most successful ones being based on vector quantization. The data selection algorithms developed in this work can be used not only for stereo self-calibration but also for classic robot hand-eye calibration, and they are independent of the particular hand-eye calibration algorithm used.
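To make the depth-map step of this pipeline concrete: once self-calibration has produced the intrinsics and the left-to-right rotation R and translation t, depth can be obtained by rectifying the pair and matching along epipolar lines. The following is a minimal sketch of that step using OpenCV; the function name and the choice of SGBM matcher are illustrative assumptions, not the book's implementation.

```python
import numpy as np
import cv2

def depth_from_stereo(img_l, img_r, K1, d1, K2, d2, R, t):
    """Rectify a stereo pair from self-calibrated parameters and
    compute a depth map from disparity (hypothetical helper)."""
    size = (img_l.shape[1], img_l.shape[0])
    # Rectification from the left-to-right rotation R and translation t.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, t)
    maps_l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    maps_r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, *maps_l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, *maps_r, cv2.INTER_LINEAR)
    # Semi-global block matching; disparity comes back in 1/16-pixel units.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
    disp = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
    # Q reprojects disparity to 3-D; the Z channel is the depth map.
    xyz = cv2.reprojectImageTo3D(disp, Q)
    return xyz[..., 2]
```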
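The reliance on temporal feature tracking can likewise be illustrated with a standard pyramidal Lucas-Kanade tracker. This sketch (assuming OpenCV and grayscale frames; the tracker choice and helper name are assumptions, not the book's method) exploits the small frame-to-frame motion that makes monocular tracking so much easier than left-to-right matching with unknown camera parameters.

```python
import numpy as np
import cv2

def track_through_sequence(frames, max_corners=400):
    """Track corners through a continuous monocular image sequence."""
    pts = cv2.goodFeaturesToTrack(frames[0], max_corners, 0.01, 8)
    ids = np.arange(len(pts))                 # feature identities
    tracks = {i: [p.ravel()] for i, p in zip(ids, pts)}
    for prev, cur in zip(frames, frames[1:]):
        # Small inter-frame motion keeps pyramidal LK tracking reliable.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
        ok = status.ravel() == 1
        pts, ids = nxt[ok].reshape(-1, 1, 2), ids[ok]   # keep survivors
        for i, p in zip(ids, pts):
            tracks[i].append(p.ravel())
    return tracks                             # feature id -> 2-D positions
```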
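The requirement of at least two general movements stems from the classic hand-eye equation A_i X = X B_i: for each pair of relative motions, the rotation axis a_i of the camera ("eye") motion A_i and the axis b_i of the hand motion B_i are related by a_i = R_X b_i, so a single motion, or several motions with parallel axes, leaves the rotation R_X underdetermined. Below is a minimal sketch of recovering the rotation part, assuming SciPy; this is the textbook axis-alignment formulation, not the extended algorithm developed in the book.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def hand_eye_rotation(cam_motions, hand_motions):
    """Rotation part of the hand-eye equation A_i X = X B_i.

    Since R_A = R_X R_B R_X^T, the rotation axes satisfy a_i = R_X b_i,
    so R_X is the rotation best aligning the two axis sets.  At least
    two motions with non-parallel (non-degenerate) axes are required.
    """
    a = np.array([Rotation.from_matrix(A).as_rotvec() for A in cam_motions])
    b = np.array([Rotation.from_matrix(B).as_rotvec() for B in hand_motions])
    # Normalize to unit axes; corresponding rotation angles are equal.
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    r_x, _ = Rotation.align_vectors(a, b)  # least-squares fit of a_i = R_X b_i
    return r_x.as_matrix()
```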
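One plausible reading of vector-quantization-based data selection is to cluster the rotation axes of the available relative motions and keep one representative per cluster, so that the selected motions span well-separated directions and the hand-eye problem stays well conditioned. A rough sketch using k-means from SciPy's vector quantization module follows; the concrete selection criterion in the book may differ.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def select_diverse_motions(rot_axes, n_clusters=4):
    """Pick one motion per cluster of rotation axes (vector quantization).

    rot_axes: (N, 3) array of unit rotation axes of relative motions.
    Returns indices of the motions closest to each cluster centroid,
    favouring well-separated axes for robust hand-eye calibration.
    """
    centroids, labels = kmeans2(rot_axes, n_clusters, minit='++')
    picks = []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        if members.size == 0:
            continue  # empty cluster: nothing to select
        dist = np.linalg.norm(rot_axes[members] - centroids[k], axis=1)
        picks.append(int(members[np.argmin(dist)]))
    return picks
```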