Abstract
In recent years, depth cameras have been widely used in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with tracking, allowing operation in unprepared environments. However, methods that rely on predefined CAD models have their own advantages: measurement errors do not accumulate in the model, they tolerate inaccurate initialization, and tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera using existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We use a GPGPU-based implementation of ICP that efficiently exploits all the depth data. The method runs in real time, is robust to outliers, and requires no preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, a depth-based SLAM method, and the ground truth. The results show that our approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
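The per-frame alignment step described above can be illustrated with a minimal point-to-point ICP. The paper itself uses a GPGPU implementation operating on full depth frames; the sketch below is a simplified CPU version in NumPy/SciPy with assumed synthetic point clouds standing in for the rendered-model and captured-frame clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """SVD-based rigid transform (R, t) minimizing ||R @ A[i] + t - B[i]||
    given point correspondences A[i] <-> B[i]."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=30):
    """Iteratively match each src point to its nearest dst neighbor and
    re-estimate the rigid transform; returns the accumulated (R, t)."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                  # nearest-neighbor matches
        R, t = best_fit_transform(cur, dst[idx])  # incremental update
        cur = cur @ R.T + t
        R_total = R @ R_total                     # compose incremental pose
        t_total = R @ t_total + t
    return R_total, t_total

# Synthetic demo (assumed data): a cloud displaced by a small known motion,
# mimicking the pose change between two consecutive depth frames.
rng = np.random.default_rng(42)
src = rng.random((1000, 3))
angle = np.radians(1.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.005, 0.01, -0.005])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

In a tracking loop, `src` would come from the depth map of the CAD model rendered at the latest pose estimate, `dst` from the captured depth frame, and the recovered `(R_est, t_est)` would be composed onto the current pose; because the model is rendered anew each frame, errors are not accumulated into the reference geometry.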
Original language | English
---|---
Number of pages | 18
Journal | Journal of Virtual Reality and Broadcasting
Volume | 13
Issue number | 1
DOIs |
Publication status | Published - 2016
MoE publication type | A1 Journal article-refereed
Keywords
- CAD model
- depth camera
- ICP
- KINECT
- mixed reality
- pose estimation
- tracking