Real-time depth camera tracking with CAD models and ICP

Otto Korkalo, Svenja Kahn

    Research output: Contribution to journal › Article › Scientific › peer-reviewed

    Abstract

    In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods tolerate inaccurate initialization, and tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We use a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
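The tracking loop described in the abstract (render the CAD model at the latest pose estimate, turn both the rendered and the captured depth data into point clouds, and recover the incremental pose change by aligning them with ICP) can be sketched as follows. This is a minimal, single-threaded point-to-point ICP illustration with brute-force nearest-neighbour matching and a closed-form Kabsch/SVD alignment step; it is not the paper's GPGPU projective-correspondence implementation, and all function names here are our own.

```python
import numpy as np

def icp_step(model_pts, frame_pts):
    """One ICP iteration: match each frame point to its nearest model
    point, then solve the best-fit rigid transform (Kabsch/SVD).
    Returns a 4x4 homogeneous transform mapping frame -> model."""
    # Brute-force nearest-neighbour correspondences (for clarity only;
    # the paper's method uses projective data association on the GPU).
    d = np.linalg.norm(frame_pts[:, None, :] - model_pts[None, :, :], axis=2)
    matched = model_pts[np.argmin(d, axis=1)]

    # Closed-form rigid alignment of frame_pts onto the matched points.
    mu_f, mu_m = frame_pts.mean(0), matched.mean(0)
    H = (frame_pts - mu_f).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_f
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def track(model_pts, frame_pts, iters=20):
    """Iterate ICP and accumulate the incremental transforms into a
    single pose update, as in frame-to-model tracking."""
    pose = np.eye(4)
    pts = frame_pts.copy()
    for _ in range(iters):
        T = icp_step(model_pts, pts)
        pts = pts @ T[:3, :3].T + T[:3, 3]   # move frame cloud toward model
        pose = T @ pose
    return pose
```

In the paper's setting, `model_pts` would be back-projected from the depth map of the CAD model rendered at the previous pose, and `frame_pts` from the current sensor depth frame, so each converged `pose` directly updates the camera pose in the reference model's coordinate system.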
    Original language: English
    Number of pages: 18
    Journal: Journal of Virtual Reality and Broadcasting
    Volume: 13
    Issue number: 1
    DOI: 10.20385/1860-2037/13.2016.1
    Publication status: Published - 2016
    MoE publication type: A1 Journal article-refereed

    Keywords

    • CAD model
    • depth camera
    • ICP
    • Kinect
    • mixed reality
    • pose estimation
    • tracking

    Cite this

    @article{3c2d0b0241674106a07c0bbb430c8a77,
      title = "Real-time depth camera tracking with CAD models and ICP",
      author = "Otto Korkalo and Svenja Kahn",
      keywords = "CAD model, depth camera, ICP, Kinect, mixed reality, pose estimation, tracking",
      note = "SDA: SHP: Pro-Io-T",
      year = "2016",
      doi = "10.20385/1860-2037/13.2016.1",
      language = "English",
      volume = "13",
      journal = "Journal of Virtual Reality and Broadcasting",
      issn = "1860-2037",
      number = "1",
    }

