Blog (3)
State-of-the-art (SOTA) deep learning-based SLAM methods can be roughly categorized into two classes according to the camera pose estimation process.
It is no longer a secret that robots are part of everyday life. They can be found everywhere, from children's toys to autonomous vehicles. Robots help us by handling routine tasks, entertaining us, and giving us new abilities. Some of the most useful service robots are human assistant robots that perform remote, repetitive, and tedious jobs. A wide variety of examples includes package delivery robots, robot waiters in restaurants, …
Visual simultaneous localization and mapping (SLAM) inevitably accumulates drift in mapping and localization due to camera calibration errors, feature matching errors, and other factors. It is challenging to achieve drift-free localization and to obtain an accurate global map.
Publications (5)
DH-LC: Hierarchical Matching and Hybrid Bundle Adjustment Towards Accurate and Robust Loop Closure
Author: Xiongfeng Peng, Zhihua Liu, Qiang Wang, Yun-Tae Kim, Hong-Seok Lee
Published: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Date: 2022-12-26
A Comparison of Modern General-Purpose Visual SLAM Approaches
Author: Alexey Merzlyakov, Steve Macenski
Date: 2021-09-30
Accurate Visual-Inertial SLAM by Feature Re-identification
Author: Xiongfeng Peng, Zhihua Liu, Qiang Wang, Yun-Tae Kim, Myungjae Jeon
Date: 2021-09-27
News (6)
The rapid development of hand-held mobile mapping systems [53] that incorporate recent advances in SLAM [59][63] enables the creation of 3D point clouds and trajectories that are both accurate and precise.
State-of-the-art deep learning-based SLAM methods can be categorized into two classes based on camera pose estimation processes.
Simultaneous Localization and Mapping (SLAM) is extensively used in Extended Reality (XR) Head-Mounted Displays (HMDs), robots, autonomous driving, and various other fields.