- 3D Reconstruction: turning sensor data (RGB, RGB-D, sonar, laser, etc.) into a 3D model
- SLAM (ou Visual SLAM)
- Simultaneous Localization and Mapping
- Input: a video stream (RGB or RGB-D); output: the 3D reconstruction (the map) and the camera pose at each frame
- SLAM exploits the sequential nature of observations in a robotics setup. It assumes that, instead of an unordered set of images, the observations come from a temporal sequence (aka a video stream).
- While many different SLAM solutions have been proposed using a combination of proprio- and exteroceptive sensors, going forward we are going to consider the case of Monocular SLAM, that is, SLAM using just a single camera.
- Pose: the position and orientation of the camera (expressed in various forms: matrices, vectors, X/Y/Z angles...)
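One common pose representation is a 4x4 homogeneous transformation matrix combining a rotation and a translation. A minimal sketch (the yaw-only rotation and the example point are illustrative assumptions, not from the notes):

```python
import numpy as np

def pose_matrix(yaw, t):
    """Build a 4x4 homogeneous camera pose from a yaw angle (radians)
    and a translation vector t = (x, y, z)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])   # rotation about the Z axis only
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# A pose rotated 90 degrees about Z and shifted 1 m along X:
T = pose_matrix(np.pi / 2, [1.0, 0.0, 0.0])

# Transform a point from camera coordinates to world coordinates:
p_cam = np.array([0.0, 1.0, 0.0, 1.0])   # homogeneous point
p_world = T @ p_cam
```

The same pose can equivalently be stored as a quaternion plus a translation vector, or as Euler angles; the matrix form makes composing poses a simple matrix product.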
- Odometry: in its purest form, odometry estimates the motion of a mobile agent by comparing two consecutive sensor observations, as in laser-based odometry. The work on visual odometry by Nistér et al. extends this to tracking over a number of image frames; however, the focus is still on the motion rather than on the environment representation.
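The idea of chaining consecutive motion estimates can be sketched in 2D: each frame-to-frame comparison yields a small relative transform, and composing them gives the trajectory. The per-frame increments below are made-up placeholders for whatever a real odometry front end would estimate:

```python
import numpy as np

def se2(dx, dtheta):
    """3x3 homogeneous 2D rigid motion: translate dx forward, rotate dtheta."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical relative motions between consecutive frames:
increments = [se2(1.0, 0.0), se2(1.0, np.pi / 2), se2(1.0, 0.0)]

pose = np.eye(3)                    # world pose of the first frame
trajectory = [pose[:2, 2].copy()]   # (x, y) position at each frame
for dT in increments:
    pose = pose @ dT                # compose each relative motion onto the pose
    trajectory.append(pose[:2, 2].copy())
```

Because each increment is estimated independently, small errors compound over the chain; this accumulated drift is exactly what the mapping side of SLAM (e.g. loop closure) is meant to correct.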
- ICP (Iterative Closest Point): aligns two point clouds by repeatedly matching each point to its nearest neighbour in the other cloud and solving for the rigid transform that minimizes the distance between matches
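A minimal ICP sketch in 2D, assuming brute-force nearest-neighbour matching and the SVD (Kabsch) solution for the best rigid transform; real implementations use k-d trees, outlier rejection, and convergence checks:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B
    (Kabsch / SVD method)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Align src to dst: match each source point to its nearest destination
    point, fit the best rigid transform, apply it, repeat."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for tiny clouds)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: recover a known rotation + translation.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([0.5, -0.2])
R, t = icp(src, dst)
```

In a SLAM context, ICP is typically used to register consecutive laser/depth scans, yielding the relative motion increments mentioned under odometry.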
- Capture
- Pipeline
- Structure from Motion (SfM): SfM deals with an unordered set of images to recover a model of the environment as well as the camera locations. A good example of SfM is “Building Rome in a Day” by Agarwal et al.
- Sensors: sensors can be divided into two categories based on whether they measure the outside world or measure themselves, that is, the internal state of the system.
- Proprioceptive (from Latin proprius meaning ‘own’ + receptive): IMUs, gyroscopes, compasses. These sensors do not measure any aspect of the environment and are therefore only useful for recovering an estimate of the robot’s trajectory.
- Exteroceptive: Cameras (Mono, Stereo, More-o), Lasers, LIDARs, RGB-D Sensors, Wifi receivers, Light intensity, etc. Anything that can measure some aspect of the outside world that changes with the position/orientation of the robot can theoretically be used as a sensor for SLAM.
References
https://towardsdatascience.com/slam-in-the-era-of-deep-learning-e8a15e0d16f3