Lunar exploration is rapidly advancing, driven by growing international efforts and initiatives. Surface rovers play a crucial role in these missions, supporting scientific research and resource prospecting. Autonomous rover navigation in the lunar environment relies heavily on visual navigation techniques, such as Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM). These methods enable rovers to estimate their position and map their surroundings without external references, a prerequisite for autonomous operations.
This paper describes the Interactive Mission Modeling, Visualization/Validation (IMMV2) tool, a high-fidelity Mission Digital Twin (MDT) developed within the Telespazio eXtended Reality Lab (XR-Lab) that enables the testing and validation of navigation algorithms in support of mission design and modeling. The Visual Scenario Generator (VSG) module of the IMMV2 provides a realistic representation of the lunar surface, incorporating detailed illumination models and synthetic terrain images derived from Lunar Reconnaissance Orbiter (LRO) mission data. The original 5 m/pixel Digital Elevation Model (DEM) is augmented to a resolution of 10 cm/pixel to enable rover-scale simulations: fractal noise maps replicate surface roughness, high-resolution terrain textures add realism, and synthetically generated rocks and craters simulate potential hazards (Figure 1, right). Additionally, the simulation includes configurable sensor models, such as stereo cameras and LiDAR, whose user-customizable parameters allow different setups to be tested. The MDT also allows sensors to be switched on and off in real time to simulate unexpected failures and test the responsiveness of the user's asset.
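As an illustration of this augmentation step, the following is a minimal sketch of a fractal-noise DEM upsampler. The function name augment_dem, the octave count, the amplitude values, and the SciPy-based implementation are illustrative assumptions, not the IMMV2 code.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def augment_dem(dem_coarse, factor=50, octaves=5, roughness_amp=0.05,
                rng=np.random.default_rng(0)):
    """Upsample a coarse DEM (e.g. 5 m/pixel -> 10 cm/pixel for factor=50)
    and add fractal (fBm-like) noise to emulate rover-scale roughness.
    All parameter values are illustrative placeholders."""
    # Bilinear upsampling of the base terrain.
    fine = zoom(dem_coarse, factor, order=1)
    # Sum band-limited noise octaves: each octave doubles the spatial
    # frequency and halves the amplitude (persistence = 0.5).
    noise = np.zeros_like(fine)
    for o in range(octaves):
        sigma = factor / (2 ** o)           # coarser octaves -> stronger blur
        amp = roughness_amp * (0.5 ** o)    # amplitude decays per octave
        layer = gaussian_filter(rng.standard_normal(fine.shape), sigma)
        layer /= max(layer.std(), 1e-9)     # normalize before scaling
        noise += amp * layer
    return fine + noise
```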
The study presents a rover application whose model is based on NASA's Perseverance rover, featuring a six-wheeled kinematic configuration with wheel slippage and steering constraints (Figure 1, left). The reference navigation path is generated using an A* algorithm that optimizes for minimum distance while avoiding areas with slopes greater than 20°. The VO component estimates the rover's motion by tracking visual features across sequential images (Figure 2) [Ref.1, Ref.2]. Feature extraction is performed using the Oriented FAST and Rotated BRIEF (ORB) technique [Ref.3], while feature tracking relies on Hamming-distance comparisons. In parallel, the SLAM algorithm constructs a map of the environment and refines the rover's estimated trajectory by integrating newly detected and previously observed features. A hazard detection module, employing a YOLO deep neural network [Ref.4] trained on a synthetic, auto-generated dataset of rocks and craters produced by IMMV2, identifies obstacles such as rocks and boulders, enabling autonomous path adjustments. The IMMV2 tool streamlines the training and validation of these object-detection models by providing automatically labeled datasets.
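As an illustration of the VO front end described above, the snippet below sketches ORB feature extraction and brute-force Hamming-distance matching with OpenCV. The parameter choices (e.g., nfeatures) and the helper name track_features are assumptions for illustration, not the algorithm configuration used in this work.

```python
import cv2

orb = cv2.ORB_create(nfeatures=2000)    # feature budget is a placeholder
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_features(img_prev, img_curr):
    """Detect ORB keypoints in two consecutive grayscale frames and match
    their binary descriptors by brute-force Hamming distance."""
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Matched pixel coordinates feed the motion-estimation stage,
    # e.g. essential-matrix recovery from the calibrated camera pair.
    pts_prev = [kp1[m.queryIdx].pt for m in matches]
    pts_curr = [kp2[m.trainIdx].pt for m in matches]
    return pts_prev, pts_curr
```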
Leveraging this high-fidelity simulation environment, the study evaluates the performance of visual navigation algorithms under realistic lunar conditions and lays the foundations for the integration of inertial sensors and further external observables (e.g., a GNSS-like lunar navigation constellation). The customizable environment ensures that VO and SLAM techniques can be tested to support resilient, long-duration autonomous rover operations in complex lunar landscapes. In conclusion, this work evaluates, in a high-resolution simulated environment, the robustness and accuracy of a novel multi-sensor navigation strategy that integrates vision-based algorithms with wheel odometry and inertial sensors, reducing reliance on ground control and enabling safe exploration of the lunar surface.
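As a minimal illustration of such a multi-sensor strategy, the sketch below fuses wheel odometry (prediction) with a vision-based pose fix (update) in a planar extended Kalman filter. The state layout, the noise values, and the class name PoseEKF are hypothetical placeholders and do not reflect the specific filter formulation used in this work.

```python
import numpy as np

class PoseEKF:
    """Planar EKF sketch: wheel odometry drives the prediction, a
    vision-based (VO/SLAM) pose fix drives the update. State: [x, y, yaw]."""
    def __init__(self):
        self.x = np.zeros(3)            # [x, y, yaw]
        self.P = np.eye(3) * 0.1        # state covariance

    def predict(self, v, w, dt, q_odo=0.02):
        """Propagate with wheel-odometry speed v and yaw rate w."""
        th = self.x[2]
        self.x += dt * np.array([v * np.cos(th), v * np.sin(th), w])
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + np.eye(3) * q_odo * dt

    def update(self, z_vo, r_vo=0.05):
        """Correct with an absolute pose fix from the visual pipeline.
        Yaw wrap-around handling is omitted for brevity."""
        H = np.eye(3)                   # the fix observes the full pose
        S = H @ self.P @ H.T + np.eye(3) * r_vo
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z_vo - self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```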