1st RAMI Cascade Campaign for Aerial Robots

It is our pleasure to announce the first edition of the METRICS RAMI competition for aerial robots, to be held in conjunction with IROS 2021 (September 27th – October 1st). The competition will take place entirely virtually, using data generated during previous RAMI campaigns. It is open to everyone around the world and is led by CATEC (Advanced Center for Aerospace Technologies, Seville, Spain), one of the reference research centres in Europe devoted to aerial robotic technologies.

The RAMI competition addresses Inspection and Maintenance (I&M) tasks performed by aerial and underwater robots, which are involved in the most promising applications in this sector given the risks and costs of work at height and of underwater inspections performed by human operators. This competition will focus on aerial robots only, and the evaluation will mainly involve tasks related to autonomous navigation and data acquisition for I&M purposes.

There will be two different challenges (Functionality Benchmarks, or FBMs), and both will be evaluated virtually using data generated by the competition organizers:

FBM1: precise navigation without GNSS, since I&M activities may take place in environments with poor GNSS coverage, or even indoors.

  • This FBM assesses the accuracy of a localization system for the autonomous navigation of aerial robots using only onboard sensors. The evaluation compares each team's localization solution against a precise motion capture system. Teams will be provided with several sample flight datasets, in which the aerial robot performs specific trajectories, for testing their solutions. Scoring is based on the Root Mean Squared Error (RMSE) of the submitted trajectory with respect to the ground truth trajectory the aerial robot followed in an evaluation dataset not previously released to the teams. To compute this metric, each team's solution must produce a .txt file with pose estimates at a minimum of 1 Hz (at least 10 Hz recommended), one line per pose. Each pose consists of the timestamp, the translation (x, y, z) and the orientation as a quaternion (x, y, z, w), as in the example below:

#timestamp tx ty tz qx qy qz qw

1403636580.013555527 0.0125827899 -0.0015615102 -0.0401530091 -0.0513115190 0.8092916900 0.0008562779 0.5851609600
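As an illustration of how a submission in this format could be scored (the official evaluation is the organizers'; function names here are hypothetical, and exact timestamp matching between estimate and ground truth is assumed), the translational RMSE can be computed like this:

```python
import math

def load_poses(path):
    """Parse a pose file with lines '#timestamp tx ty tz qx qy qz qw'.
    Returns {timestamp: (tx, ty, tz)}; orientation is ignored for the
    translational RMSE, and '#'-comment lines are skipped."""
    poses = {}
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            fields = line.split()
            poses[fields[0]] = tuple(float(v) for v in fields[1:4])
    return poses

def rmse(est, gt):
    """Root Mean Squared translational Error over timestamps present
    in both the estimated and ground-truth trajectories."""
    common = sorted(set(est) & set(gt))
    if not common:
        raise ValueError("no matching timestamps between trajectories")
    sq_errors = [sum((e - g) ** 2 for e, g in zip(est[t], gt[t]))
                 for t in common]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

In practice an evaluator would typically associate poses by nearest timestamp within a tolerance rather than requiring exact matches; the sketch above keeps the matching trivial to stay focused on the metric itself.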

FBM2: automatic detection of defects using advanced AI algorithms, which is important when inspectors face considerable amounts of data to review.

  • This FBM assesses the performance of a surface defect detection system. Teams will be provided with a publicly available dataset for training their algorithms. The assessment is an offline analysis of images captured by the aerial robot, which show several surface defects artificially placed in the testing scenario. The Critical Success Index (also known as Threat Score) will be used to assess each team's performance, comparing the results against ground truth labels. A second .txt file must be provided with one line per detection, containing the image name and the detection coordinates in the image plane, as in the following single-detection example:

#image_name left top right bottom

frame0005.jpg 213 144 315 172
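To make the metric concrete (this is a sketch, not the organizers' scoring code; the IoU threshold and the greedy one-to-one matching are assumptions), the Critical Success Index is TP / (TP + FP + FN), where a detection counts as a true positive if it overlaps a ground-truth box sufficiently:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (left, top, right, bottom)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def csi(detections, ground_truth, iou_thr=0.5):
    """Critical Success Index = TP / (TP + FP + FN).
    detections, ground_truth: {image_name: [boxes]}.
    Each ground-truth box can be matched by at most one detection (greedy)."""
    tp = fp = fn = 0
    for img in set(detections) | set(ground_truth):
        remaining = list(ground_truth.get(img, []))
        for det in detections.get(img, []):
            best = max(remaining, key=lambda g: iou(det, g), default=None)
            if best is not None and iou(det, best) >= iou_thr:
                tp += 1
                remaining.remove(best)  # each GT box matched once
            else:
                fp += 1
        fn += len(remaining)  # unmatched ground-truth boxes are misses
    total = tp + fp + fn
    return tp / total if total else 1.0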

For more information about the evaluation, please refer to section 4.2 of RAMI's Evaluation Plan.


To ensure equal conditions for every team, a Docker container will be provided containing:

  • 3 rosbags with data from different sensors, such as RealSense D435 images and IMU measurements, plus the ground truth, for FBM1;

  • the defect dataset (training + evaluation data) for FBM2;

  • a detailed description of how the results must be delivered;

  • an example of participation for each Functionality Benchmark.

This participation example will contain a sample solution, its mandatory running script, and a sample result file.

Each team must install its own dependencies and its solution, with every file needed to run it, inside the container; the solution must write its results to the indicated path in the format described above. The resulting Docker image must then be uploaded so that the competitor's solution can be run against the new final evaluation data.
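As a sketch of the output step only (the function name, path, and input layout are illustrative, not part of the competition specification), a solution could write its FBM1 result file in the required format like this:

```python
def write_pose_file(path, poses):
    """Write an FBM1 result file: a header line followed by
    '#timestamp tx ty tz qx qy qz qw', one line per pose.
    poses: iterable of (timestamp_str, (tx, ty, tz), (qx, qy, qz, qw))."""
    with open(path, 'w') as f:
        f.write('#timestamp tx ty tz qx qy qz qw\n')
        for ts, t, q in poses:
            f.write(f'{ts} {t[0]:.10f} {t[1]:.10f} {t[2]:.10f} '
                    f'{q[0]:.10f} {q[1]:.10f} {q[2]:.10f} {q[3]:.10f}\n')
```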

Regarding the provided image specs, the chosen OS is Ubuntu 18.04 Bionic Beaver with ROS Melodic, CUDA 11.2, cuDNN 8 and darknet installed, containing the aforementioned information. If you plan to use machine learning algorithms that take advantage of CUDA libraries, you must have at least a GeForce 400 Series NVIDIA GPU so that the NVIDIA driver (version 460.32.03) works with the Docker image.

To encourage participation, results of the challenges can be uploaded weekly during the last month of the validation stage. These results will be evaluated and posted to an online scoreboard showing each team's current position.

The documents containing the detailed description of the 1st RAMI Cascade Campaign and its setup, basic usage of Docker, as well as hardware and software requirements, are attached in the "More Information" box.


If you are interested in participating, please fill in this form.

Important dates:

Call for interest deadline: July 10th

Validation stage: July 10th - September 26th

Competition stage: September 27th – 29th

Awards ceremony: September 30th

Photo: © Dose Media