2nd RAMI

Cascade and Field Campaigns

for Aerial Robots

It is our pleasure to announce the second edition of RAMI evaluation campaigns for aerial robots, to be held in conjunction with the International Conference on Unmanned Aircraft Systems (ICUAS) 2023. The competition is inspired by a real use case:

 

Imagine a large, ageing industrial refinery containing numerous pipelines that are subject to high thermal and mechanical stress, where leakage due to defects could create a toxic environment. To keep production running and avoid compromising the operators' health, you want to carry out frequent routine inspections, always checking the same critical places.

 

This competition aims to test the skills of the proposed aerial robotic solutions in tasks such as precise autonomous navigation and automatic defect detection with advanced AI algorithms, while operating in a GNSS-denied environment.

 

The competition is open to teams from all around the world and is led by CATEC (Advanced Center for Aerospace Technologies, Seville, Spain), one of Europe's reference research centres for aerial robotic technologies.

In this edition, the competition will take place in two subsequent stages. First, the cascade evaluation campaign will run from January to April 2023, where teams will develop and test their algorithms on datasets provided by CATEC. Then, the teams with the best performances will be allowed to participate in the field evaluation campaign, which will take place at the ICUAS conference venue in June 2023 in Warsaw, Poland.

Please note that teams must first register through the ICUAS’23 UAV Competition website and participate in the cascade evaluation campaign.

CASCADE EVALUATION CAMPAIGN

The cascade competition will be evaluated fully virtually using data generated by the competition organizers in previous RAMI campaigns.

There will be two different challenges (Functionality Benchmarks, or FBMs):

  • FBM1: precise navigation without GNSS, since inspection and maintenance (I&M) activities may take place in environments with poor GNSS coverage, or even indoors.
  • FBM2: automatic detection of defects using advanced AI algorithms, which is important for inspectors facing considerable amounts of data to review.

FBM1

This FBM assesses the accuracy of a localization system for the autonomous navigation of aerial robots using only onboard sensors. The evaluation compares the team’s localization solution against a precise motion capture system. Teams will be provided with several sample flight datasets for testing their solutions, with the aerial robot performing specific trajectories. Scoring is based on the Root Mean Squared Error (RMSE) of the submitted trajectory with respect to the ground-truth trajectory the aerial robot followed in an evaluation dataset not previously released to the teams. To obtain this metric, each team’s solution must produce a .txt file with pose estimates at a minimum of 1 Hz (at least 10 Hz recommended), with one line per pose. Each pose is defined by the timestamp, the translation (x, y, z), and the orientation as a quaternion (x, y, z, w), as in the example below:

#timestamp tx ty tz qx qy qz qw

1403636580.013555527 0.0125827899 -0.0015615102 -0.0401530091 -0.0513115190 0.8092916900 0.0008562779 0.5851609600
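As an illustration only (the official scoring tool is the organizers'; the index-based pose association below is an assumption, since the evaluator may instead match poses by timestamp), a minimal Python sketch of parsing such a file and computing the translation RMSE could look like:

```python
import math

def load_trajectory(path):
    """Parse a pose file with lines '#timestamp tx ty tz qx qy qz qw'.

    Returns a list of (timestamp, (tx, ty, tz)); the quaternion is dropped
    because this sketch scores translation error only.
    """
    poses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            t, tx, ty, tz, qx, qy, qz, qw = (float(v) for v in line.split())
            poses.append((t, (tx, ty, tz)))
    return poses

def translation_rmse(estimated, ground_truth):
    """RMSE over the translation components, assuming the two trajectories
    are already associated pose-by-pose (index alignment is an assumption)."""
    sq_sum = 0.0
    for (_, e), (_, g) in zip(estimated, ground_truth):
        sq_sum += sum((a - b) ** 2 for a, b in zip(e, g))
    return math.sqrt(sq_sum / len(estimated))
```

Such a self-check lets teams verify that their output file parses cleanly and that their estimator's error is in a sensible range before uploading.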

FBM2

This FBM assesses the performance of a surface defect detection system. Teams will be provided with a publicly available dataset for training their algorithms. The assessment will be based on an offline analysis of images captured by the aerial robot, showing several surface defects artificially placed throughout the testing scenario. The Critical Success Index (CSI), also known as Threat Score, will be used to assess each team's performance, comparing the results against ground-truth labels. Teams must provide a .txt file with one line per detection, containing the image name and the detection coordinates in the image plane, as in the following example of a single detection:

#image_name left top right bottom

frame0005.jpg 213 144 315 172
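As a sketch of how such detection files might be scored (the official matching rule is defined in RAMI's Evaluation Plan; the greedy IoU matching and the 0.5 threshold here are assumptions), the Critical Success Index TP / (TP + FP + FN) over boxes in (left, top, right, bottom) format could be computed like this:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (left, top, right, bottom)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def critical_success_index(detections, ground_truth, iou_thr=0.5):
    """CSI = TP / (TP + FP + FN). A detection is a true positive when it
    overlaps a not-yet-matched ground-truth box with IoU >= iou_thr
    (the threshold and greedy matching are assumptions of this sketch)."""
    unmatched = list(ground_truth)
    tp = fp = 0
    for det in detections:
        best = max(unmatched, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= iou_thr:
            unmatched.remove(best)  # each ground-truth box is matched once
            tp += 1
        else:
            fp += 1
    fn = len(unmatched)  # ground-truth defects that no detection covered
    return tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
```

Note that CSI, unlike plain accuracy, ignores true negatives, which makes it a natural fit for sparse defects scattered over many defect-free images.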

To ensure equal conditions for every team, a Docker image will be provided with the datasets, a detailed description of how the results must be delivered, and a participation example for each FBM. This participation example will contain a sample solution, its mandatory running script, and a sample result file.

Each team must install its own dependencies and its solution, including every file needed to run it, and the solution must write its results to the indicated path in the format described above. This Docker image must then be uploaded so that the organizers can run the competitor's solution on the final evaluation data.
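To make the packaging step concrete, a team's image might be built on top of the provided one roughly as follows. This is a hypothetical sketch only: the base image name, paths, and `run.sh` script name are assumptions, and the official Docker documents in the “More Information” box are authoritative.

```dockerfile
# Hypothetical sketch -- base image, paths, and script name are assumptions,
# not the official template provided by the organizers.
FROM python:3.10-slim
WORKDIR /solution
COPY . /solution
# Team-specific dependencies
RUN pip install --no-cache-dir -r requirements.txt
# run.sh reads the evaluation dataset and writes the result .txt
# file to the path indicated by the organizers
CMD ["./run.sh"]
```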

For more information about the evaluation, please refer to RAMI's Evaluation Plan (section 4.2). For details on how to provide the data and on the general use of the Docker image, please refer to the corresponding documents in the “More Information” box.

IMPORTANT NOTE FOR CASCADE COMPETITION

Since this is a joint event with the ICUAS’23 UAV Competition, team performance, and hence access to the field evaluation campaign, is not measured solely by the results of the cascade evaluation campaign. There is also an exploration benchmark based on a simulation environment (Gazebo) which is not fully part of RAMI. Teams are strongly encouraged to visit the ICUAS’23 UAV Competition website for additional information. RAMI’s FBMs correspond to benchmarks 2 and 3 in the ICUAS competition.

FIELD EVALUATION CAMPAIGN

This second RAMI Field Campaign for Aerial Robots is organised on the basis of the results from the cascade campaign, where teams propose different algorithms for automatic crack detection with AI and for sensor fusion enabling precise robot localization without GNSS. The best teams will be announced, and they will qualify to take part in the field competition at the ICUAS conference venue.

The competition will be evaluated according to two functionality benchmarks (FBMs) and one task benchmark (TBM).

  • FBM-1: precise navigation without GNSS. The aerial robot has to accurately estimate its current state while autonomously navigating through the indoor testbed, using any type of onboard sensors except GNSS devices or magnetometers.
  • FBM-2: automatic detection of defects. The team has to automatically detect defects through offline analysis of the images gathered during the real flights.
  • TBM-1: punctual inspection in difficult-to-access areas. The aerial robot must be able to autonomously navigate unknown areas, find and scan points of interest, and detect objects of interest (defects), while maintaining a minimum distance from the obstacles placed around the competition arena.

To ensure equal conditions for every team, instead of asking teams to develop their own aerial robotic systems, the competition organizers will provide one capable of autonomously flying indoors. The competition platform is a standard quadrotor frame equipped with a ROS-capable Intel NUC computer, a Pixhawk flight controller connected to the NUC via the MAVLink interface, an IMU, and an Intel RealSense depth camera. Before the finals start, the organizers plan a 1-day integration workshop aimed at familiarizing teams with the arena, the UAVs, and the competition schedule and procedures.

For more information about the evaluation, please refer to RAMI's Evaluation Plan.

Will your system be able to carry out the inspection successfully and detect possible defects?

Important dates

January 23: Cascade competition kickoff

March 1: Team registration closes on the ICUAS’23 website

April 1: Cascade evaluations start

April 15: Deadline to upload solutions, cascade evaluations end

April 26: Results and finalists announced

June 6-9: Field competition at ICUAS’23

Previous RAMI campaigns