METRICS concept: competitions as evaluation campaigns

METRICS competitions’ main features

Drawing inspiration from past and current competition activities, and building on a consortium of experts in robotics competitions and metrology, METRICS organizes competitions as realistic yet reproducible and objective evaluation campaigns, both in field conditions and on datasets. It aims to structure the European robotics and AI community around the four priority areas in a sustainable way.

METRICS competitions have the following main features:

  • Scientific: While preserving the demonstration aspect typically associated with competitions, METRICS competitions are based on the scientific criteria of objectivity, repeatability and reproducibility;
  • Benchmark based: Robots are evaluated through benchmarks: they perform well-specified tests in realistic environments or on datasets, and their performance is assessed by applying quantitative metrics;
  • Modular: METRICS comprises two categories of benchmarks: Functionality Benchmarks (FBMs), which focus on specific capabilities required for the target application, and Task Benchmarks (TBMs), which combine multiple functionalities for the execution of complex activities (a sketch of this structure follows the list);
  • Periodical: METRICS competitions are organized as recurring events, each time offering a similar evaluation framework (similar testbeds, similar testing datasets, the same evaluation tools, etc.). This enables the monitoring of technological progress across evaluation campaigns;
  • Structured: The competition is structured to optimize effort and maximize impact: as an event for public dissemination (demonstration value), as a matchmaking event connecting participants with complementary competencies (e.g., a research group and a company), and as a scientific endeavour (providing the scientific community with a stable set of benchmarking experiments, which enables objective comparison of research results and can act as the seed for the definition of standards);
  • Synergic: METRICS builds on the well-established framework originally created by RoCKIn and subsequently validated, refined and extended by RockEU2 and SciRoc. METRICS uses the same methodological foundation and practical experience underpinning the successful ERL competition, enabling fruitful synergies between the ERL and METRICS;
  • Open: Through its partners, METRICS creates a network that stimulates and supports the engagement of end users and industry in the design, implementation and evaluation of robotic benchmarks. METRICS will produce and make publicly available high-quality evaluation tools and annotated datasets that research and industry can use to develop and fine-tune their own algorithms, systems and products. Existing and prospective actors gain access to difficult-to-obtain data with associated ground truth and to validated evaluation tools (a second sketch, after the list, illustrates this use). Importantly, these METRICS by-products benefit the competition and promote its long-term sustainability: users of the METRICS open data and tools will naturally be inclined to participate in the competitions, creating a virtuous circle that enables the competitions' success.
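
To make the FBM/TBM structure concrete, here is a minimal Python sketch of how a Task Benchmark could aggregate the quantitative scores of the Functionality Benchmarks it builds on. All class names, metrics, and the mean-based aggregation are illustrative assumptions, not the actual METRICS scoring rules.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class FunctionalityBenchmark:
    """FBM: evaluates one specific capability with a quantitative metric."""
    name: str
    metric: Callable[[Dict], float]  # maps a test log to a score in [0, 1]

    def evaluate(self, test_log: Dict) -> float:
        return self.metric(test_log)


@dataclass
class TaskBenchmark:
    """TBM: combines several functionalities into one complex activity."""
    name: str
    functionalities: List[FunctionalityBenchmark]

    def evaluate(self, test_log: Dict) -> float:
        # Illustrative aggregation: mean of the per-functionality scores.
        scores = [fbm.evaluate(test_log) for fbm in self.functionalities]
        return sum(scores) / len(scores)


# Hypothetical example: a navigation TBM built from two FBMs.
localisation = FunctionalityBenchmark(
    "localisation",
    metric=lambda log: max(0.0, 1.0 - log["mean_pose_error_m"]),
)
obstacle_avoidance = FunctionalityBenchmark(
    "obstacle_avoidance",
    metric=lambda log: 1.0 - log["collisions"] / log["obstacles"],
)
navigation = TaskBenchmark("navigation", [localisation, obstacle_avoidance])

print(navigation.evaluate(
    {"mean_pose_error_m": 0.2, "collisions": 1, "obstacles": 10}
))  # -> 0.85
```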
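
As an illustration of how the open annotated datasets might be consumed, the following sketch scores a hypothetical localisation output against ground-truth poses. The file format, field names and the mean-error metric are assumptions made for illustration; the actual METRICS datasets and evaluation tools define their own formats.

```python
import math


def load_ground_truth(path):
    """Parse a hypothetical annotation file: one 'frame_id x y' row per line."""
    poses = {}
    with open(path) as f:
        for line in f:
            frame_id, x, y = line.split()
            poses[frame_id] = (float(x), float(y))
    return poses


def mean_position_error(predictions, ground_truth):
    """Mean Euclidean distance between predicted and annotated 2-D poses."""
    errors = [
        math.dist(predictions[frame], pose)
        for frame, pose in ground_truth.items()
        if frame in predictions
    ]
    return sum(errors) / len(errors)


# Hypothetical usage: compare a system's output against the annotations.
# gt = load_ground_truth("warehouse_run_01.txt")
# print(mean_position_error(my_system_output, gt))
```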