
FungiCLEF 2025


Schedule

  • December 2024: Registration opens for all LifeCLEF challenges; registration is free of charge
  • 7 March 2025: Competition Start
  • 12 May 2025: Competition Deadline
  • 31 May 2025: Deadline for submission of working note papers by participants [CEUR-WS proceedings]
  • 23 June 2025: Notification of acceptance of working note papers [CEUR-WS proceedings]
  • 30 June 2025: Camera-ready deadline for working note papers.
  • 9-12 September 2025: CLEF 2025, Madrid, Spain

All deadlines are at 11:59 PM CET on the corresponding day unless otherwise noted. The competition organizers reserve the right to update the contest timeline if they deem it necessary.

Motivation

Automatic recognition of fungi species aids mycologists, citizen scientists, and nature enthusiasts in identifying species in the wild while supporting the collection of valuable biodiversity data. To be effective on a large scale, such as in popular citizen science projects, it needs to predict species efficiently with limited resources and to handle many classes, some of which have only a few recorded observations. Additionally, rare species are often excluded from training, making it difficult for AI-powered tools to recognize them. Based on our measurements, about 20% of all verified observations (20,000) involve rare or under-recorded species, highlighting the need to identify these species accurately.

Task Description

The FungiCLEF Challenge focuses on few-shot recognition of fungi species using real-world observational data. Each observation includes multiple photographs of the same specimen, along with metadata (e.g., location, timestamp, substrate, habitat, toxicity), satellite imagery, and meteorological variables.

The goal of the challenge is to develop a classification model capable of returning a ranked list of predicted species for each observation. A key challenge lies in handling a large number of species, many of them rare and under-recorded taxa with very few training examples.

Input: A list of fungi observations.
Output: A list of Top-k predicted fungi species from a predefined set of classes.
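Below is a minimal sketch of how per-image predictions might be turned into an observation-level Top-k ranking. It is not the official submission pipeline; the column names ("observation_id", "probs") and the number of classes are illustrative assumptions.

```python
import numpy as np
import pandas as pd

NUM_CLASSES = 3000  # hypothetical size of the predefined species set
TOP_K = 5

# Fake per-image predictions: each row corresponds to one photograph of some observation.
rng = np.random.default_rng(0)
per_image = pd.DataFrame({
    "observation_id": [101, 101, 102, 103, 103, 103],
    "probs": [rng.dirichlet(np.ones(NUM_CLASSES)) for _ in range(6)],
})

def top_k_species(group: pd.DataFrame, k: int = TOP_K) -> list[int]:
    # Average class probabilities across all photographs of the same specimen,
    # then keep the k highest-scoring species indices in ranked order.
    mean_probs = np.mean(np.stack(group["probs"].to_list()), axis=0)
    return np.argsort(mean_probs)[::-1][:k].tolist()

ranked = per_image.groupby("observation_id").apply(top_k_species)
print(ranked.head())
```

Averaging probabilities over the photographs of one specimen is only one possible aggregation strategy; any method that yields a ranked species list per observation fits the task format.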

Participation requirements

Publication Track

All registered participants are encouraged to submit a working-note paper to the peer-reviewed LifeCLEF proceedings (CEUR-WS) after the competition ends.
This paper must provide sufficient information to reproduce the final submitted runs.

Only participants who submit a working-note paper will be included in the officially published ranking used for scientific communication.

The results of the campaign appear in the working notes proceedings published by CEUR Workshop Proceedings (CEUR-WS.org).
Selected contributions will be invited for publication in the Springer Lecture Notes in Computer Science (LNCS) series the following year.

Data

The training and validation dataset is built from fungi observations (i.e., multiple photographs of the same specimen and additional observation metadata) submitted to the Atlas of Danish Fungi before the end of 2023, which were labeled by mycologists. Beyond photographs, the dataset includes a wealth of supplementary data, such as satellite imagery, meteorological records, and structured metadata (e.g., timestamps, GPS coordinates, substrate details, habitat information, and toxicity status). The vast majority of observations have been annotated with most of these attributes.
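As a hedged sketch of how this observation-centric structure might be handled, the snippet below groups the multiple photographs of each specimen into a single record with its metadata. The file name "metadata.csv" and the column names ("observation_id", "image_path", "habitat", "substrate", "species") are illustrative assumptions, not the dataset's actual schema.

```python
import pandas as pd

meta = pd.read_csv("metadata.csv")  # assumed layout: one row per photograph

# Collapse per-photograph rows into one record per observation, keeping the
# image paths as a list and the observation-level metadata once.
observations = (
    meta.groupby("observation_id")
        .agg(image_paths=("image_path", list),
             habitat=("habitat", "first"),
             substrate=("substrate", "first"),
             species=("species", "first"))
)
print(observations.head())
```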

The FungiTastic dataset paper describes the data used.

More information is available on the Kaggle competition platform.

Evaluation process

The evaluation metric for this competition is the standard Recall@k (equivalent to top-k accuracy in this single-label setting), defined as the proportion of instances where the true label is within the top \( k \) predicted labels:

\[
\text{Recall@}k = \frac{\sum_{i=1}^{N} \mathbb{1}\left(y_i \in \hat{Y}_i^k\right)}{N},
\]

where \( N \) is the total number of samples, \( y_i \) is the true label for the \( i \)-th sample, \( \hat{Y}_i^k \) is the set of top \( k \) predicted labels for the \( i \)-th sample, and \( \mathbb{1}(\cdot) \) is the indicator function.

We set \( k=5 \) for the main evaluation metric.
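The snippet below is a minimal sketch of the Recall@k definition above; the inputs are illustrative and this is not the official evaluation code.

```python
import numpy as np

def recall_at_k(y_true, y_pred_topk, k=5):
    """Fraction of samples whose true label appears in their top-k predictions."""
    hits = [yt in preds[:k] for yt, preds in zip(y_true, y_pred_topk)]
    return float(np.mean(hits))

# Example: 3 observations with ranked predictions; the true label is found
# in the top 5 for the first and third observations only.
y_true = [7, 42, 3]
y_pred_topk = [[7, 1, 2, 3, 4], [5, 6, 8, 9, 10], [3, 7, 42, 1, 2]]
print(recall_at_k(y_true, y_pred_topk, k=5))  # 2 hits out of 3 -> 0.666...
```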

Organizers

Credits

ZCU-FAV · CTU in Prague · University of Copenhagen · PiVa AI · Inria

Acknowledgement

TAČR · MAMBO