Tentative Timeline
- December 2023: Registration opens for all LifeCLEF challenges; registration is free of charge
- 29 February 2024: Competition Start
- 24 May 2024: Competition Deadline
- 31 May 2024: Deadline for submission of working note papers by participants [CEUR-WS proceedings]
- 21 June 2024: Notification of acceptance of working note papers [CEUR-WS proceedings]
- 8 July 2024: Camera-ready deadline for working note papers.
- 9-12 Sept 2024: CLEF 2024, Grenoble, France
All deadlines are at 11:59 PM CET on the corresponding day unless otherwise noted. The competition organizers reserve the right to update the contest timeline if they deem it necessary.
Results
All teams that provided runnable code were scored, and their scores are available on HuggingFace.
Official competition results are listed in the Private and Public leaderboards.
- The 1st place team: upupup (Peng Wang, Yangyang Li, Bao-Feng Tan, Yi-Chao Zhou, Yong Li and Xiu-Shen Wei)
- The 2nd place team: jack-etheredge (Jack Etheredge)
- The 3rd place team: TZTEK
Private Leaderboard Evaluation
(orange = baseline)
Motivation
Creating a robust system for identifying snake species from photographs is crucial for biodiversity research and global health, given the significant impact of venomous snakebite: more than half a million deaths and disabilities every year. Mapping the global distribution of the 4,000+ snake species, and being able to tell them apart in images, improves both epidemiology and treatment outcomes. Machine learning models have already shown promising accuracy, even on long-tailed distributions covering roughly 1,800 species, yet performance in neglected regions remains a challenge. The next step is to test such systems in specific tropical and subtropical countries while taking the medical importance of each species into account, so that machine predictions become more reliable in practice.
Snake species identification is difficult for humans and machines alike: it suffers from high intra-class and low inter-class variance, driven by factors such as location, color, sex, and age. Visual similarity and mimicry between species complicate identification further. Incomplete knowledge of species distributions by country, and images originating from a limited set of locations, add more complexity. Many snake species resemble species from other continents, so knowing the geographic origin of an observation is important for accurate identification. Generalizing across all countries is therefore vital, especially since no single location hosts more than 126 of the roughly 4,000 snake species.
Task Description
The SnakeCLEF challenge aims to be a major benchmark for observation-based snake species identification. The goal of the task is to build a classification model that, for each snake observation (a set of images plus a location), returns a ranked list of predicted species, while minimizing the danger to human life and the waste of antivenom that would result if a bite from the pictured snake were treated according to the top-ranked prediction.
The classification model must fit within a memory-footprint limit and a prediction-time limit on a given HuggingFace server instance (Nvidia T4 small: 4 vCPU, 15 GB RAM, 16 GB VRAM).
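As a rough illustration, such an observation-level prediction step might look like the sketch below; `model.predict_proba` and the country-species mask are placeholder names for this sketch, not part of any official interface.

```python
import numpy as np

def predict_observation(model, images, country, country_species_mask):
    """Rank species for one observation (several images plus a location).

    `model.predict_proba`, `images`, and `country_species_mask` are
    placeholder names for this sketch, not an official interface.
    """
    # Average per-image class probabilities over all images of the specimen.
    probs = np.mean([model.predict_proba(img) for img in images], axis=0)

    # Optionally zero out species never recorded in the observation's
    # country, using the provided country-species mapping.
    mask = country_species_mask.get(country)
    if mask is not None:
        probs = probs * mask

    # Return class indices ranked from most to least likely.
    return np.argsort(probs)[::-1]
```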
Participation requirements
Publication Track
All registered participants are encouraged to submit a working-note paper to peer-reviewed LifeCLEF proceedings (CEUR-WS) after the competition ends.
This paper must provide sufficient information to reproduce the final submitted runs.
Only participants who submitted a working-note paper will be part of the officially published ranking used for scientific communication.
The results of the campaign appear in the working notes proceedings published by CEUR Workshop Proceedings (CEUR-WS.org).
Selected contributions will be invited for publication the following year in Springer's Lecture Notes in Computer Science (LNCS) series.
For detailed instructions, please refer to SUBMISSION INSTRUCTIONS.
A summary of the most important points:
- All participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper.
- Submission of reports is done through EasyChair – please make absolutely sure that the author (names and order), title, and affiliation information you provide in EasyChair match the submitted PDF exactly.
- Deadline for the submission of initial CEUR-WS Working Notes Papers (for the peer-review process): 31 May 2024
- Deadline for the submission of Camera-Ready CEUR-WS Working Notes Papers: 8 July 2024
- Templates are available here
- Working Notes Papers should cite both the LifeCLEF 2024 overview paper and the SnakeCLEF task overview paper; citation information will be added in the Citations section below as soon as the titles have been finalized.
Context
This competition is held jointly as part of the LifeCLEF lab at CLEF 2024 and the FGVC workshop at CVPR 2024.
Data
The development dataset from the previous year will be re-used. It covers 1,784 snake species from around the world, with a minimum of three observations (i.e., sets of images of the same specimen) per species. Additionally, country-species and venomous-species mappings will be provided. The evaluation will be carried out on the same datasets as last year to allow direct comparison. We will also enrich the private test dataset with new data from additional neglected regions to test generalization capabilities; the regions will not be disclosed to participants. Since the competition test set is composed of private images with highly restricted licenses from individuals and natural history museums, it will remain undisclosed, and participants will not have access to this data.
Using additional data or metadata is not permitted!
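For illustration, working with such a dataset might look like the following sketch; the file names and column names (`observation_id`, `image_path`, `class_id`, `venomous`) are assumptions, not the official schema.

```python
import pandas as pd

# File and column names here are assumptions for illustration,
# not the official dataset schema.
meta = pd.read_csv("SnakeCLEF2024-TrainMetadata.csv")

# Several images share one observation_id (the same specimen), so the
# evaluation unit is the observation, not the individual image.
observations = meta.groupby("observation_id")["image_path"].apply(list)

# Venomous-species mapping: species class id -> 0/1 flag.
venomous = (
    pd.read_csv("venomous_status_list.csv")
    .set_index("class_id")["venomous"]
    .to_dict()
)
```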
Image Data (Training set)
Image Data (Validation and Test sets)
Metadata
Evaluation process
This competition provides an evaluation ground for developing methods suitable for snake species recognition and beyond. We want you to evaluate bright new ideas rather than simply finish first on the leaderboard. Thus, as last year, we will award authorship or co-authorship on a journal publication and cover the Open Access fee.
The evaluation process will require you to submit your code on HuggingFace (links will be announced).
Metrics: As last year, we will calculate several metrics. First, we will calculate standard accuracy and macro-averaged F1. In addition, we will calculate the venomous-species confusion error, i.e., the number of venomous samples misclassified as a harmless species divided by the number of venomous samples in the test set.
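As an illustration, these three quantities could be computed as follows; this is a hedged sketch, where `venomous` (a species-id to 0/1 mapping) and the input conventions are assumptions rather than the official evaluation code.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def basic_metrics(y_true, y_pred, venomous):
    """Accuracy, macro-averaged F1, and venomous-species confusion error.

    `venomous` maps a species id to 1 (venomous) or 0 (harmless); the
    I/O conventions are assumptions, not the official evaluation code.
    """
    acc = accuracy_score(y_true, y_pred)
    macro_f1 = f1_score(y_true, y_pred, average="macro")

    v_true = np.array([bool(venomous[y]) for y in y_true])
    pred_harmless = np.array([not venomous[y] for y in y_pred])

    # Venomous samples predicted as a harmless species, divided by the
    # number of venomous samples in the test set.
    confusion_error = float(np.mean(pred_harmless[v_true]))
    return acc, macro_f1, confusion_error
```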
To motivate research in recognition scenarios with uneven costs for different errors, such as mistaking a venomous snake for a harmless one, this year's challenge goes beyond the 0-1 loss common in classification. We make some simplifying assumptions to reduce the complexity of the evaluation: we assume there exists a universal antivenom applicable to all venomous snake bites, and that this antivenom is not harmful when applied to a healthy human. Hence, we penalize mistaking a venomous species for a harmless one more heavily than the reverse. Although this setup is not perfect, it is a first step toward a more realistic evaluation of snakebite treatment.
Let us consider a function \(p\) such that \(p(s)=1\) if species \(s\) is venomous and \(p(s)=0\) otherwise.
For a correct species \(y\) and predicted species \(\hat y\), the loss \(L(y, \hat y)\) is given as follows:
\[L(y, \hat y) = \left\{
\begin{array}{ll}
0 & \text{ if } y = \hat y \\
1 & \text{ if } y \neq \hat y \text{ and } p(y)=0 \text{ and } p(\hat y)=0 \\
2 & \text{ if } y \neq \hat y \text{ and } p(y)=0 \text{ and } p(\hat y)=1 \\
2 & \text{ if } y \neq \hat y \text{ and } p(y)=1 \text{ and } p(\hat y)=1 \\
5 & \text{ if } y \neq \hat y \text{ and } p(y)=1 \text{ and } p(\hat y)=0 \\
\end{array}
\right.
\]
Note: The costs were selected to illustrate a higher cost when a venomous snake is mistaken for a harmless one. We do not claim the selected costs reflect the risks in a practical scenario: practical costs would have to be determined by assessing what exactly follows after the recognition process. One can imagine several aspects, e.g., the health risks of the venom, the cost of the antivenom, and so on.
The challenge metric is the sum of \(L\) over all test observations:
\[ \mathbf{L} = \sum_i L(y_i, \hat y_i) \]
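Read as code, the loss table and its sum could look like the following sketch, with \(p\) passed in as a callable; this is an illustration, not the official scorer.

```python
def loss(y_true, y_pred, p):
    """Per-observation loss L(y, y_hat) from the table above.

    `p` is the venomousness indicator: p(s) == 1 if species s is
    venomous, 0 otherwise. Illustrative sketch, not the official scorer.
    """
    if y_true == y_pred:
        return 0                        # correct species
    if p(y_true) == 0:
        # true species is harmless
        return 1 if p(y_pred) == 0 else 2
    # true species is venomous
    return 2 if p(y_pred) == 1 else 5


def total_loss(y_true_all, y_pred_all, p):
    """Challenge metric: the sum of L over all test observations."""
    return sum(loss(y, y_hat, p) for y, y_hat in zip(y_true_all, y_pred_all))
```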
The second metric combines the overall classification rate (macro F1) with the venomous-species confusion errors. It is a weighted average of the macro F1-score and the complements of the individual confusion-error rates:
\[
M = \frac{w_1 F_1 + w_2 C_{h\rightarrow h} + w_3 C_{h\rightarrow v} + w_4 C_{v\rightarrow h} + w_5 C_{v\rightarrow v}}{\sum_{i=1}^{5} w_i},
\]
where \(w_1=1.0\), \(w_2=1.0\), \(w_3=2.0\), \(w_4=5.0\), and \(w_5=2.0\) are the weights of the individual terms (mirroring the costs of the loss \(L\) above), \(F_1\) is the macro F1-score, and each confusion term is the complement \(C_{a\rightarrow b} = 100\% - P_{a\rightarrow b}\) of the corresponding error rate:
* \(P_{h\rightarrow h}\) is the percentage of harmless species wrongly classified as another harmless species,
* \(P_{h\rightarrow v}\) is the percentage of harmless species wrongly classified as a venomous species,
* \(P_{v\rightarrow h}\) is the percentage of venomous species wrongly classified as a harmless species, and
* \(P_{v\rightarrow v}\) is the percentage of venomous species wrongly classified as another venomous species.
This metric is bounded below by 0% and above by 100%. If \(F_1\) is 100% (every species is classified correctly), every error rate \(P_{a\rightarrow b}\) is zero by definition, so every confusion term equals 100% and the metric reaches its upper bound. Conversely, the metric drops toward its lower bound as more species are misclassified and, in particular, as harmless species are confused with venomous ones and vice versa.
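For concreteness, here is a minimal sketch of this metric in Python, under the stated reading that each confusion term is the complement \(C_{a\rightarrow b} = 100\% - P_{a\rightarrow b}\); the weights are ordered as \((F_1, C_{h\rightarrow h}, C_{h\rightarrow v}, C_{v\rightarrow h}, C_{v\rightarrow v})\), and the `venomous` mapping and function names are illustrative, so the official scorer may differ in detail.

```python
import numpy as np
from sklearn.metrics import f1_score

def metric_m(y_true, y_pred, venomous, w=(1.0, 1.0, 2.0, 5.0, 2.0)):
    """Weighted track metric M, assuming C = 100% - P for each error rate.

    `venomous` maps a species id to 1 (venomous) or 0 (harmless).
    Interpretation sketch, not the official evaluation code.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    v_true = np.array([bool(venomous[y]) for y in y_true])
    v_pred = np.array([bool(venomous[y]) for y in y_pred])
    wrong = y_true != y_pred

    def pct(err_mask, group_mask):
        # Percentage of the group affected by the given error type.
        return 100.0 * err_mask.sum() / max(group_mask.sum(), 1)

    p_hh = pct(wrong & ~v_true & ~v_pred, ~v_true)  # harmless -> other harmless
    p_hv = pct(wrong & ~v_true & v_pred, ~v_true)   # harmless -> venomous
    p_vh = pct(wrong & v_true & ~v_pred, v_true)    # venomous -> harmless
    p_vv = pct(wrong & v_true & v_pred, v_true)     # venomous -> other venomous

    f1 = 100.0 * f1_score(y_true, y_pred, average="macro")
    terms = (f1, 100 - p_hh, 100 - p_hv, 100 - p_vh, 100 - p_vv)
    return sum(wi * ti for wi, ti in zip(w, terms)) / sum(w)
```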
Other Resources
For more SnakeCLEF-related information, please refer to the overview papers from previous editions. You can also check out other competitions from the CLEF-LifeCLEF and CVPR-FGVC workshops.
Organizers
Machine Learning
- Lukas Picek, INRIA--Montpellier, France & Dept. of Cybernetics, FAV, University of West Bohemia, Czechia, lukaspicek@gmail.com
- Marek Hruz, Dept. of Cybernetics, FAV, University of West Bohemia, Czechia, mhruz@ntis.zcu.cz
Herpetology
- Andrew Durso, Department of Biological Sciences, Florida Gulf Coast University, Fort Myers, USA, amdurso@gmail.com
Clinical Expert
- Isabelle Bolon, Institute of Global Health, Department of Community Health and Medicine, University of Geneva, Switzerland