Motivation
The MEDVQA-GI challenge, now in its third iteration, builds on the insights and experience gained over the previous two years. This year, the challenge focuses on integrating Visual Question Answering (VQA) with synthetic gastrointestinal (GI) data, with the goal of improving both diagnostic accuracy and the learning algorithms behind it. Participants will develop algorithms that interpret and answer questions about synthetic GI images, create advanced synthetic images that closely mimic real diagnostic visuals in detail and variability, and evaluate the effectiveness of VQA techniques on both synthetic and real GI data. By leveraging state-of-the-art artificial intelligence methods, the challenge aims to advance synthetic medical image generation and VQA technology and, ultimately, to improve diagnostic processes and patient outcomes in the gastrointestinal field.
Task Description
We define two subtasks for this year's challenge.
Subtask 1: Algorithm Development for Question Interpretation and Response
This subtask asks participants to build algorithms that can accurately interpret and respond to questions pertaining to gastrointestinal (GI) images. This involves understanding the context and details within the images and providing precise answers that would assist in medical diagnostics.
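As a purely illustrative sketch (not an official baseline), the snippet below shows how an off-the-shelf vision-language model could be queried for this kind of VQA task. It assumes the Hugging Face transformers library and the public Salesforce/blip-vqa-base checkpoint; the image path and question are hypothetical, and a competitive submission would likely fine-tune such a model on the challenge training data.

    from PIL import Image
    from transformers import BlipProcessor, BlipForQuestionAnswering

    # Assumption: a general-purpose VQA checkpoint, not tuned for GI data.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
    model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

    # Hypothetical example image and question.
    image = Image.open("gi_image.jpg").convert("RGB")
    question = "How many polyps are visible in the image?"

    # Encode the image/question pair and generate a short free-text answer.
    inputs = processor(image, question, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=10)
    print(processor.decode(output_ids[0], skip_special_tokens=True))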
Subtask 2: Creation of High-Fidelity Synthetic GI Images
This subtask focuses on generating synthetic GI images that are sufficiently detailed and variable to closely resemble real medical images. The objective is to push the boundaries of current image generation techniques to produce visuals that can be used effectively for training and testing diagnostic systems without relying on real patient data.
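As one possible, heavily hedged starting point, text-to-image diffusion models are a common approach to this kind of generation task. The sketch below assumes the diffusers library and a stock Stable Diffusion checkpoint; a generic model like this would need domain-specific fine-tuning on real GI imagery before its output could plausibly resemble diagnostic visuals, so treat the snippet as an outline of one pipeline rather than a method endorsed by the organizers.

    import torch
    from diffusers import StableDiffusionPipeline

    # Assumption: a stock text-to-image checkpoint; real submissions would
    # fine-tune on GI endoscopy data (e.g., with DreamBooth or LoRA).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical prompt describing the desired finding.
    prompt = "endoscopic image of the colon showing a small sessile polyp"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("synthetic_gi_sample.png")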
Data
The data for this year's challenge will combine and extend the datasets used in the previous two editions, including a large collection of GI images with VQA annotations.
Links to the datasets will be coming soon.
Evaluation methodology
The evaluation will differ per subtask and will combine subjective and objective assessments. We will host an online leaderboard that provides rapid evaluation feedback and fosters a competitive environment.
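The exact metrics have not yet been announced. As a hedged illustration only, the sketch below computes two objective measures commonly used for tasks of this kind: exact-match accuracy for VQA answers, and Fréchet Inception Distance (FID) for synthetic images (here via torchmetrics, with dummy tensors standing in for real data). Nothing here should be read as the official evaluation protocol.

    import torch
    from torchmetrics.image.fid import FrechetInceptionDistance

    # Exact-match accuracy for VQA (illustrative; official metrics TBA).
    def exact_match_accuracy(predictions, references):
        """Fraction of predicted answers that match the reference answer
        after lowercasing and whitespace stripping."""
        hits = sum(p.strip().lower() == r.strip().lower()
                   for p, r in zip(predictions, references))
        return hits / len(references)

    print(exact_match_accuracy(["polyp", "yes"], ["Polyp", "no"]))  # 0.5

    # FID for synthetic images (illustrative; official metrics TBA).
    # update() expects uint8 tensors of shape (N, 3, H, W) in [0, 255].
    fid = FrechetInceptionDistance(feature=2048)
    real = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
    fake = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
    fid.update(real, real=True)
    fid.update(fake, real=False)
    print(float(fid.compute()))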
Participant registration
Please refer to the general ImageCLEF registration instructions.
Please also email steven@simula.no to register your interest.
Preliminary Schedule
- dd.mm.2025: Registration opens
- dd.mm.2025: Release of the training and validation sets
- dd.mm.2025: Release of the test sets
- dd.mm.2025: Registration closes
- dd.mm.2025: Run submission deadline
- dd.mm.2025: Release of the processed results by the task organizers
- dd.mm.2025: Submission of participant papers [CEUR-WS]
- dd.mm.2025: Notification of acceptance
- dd.mm.2025: Camera ready copy of participant papers and extended lab overviews [CEUR-WS]
- 09-12.09.2025: CLEF 2025, Madrid, Spain
Submission Instructions
TBA
Results
TBA
CEUR Working Notes
For detailed instructions, please refer to this PDF file. A summary of the most important points:
- All participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper.
- Teams that participated in both subtasks should generally submit only one report.
Contact
Organizers:
- Steven A. Hicks <steven(at)simula.no>, SimulaMet, Norway
- Sushant Gautam <sushant(at)simula.no>, SimulaMet, Norway
- Michael A. Riegler <michael(at)simula.no>, SimulaMet, Norway
- Vajira Thambawita <vajira(at)simula.no>, SimulaMet, Norway
- Pål Halvorsen <paalh(at)simula.no>, SimulaMet, Norway