ImageCLEFtuberculosis

Motivation

About 130 years after the discovery of Mycobacterium tuberculosis, the disease remains a persistent threat and a leading cause of death worldwide. One of the most serious complications for a patient with tuberculosis (TB) is that the organisms become resistant to two or more of the standard drugs. In contrast to drug sensitive (DS) tuberculosis, the multi-drug resistant (MDR) form is much more difficult and expensive to recover from. Thus, early detection of the drug resistance (DR) status is of great importance for effective treatment. The most commonly used methods of DR detection are either expensive or take too much time (up to several months). Therefore, there is a need for methods of DR detection that are both fast and inexpensive. One of the possible approaches for this task is based on Computed Tomography (CT) image analysis.
Another challenging task is the automatic detection of the TB type from CT volumes.

News

  • 23.5.2017: Deadline for submission of working notes papers by the participants extended until 31 May.
  • 1.2.2017: training data for the two tasks are made available
  • 17.11.2016: first information on the task is available on the web pages

Participant registration

Registration for ImageCLEF 2017 is now open and will stay open until at least 21.04.2017. To register, please follow the steps below:

Once registered and the signature validated, data access details can be found in the ImageCLEF system -> Collections. Please note that, depending on the task, you may be required to sign additional data usage agreements before downloading the data. Should you have any questions about the registration process, please contact Mihai Dogariu <dogariu_mihai8(at)yahoo.com>.

Schedule

  • 15.11.2016: registration opens for all ImageCLEF tasks (until 22.04.2017)
  • 01.02.2017: development data release starts
  • 15.03.2017: test data release starts
  • 05.05.2017: deadline for submission of runs by the participants
  • 15.05.2017: release of processed results by the task organizers
  • 31.05.2017 (extended from 26.05.2017): deadline for submission of working notes papers by the participants
  • 17.06.2017: notification of acceptance of the working notes papers
  • 01.07.2017: camera ready working notes papers
  • 11.-14.09.2017: CLEF 2017, Dublin, Ireland

Subtasks Overview

The ImageCLEFtuberculosis task includes two independent subtasks.

Subtask #1: MDR Detection

The goal of this subtask is to assess the probability of a TB patient having a resistant form of tuberculosis based on the analysis of a chest CT scan.

Subtask #2: Detection of TB Type

The goal of this subtask is to automatically categorize each TB case into one of the following five types: Infiltrative, Focal, Tuberculoma, Miliary, Fibro-cavernous.

Data collection

Subtask #1: MDR Detection

For subtask #1, a dataset of 3D CT images is used along with a set of clinically relevant metadata. The dataset includes only HIV-negative patients with no relapses and having one of the two forms of tuberculosis: drug sensitive (DS) or multi-drug resistant (MDR).

# Patients Train Test
DS 134 101
MDR 96 113
Total patients 230 214

Subtask #2: Detection of TB Type

The dataset used in subtask #2 includes chest CT scans of TB patients along with the TB type.

# Patients Train Test
Type 1 (Infiltrative) 140 80
Type 2 (Focal) 120 70
Type 3 (Tuberculoma) 100 60
Type 4 (Miliary) 80 50
Type 5 (Fibro-cavernous) 60 40
Total patients 500 300

For both subtasks we provide 3D CT images with a slice size of 512×512 pixels and a number of slices varying from about 50 to 400. All CT images are stored in the NIFTI file format with the .nii.gz file extension (gzipped .nii files). This file format stores raw voxel intensities in Hounsfield units (HU) as well as the corresponding image metadata, such as image dimensions, voxel size in physical units, slice thickness, etc. The freely available tool "VV" can be used for viewing the image files. Various tools are available for reading and writing NIFTI files, among them the load_nii and save_nii functions for Matlab and the Niftilib library for C, Java, Matlab and Python.
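
As an illustration, the following Python sketch loads and inspects one of the provided volumes; it uses the nibabel package (one possible alternative to the tools listed above), and the file name is only an example.

  # Minimal sketch for loading a gzipped NIfTI CT volume.
  # Assumes the nibabel package is installed; the file name is an example only.
  import nibabel as nib
  import numpy as np

  img = nib.load("MDR_TRN_001.nii.gz")                  # header + voxel data
  volume = img.get_fdata()                              # 3D array of intensities in HU
  print("dimensions (x, y, z):", volume.shape)          # e.g. (512, 512, 50-400 slices)
  print("voxel size (mm):", img.header.get_zooms())     # physical voxel spacing
  print("intensity range (HU):", np.min(volume), "to", np.max(volume))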

Moreover, for all patients in both subtasks we provide automatically extracted masks of the lungs. This material can be downloaded together with the patients' CT images. The details of this segmentation are given in the corresponding publication (see below).
If participants use these masks in their experiments, please refer to the section "Citations" at the end of this page for the appropriate citation of this lung segmentation technique.
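
If the masks are used, one simple way to restrict the analysis to lung voxels is sketched below; it assumes that each mask is a NIfTI volume with the same dimensions as the corresponding CT scan and that non-zero voxels mark lung tissue (the file names are hypothetical examples).

  # Sketch: keep only the lung voxels of a CT scan using the provided mask.
  # Assumes CT and mask have identical dimensions and that non-zero mask
  # voxels belong to the lungs; file names are placeholders.
  import nibabel as nib

  ct = nib.load("MDR_TRN_001.nii.gz").get_fdata()
  mask = nib.load("MDR_TRN_001_mask.nii.gz").get_fdata()

  assert ct.shape == mask.shape, "CT and mask dimensions must match"
  lung_hu = ct[mask > 0]                  # 1D array of lung voxel intensities (HU)
  print("lung voxels:", lung_hu.size, "mean HU:", lung_hu.mean())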

Submission instructions

Disclaimer: this section is not final yet and may be subject to change.
Please note that each group is allowed a maximum of 10 runs per subtask.

Subtask #1: MDR Detection

Submit a plain text file named with the prefix MDR (e.g. MDRfree-text.txt) with the following format:

  • <Patient-ID>,<Probability of MDR>

e.g.:

  • MDR_TST_001,0.1
  • MDR_TST_002,1
  • MDR_TST_003,0.56
  • MDR_TST_004,0.02

Please use a score between 0 and 1 to indicate the probability of the patient having MDR.

You need to respect the following constraints:

  • Only use numbers between 0 and 1 for the score. Use the dot (.) as a decimal point (no commas accepted)
  • Patient-IDs must be part of the predefined Patient-IDs
  • All patient-IDs must be present in the runfiles
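
For illustration, a minimal Python sketch that writes a Subtask #1 run file respecting these constraints is given below; the patient IDs and probabilities are placeholders for a system's actual output.

  # Sketch: write a Subtask #1 run file (<Patient-ID>,<Probability of MDR>).
  # The predictions dictionary is a placeholder for real model output.
  predictions = {
      "MDR_TST_001": 0.10,
      "MDR_TST_002": 1.00,
      "MDR_TST_003": 0.56,
  }

  with open("MDRexample-run.txt", "w") as f:
      for patient_id, prob in sorted(predictions.items()):
          assert 0.0 <= prob <= 1.0, "scores must lie between 0 and 1"
          f.write("{},{}\n".format(patient_id, prob))   # dot as decimal point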


Subtask #2: Detection of TB Type

Submit a plain text file named with the prefix TBT (e.g. TBTfree-text.txt) with the following format:

  • <Patient-ID>,<TB-Type>

e.g.:

  • TBT_TST_501,1
  • TBT_TST_502,3
  • TBT_TST_503,5
  • TBT_TST_504,4
  • TBT_TST_505,2

Please use the following codes for the TB types:

  • 1 for Infiltrative
  • 2 for Focal
  • 3 for Tuberculoma
  • 4 for Miliary
  • 5 for Fibro-cavernous
You need to respect the following constraints:

  • Only use the defined codes for the various TB types
  • Only use one TB type per patient
  • Patient-IDs must be part of the predefined Patient-IDs
  • All patient-IDs must be present in the runfiles
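
A small sanity check of a Subtask #2 run file against these constraints might look like the following sketch; the file name and the set of expected patient IDs are placeholders.

  # Sketch: validate a Subtask #2 run file (<Patient-ID>,<TB-Type>).
  # expected_ids is a placeholder for the full set of test Patient-IDs.
  VALID_CODES = {1, 2, 3, 4, 5}    # 1=Infiltrative ... 5=Fibro-cavernous
  expected_ids = {"TBT_TST_501", "TBT_TST_502", "TBT_TST_503"}

  seen = {}
  with open("TBTexample-run.txt") as f:
      for line in f:
          if not line.strip():
              continue
          patient_id, code = line.strip().split(",")
          assert patient_id in expected_ids, "unknown Patient-ID: " + patient_id
          assert patient_id not in seen, "only one TB type per patient"
          assert int(code) in VALID_CODES, "invalid TB type code: " + code
          seen[patient_id] = int(code)

  assert set(seen) == expected_ids, "all Patient-IDs must be present"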

Evaluation methodology

Subtask #1: MDR Detection

The results will be evaluated using ROC curves produced from the probabilities provided by the participants.

Subtask #2: Detection of TB Type

The results will be evaluated using unweighted Cohen's Kappa (sample Matlab code is available from the organizers).
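
As an illustration of the two measures, the sketch below computes an AUC and an unweighted Cohen's kappa with scikit-learn; this is not the organizers' official evaluation code, and the labels and scores are toy values.

  # Sketch of the two evaluation measures using scikit-learn (toy data only).
  from sklearn.metrics import roc_auc_score, cohen_kappa_score

  # Subtask #1: ROC analysis of the submitted MDR probabilities.
  y_true = [0, 1, 1, 0, 1]                  # 1 = MDR, 0 = DS (ground truth)
  y_score = [0.10, 0.80, 0.25, 0.30, 0.95]  # submitted probabilities
  print("AUC:", roc_auc_score(y_true, y_score))

  # Subtask #2: unweighted Cohen's kappa on predicted TB type codes (1-5).
  true_types = [1, 3, 5, 4, 2]
  pred_types = [1, 3, 2, 4, 2]
  print("Kappa:", cohen_kappa_score(true_types, pred_types))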

Results

DISCLAIMER: The results presented below have not yet been analyzed in depth and are shown "as is". The results are sorted by descending AUC for Task 1 and by descending Kappa for Task 2.
    Task 1 - Multi-drug resistance detection
    Group Name Run Run Type AUC ACC Rank
    MedGIFT MDR_Top1_correct.csv Automatic 0.5825 0.5164 1
    MedGIFT MDR_submitted_topBest3_correct.csv Automatic 0.5727 0.4648 2
    MedGIFT MDR_submitted_topBest5_correct.csv Automatic 0.5624 0.4836 3
    SGEast MDR_LSTM_6_probs.txt Not applicable 0.5620 0.5493 4
    SGEast MDR_resnet_full.txt Not applicable 0.5591 0.5493 5
    SGEast MDR_BiLSTM_25_wcrop_probs.txt Not applicable 0.5501 0.5399 6
    UIIP MDR_supervoxels_run_1.txt Automatic 0.5415 0.4930 7
    SGEast MDR_LSTM_18_wcrop_probs.txt Not applicable 0.5404 0.5540 8
    SGEast MDR_LSTM_21wcrop_probs.txt Not applicable 0.5360 0.5070 9
    MedGIFT MDR_Top2_correct.csv Automatic 0.5337 0.4883 10
    HHU DBS MDR_basecnndo_212.csv Automatic 0.5297 0.5681 11
    SGEast MDR_LSTM_25_wcrop_probs.txt Not applicable 0.5297 0.5211 12
    BatmanLab MDR_submitted_top5.csv Automatic 0.5241 0.5164 13
    HHU DBS MDR_basecnndo_113.csv Automatic 0.5237 0.5540 14
    MEDGIFT UPB MDR_TST_RUN_1.txt Automatic 0.5184 0.5352 15
    BatmanLab MDR_submitted_top4_0.656522.csv Automatic 0.5130 0.5024 16
    MedGIFT MDR_Top3_correct.csv Automatic 0.5112 0.4413 17
    HHU DBS MDR_basecnndo_132.csv Automatic 0.5054 0.5305 18
    HHU DBS MDR_basecnndo_182.csv Automatic 0.5042 0.5211 19
    HHU DBS MDR_basecnndo_116.csv Automatic 0.5001 0.4930 20
    HHU DBS MDR_basecnndo_142.csv Automatic 0.4995 0.5211 21
    HHU DBS MDR_basecnndo_120.csv Automatic 0.4935 0.4977 22
    SGEast MDR_resnet_partial.txt Not applicable 0.4915 0.4930 23
    BatmanLab MDR-submitted_top1.csv Automatic 0.4899 0.4789 24
    BatmanLab MDR_SuperVx_Hist_FHOG_rf_0.648419.csv Automatic 0.4899 0.4789 25
    Aegean Tubercoliosis MDR_DETECTION_EXPORT2.csv Automatic 0.4833 0.4648 26
    BatmanLab MDR_SuperVx_FHOG_rf_0.637994.csv Automatic 0.4601 0.4554 27
    BioinformaticsUA MDR_run1.txt Not applicable 0.4596 0.4648 28
    Task 2 - Tuberculosis type classification
    Group Name Run Run Type Kappa ACC Rank
    SGEast TBT_resnet_full.txt Not applicable 0.2438 0.4033 1
    SGEast TBT_LSTM_17_wcrop.txt Not applicable 0.2374 0.3900 2
    MEDGIFT UPB TBT_T_GNet.txt Automatic 0.2329 0.3867 3
    SGEast TBT_LSTM_13_wcrop.txt Not applicable 0.2291 0.3833 4
    Image Processing TBT-testSet-label-Apr26-XGao-1.txt Automatic 0.2187 0.4067 5
    SGEast TBT_LSTM_46_wcrop.txt Not applicable 0.2174 0.3900 6
    UIIP TBT_iiggad_PCA_RF_run_1.txt Automatic 0.1956 0.3900 7
    MEDGIFT UPB TBT_TEST_RUN_2_GoogleNet_10crops_at_different_scales_.txt Automatic 0.1900 0.3733 8
    SGEast TBT_resnet_partial.txt Not applicable 0.1729 0.3567 9
    MedGIFT TBT_Top1_correct.csv Automatic 0.1623 0.3600 10
    SGEast TBT_LSTM_25_wcrop.txt Not applicable 0.1548 0.3400 11
    MedGIFT TBT_submitted_topBest3_correct.csv Automatic 0.1548 0.3500 12
    BatmanLab TBT_SuperVx_Hist_FHOG_lr_0.414000.csv Automatic 0.1533 0.3433 13
    SGEast TBT_LSTM_37_wcrop.txt Not applicable 0.1431 0.3333 14
    MedGIFT TBT_submitted_topBest5_correct.csv Automatic 0.1410 0.3367 15
    MedGIFT TBT_Top4_correct.csv Automatic 0.1352 0.3300 16
    MedGIFT TBT_Top2_correct.csv Automatic 0.1235 0.3200 17
    BatmanLab TBT_submitted_bootstrap.csv Automatic 0.1057 0.3033 18
    BatmanLab TBT_submitted_top3_0.490000.csv Automatic 0.1057 0.3033 19
    BatmanLab TBT_SuperVx_Hist_FHOG_Reisz_lr_0.426000.csv Automatic 0.0478 0.2567 20
    BatmanLab TBT_submitted_top2_0.430000.csv Automatic 0.0437 0.2533 21
    BioinformaticsUA TBT_run0.txt Not applicable 0.0222 0.2400 22
    BioinformaticsUA TBT_run1.txt Not applicable 0.0093 0.1233 23

Citations

    • When referring to the ImageCLEFtuberculosis 2017 task (general goals, general results, etc.), please cite the following publication, which will be published by September 2017:
      • Yashin Dicente Cid, Alexander Kalinovsky, Vitali Liauchuk, Vassili Kovalev, Henning Müller, Overview of ImageCLEFtuberculosis 2017 - Predicting Tuberculosis Type and Drug Resistances, CLEF working notes, CEUR, 2017.
      • BibTex:
        @Inproceedings{ImageCLEFoverview2017,
          author = {Dicente Cid, Yashin and Kalinovsky, Alexander and Liauchuk, Vitali and Kovalev, Vassili and M\"uller, Henning},
          title = {Overview of {ImageCLEFtuberculosis} 2017 - Predicting Tuberculosis Type and Drug Resistances},
          booktitle = {CLEF2017 Working Notes},
          series = {{CEUR} Workshop Proceedings},
          year = {2017},
          volume = {},
          publisher = {CEUR-WS.org $<$http://ceur-ws.org$>$},
          pages = {},
          month = {September 11-14},
          address = {Dublin, Ireland},

        }

    • When using the provided masks of the lungs, please cite the following publication:
      • Yashin Dicente Cid, Oscar A. Jiménez-del-Toro, Adrien Depeursinge, and Henning Müller, Efficient and fully automatic segmentation of the lungs in CT volumes. In: Goksel, O., et al. (eds.) Proceedings of the VISCERAL Challenge at ISBI. No. 1390 in CEUR Workshop Proceedings (Apr 2015)
      • BibTex:

        @inproceedings{DJD2015,

          Title = {Efficient and fully automatic segmentation of the lungs in CT volumes},
          Booktitle = {Proceedings of the {VISCERAL} Anatomy Grand Challenge at the 2015 {IEEE ISBI}},
          Author = {Dicente Cid, Yashin and Jim{\'{e}}nez del Toro, Oscar Alfonso and Depeursinge, Adrien and M{\"{u}}ller, Henning},
          Editor = {Goksel, Orcun and Jim{\'{e}}nez del Toro, Oscar Alfonso and Foncubierta-Rodr{\'{\i}}guez, Antonio and M{\"{u}}ller, Henning},
          Keywords = {CAD, lung segmentation, visceral-project},
          Month = may,
          Series = {CEUR Workshop Proceedings},
          Year = {2015},
          Pages = {31-35},
          Publisher = {CEUR-WS},
          Location = {New York, USA}

        }

Organizers

Acknowledgements