ImageCLEFtuberculosis

Welcome to the 2nd edition of the Tuberculosis Task!

Motivation

About 130 years after the discovery of Mycobacterium tuberculosis, the disease remains a persistent threat and a leading cause of death worldwide.

Description

One of the most serious problems for a patient with tuberculosis (TB) is that the organism becomes resistant to two or more of the standard anti-TB drugs. In contrast to drug-sensitive (DS) tuberculosis, the multi-drug resistant (MDR) form is much more difficult and expensive to treat. Thus, early detection of the drug resistance (DR) status is of great importance for effective treatment. The most commonly used methods of DR detection are either expensive or take too much time (up to several months), so there is a need for methods that are both quick and cheap. One possible approach is based on Computed Tomography (CT) image analysis. Another challenging task is the automatic detection of TB types (TBT) using CT volumes. In this subtask, five types of tuberculosis are considered: Infiltrative, Focal, Tuberculoma, Miliary and Fibro-cavernous. Lung lesions differ in appearance, size and pattern depending on the TB type.

Differences compared to 2017:

  • Both the training and test datasets for the MDR recognition task (subtask #1) are extended by adding several cases of extensively drug-resistant tuberculosis (XDR TB), a rare and more severe subtype of MDR TB.
  • For TB type detection (subtask #2), the datasets are extended with new CT scans of the patients already involved in 2017 and with CT images of a few new patients.
  • A new subtask (#3) is introduced, dedicated to scoring the severity of TB cases based on chest CT images.

News

  • 22.03.2018: Test set is released.

Participant registration

Please refer to the general ImageCLEF registration instructions.

Schedule

  • 08.11.2017: registration opens for all ImageCLEF tasks (until 27.04.2018)
  • 22.01.2018: development data release starts
  • 22.03.2018: test data release starts
  • 01.05.2018: deadline for submission of the participants' runs
  • 15.05.2018: release of the processed results by the task organizers
  • 31.05.2018: deadline for submission of working notes papers by the participants
  • 15.06.2018: notification of acceptance of the working notes papers
  • 29.06.2018: camera ready working notes papers
  • 10-14.09.2018: CLEF 2018, Avignon, France

Subtasks Overview

The ImageCLEFtuberculosis task 2018 includes three independent subtasks.

Subtask #1: MDR detection

The goal of this subtask is to assess the probability of a TB patient having a resistant form of tuberculosis based on the analysis of a chest CT scan.

Subtask #2: TBT classification

The goal of this subtask is to automatically categorize each TB case into one of the following five types: (1) Infiltrative, (2) Focal, (3) Tuberculoma, (4) Miliary, (5) Fibro-cavernous.

Subtask #3: Severity scoring

This subtask is aimed at assessing a TB severity score based on a chest CT image. The severity score is a cumulative score of the severity of a TB case assigned by a medical doctor. Originally, the score varied from 1 ("critical/very bad") to 5 ("very good"). In this subtask, the score is simplified so that values 1, 2 and 3 correspond to the "high severity" class, and values 4 and 5 correspond to "low severity". In the process of scoring, the medical doctors considered many factors, such as the pattern of lesions, results of microbiological tests, duration of treatment, patient's age and others. The goal of this subtask is to distinguish "low severity" from "high severity" based on the CT image only.
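The binarization described above can be sketched as follows (a hypothetical helper for illustration, not part of the organizers' tooling):

```python
def severity_class(score: int) -> str:
    """Map the doctor-assigned severity score (1-5) to the binary class.

    Scores 1-3 -> "high severity"; scores 4-5 -> "low severity".
    """
    if score not in (1, 2, 3, 4, 5):
        raise ValueError(f"severity score must be 1..5, got {score}")
    return "high severity" if score <= 3 else "low severity"
```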

Data collection

Subtask #1: MDR detection

For subtask #1, a dataset of 3D CT images is used along with a set of clinically relevant metadata. The dataset includes only HIV-negative patients with no relapses and having one of the two forms of tuberculosis: drug sensitive (DS) or multi-drug resistant (MDR). The MDR class includes patients with extensively drug-resistant (XDR) tuberculosis.

# Patients       Train   Test
DS               134     99
MDR              125     137
Total patients   259     236

Subtask #2: TBT classification

The dataset used in subtask #2 includes chest CT scans of TB patients along with the TB type. Some patients have more than one scan; all scans belonging to the same patient present the same TB type.

# Patients (#CTs)      Train        Test
Type 1                 228 (376)    89 (176)
Type 2                 210 (273)    80 (115)
Type 3                 100 (154)    60 (86)
Type 4                 79 (106)     50 (71)
Type 5                 60 (99)      38 (57)
Total patients (CTs)   677 (1008)   317 (505)

Subtask #3: Severity scoring

The dataset for subtask #3 includes chest CT scans of TB patients along with the corresponding severity score (1 to 5) and the severity level designated as "low" and "high".

# Patients       Train   Test
Low severity     90      62
High severity    80      47
Total patients   170     109

For all subtasks we provide 3D CT images with a slice size of 512×512 pixels and a number of slices varying from about 50 to 400. All CT images are stored in the NIfTI file format with the .nii.gz file extension (gzip-compressed .nii files). This format stores the raw voxel intensities in Hounsfield units (HU) as well as the corresponding image metadata, such as image dimensions, voxel size in physical units and slice thickness. The freely available tool "VV" can be used for viewing the image files. Various tools are available for reading and writing NIfTI files, among them the load_nii and save_nii functions for Matlab and the Niftilib library for C, Java, Matlab and Python.
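As an illustration of the format, the following standard-library-only Python sketch reads the image dimensions and voxel size directly from a NIfTI-1 header (the byte offsets follow the NIfTI-1 specification; in practice a library such as nibabel would be used, and this sketch assumes little-endian headers):

```python
import gzip
import struct

def read_nifti_dims(path):
    """Read (nx, ny, nz) and the voxel size in mm from a .nii.gz file.

    A NIfTI-1 header is 348 bytes: dim[] is 8 int16s at byte offset 40
    (dim[0] holds the number of dimensions), pixdim[] is 8 float32s at
    offset 76. Assumes a little-endian header.
    """
    with gzip.open(path, "rb") as f:
        header = f.read(348)
    dim = struct.unpack_from("<8h", header, 40)
    pixdim = struct.unpack_from("<8f", header, 76)
    return dim[1:4], pixdim[1:4]
```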

Moreover, for all patients in all subtasks we provide automatically extracted masks of the lungs. This material can be downloaded together with the patients' CT images. The details of this segmentation can be found here.
If participants use these masks in their experiments, please refer to the section "Citations" at the end of this page for the appropriate citation of this lung segmentation technique.

Remarks on the automatic lung segmentation:

The segmentations were manually analysed based on statistics on the number of lungs found and the size ratio of the two lungs. Only segmentations with anomalous statistics were inspected visually. The segmentation code was then improved based on the wrongly segmented cases. After all improvements, there remain 24 patients (20 from the TBT task and 4 from the MDR task) whose lungs could not be properly labelled due to the size and/or damage of one lung. In these cases, the mask contains only the label "1". Moreover, 8 patients were segmented by fusing the above-mentioned method with a registration-based segmentation.
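The quality-control statistics mentioned above can be sketched as follows. This is a hypothetical illustration, assuming the masks label the lungs with distinct positive values (e.g. 1 and 2) and use 0 for background:

```python
from collections import Counter

def lung_mask_stats(mask_voxels):
    """Compute QC statistics for a flattened lung mask.

    Returns (number of lung labels found, size ratio smaller/larger lung).
    Assumes background voxels are 0 and each lung has its own positive label;
    masks where both lungs share label 1 will report a single lung.
    """
    counts = Counter(v for v in mask_voxels if v != 0)
    if not counts:
        return 0, 0.0
    sizes = sorted(counts.values())
    ratio = sizes[0] / sizes[-1] if len(sizes) > 1 else 1.0
    return len(counts), ratio
```

Masks flagged by such statistics (e.g. only one lung found, or a very small size ratio) were the ones visualized and used to improve the segmentation code.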

Submission instructions

Please note that each group is allowed a maximum of 10 runs per subtask.

Subtask #1: MDR detection

Submit a plain text file named with the prefix MDR (e.g. MDRfree-text.txt) with the following format:

  • <Patient-ID>,<Probability of MDR>

e.g.:

  • MDR_TST_001,0.1
  • MDR_TST_002,1
  • MDR_TST_003,0.56
  • MDR_TST_004,0.02

Please use a score between 0 and 1 to indicate the probability of the patient having MDR.

You need to respect the following constraints:

  • Patient-IDs must be part of the predefined Patient-IDs
  • All patient-IDs must be present in the runfiles
  • Only use numbers between 0 and 1 for the score. Use the dot (.) as a decimal point (no commas accepted)
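A run file can be checked against these constraints before submission with a small script like the following (a hypothetical validator; the set of expected Patient-IDs would come from the released test set):

```python
def validate_mdr_run(lines, expected_ids):
    """Check an MDR run file: one "Patient-ID,probability" pair per line,
    every expected Patient-ID present exactly once, and each probability
    in [0, 1] written with a dot as the decimal point."""
    seen = {}
    for line in lines:
        pid, _, prob = line.strip().partition(",")
        if pid not in expected_ids:
            raise ValueError(f"unknown Patient-ID: {pid}")
        if pid in seen:
            raise ValueError(f"duplicate Patient-ID: {pid}")
        if "," in prob:
            raise ValueError("use a dot (.) as the decimal point")
        p = float(prob)
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"probability out of [0, 1]: {p}")
        seen[pid] = p
    missing = set(expected_ids) - set(seen)
    if missing:
        raise ValueError(f"missing Patient-IDs: {sorted(missing)}")
    return seen
```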


Subtask #2: TBT classification

Submit a plain text file named with the prefix TBT (e.g. TBTfree-text.txt) with the following format:

  • <Patient-ID>,<TB-Type>

e.g.:

  • TBT_TST_501,1
  • TBT_TST_502,3
  • TBT_TST_503,5
  • TBT_TST_504,4
  • TBT_TST_505,2

Please use the following codes for the TB types:

  • 1 for Infiltrative
  • 2 for Focal
  • 3 for Tuberculoma
  • 4 for Miliary
  • 5 for Fibro-cavernous
You need to respect the following constraints:

  • Patient-IDs are obtained by dropping the image index from the Image-IDs, e.g.:
    • Image-IDs {TBT_TST_001_01, TBT_TST_001_02, TBT_TST_001_03} --> Patient-ID: TBT_TST_001
    • Image-IDs {TBT_TST_002_01} --> Patient-ID: TBT_TST_002
  • Patient-IDs must be part of the predefined Patient-IDs
  • All patient-IDs must be present in the runfiles
  • Only use the defined codes for the various TB types
  • Only use one TB type per patient
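Deriving the Patient-ID from an Image-ID amounts to dropping the trailing scan index:

```python
def patient_id(image_id: str) -> str:
    """Drop the trailing scan index, e.g. TBT_TST_001_02 -> TBT_TST_001."""
    return image_id.rsplit("_", 1)[0]
```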


Subtask #3: Severity scoring

Submit a plain text file named with the prefix SVR (e.g. SVRfree-text.txt) with the following format:

  • <Patient-ID>,<Severity score>,<Probability of "HIGH" severity>

e.g.:

  • SVR_TST_001,1,0.93
  • SVR_TST_002,3,0.54
  • SVR_TST_003,5,0.1
  • SVR_TST_004,4,0.245
  • SVR_TST_005,2,0.7

Please use an integer value between 1 and 5 to indicate the severity score.
Please use a score between 0 and 1 to indicate the probability of the patient having "HIGH" severity (corresponding to severity scores 1 to 3).

You need to respect the following constraints:

  • Patient-IDs must be part of the predefined Patient-IDs
  • All patient-IDs must be present in the runfiles
  • Only use one integer value from 1 to 5 for the severity score
  • Only use numbers between 0 and 1 for the probability. Use the dot (.) as a decimal point (no commas accepted)

Evaluation methodology

Subtask #1: MDR detection

The results will be evaluated using ROC curves produced from the probabilities provided by the participants.
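The area under the ROC curve can be computed from the ranked probabilities alone; a minimal sketch (for orientation only, not the organizers' evaluation code):

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive case receives a higher score than a
    randomly chosen negative one, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```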

Subtask #2: TBT classification

The results will be evaluated using unweighted Cohen's kappa (sample Matlab code).
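Unweighted Cohen's kappa compares the observed agreement with the agreement expected by chance from the two label distributions; a Python sketch of the standard formula (the task itself links sample Matlab code):

```python
from collections import Counter

def cohens_kappa(truth, pred):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is
    the observed agreement and p_e the agreement expected by chance from
    the marginal label distributions of the two raters."""
    n = len(truth)
    p_o = sum(t == p for t, p in zip(truth, pred)) / n
    ct, cp = Counter(truth), Counter(pred)
    p_e = sum(ct[k] * cp.get(k, 0) for k in ct) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```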

Subtask #3: Severity scoring

The results will be evaluated by treating this subtask both as a binary classification problem and as a regression problem. The classification problem will be evaluated using ROC curves produced from the probabilities provided by the participants. For the regression problem, the root mean square error (RMSE) will be used.
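The regression metric reported in the results below (RMSE) is simply:

```python
from math import sqrt

def rmse(truth, pred):
    """Root mean square error between true and predicted severity scores."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth))
```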

Results

DISCLAIMER: The results presented below have not yet been analyzed in depth and are shown "as is". The results are sorted by descending AUC for the MDR subtask, by descending kappa for the TBT subtask, and by ascending RMSE for the SVR subtask.

Subtask #1: MDR detection

    Subtask 1 - Multi-drug resistance detection
    Group Name Run AUC Rank_AUC Accuracy Rank_Accuracy
    VISTA@UEvora MDR-Run-06-Mohan-SL-F3-Personal.txt 0.6178 1 0.5593 8
    San Diego VA HCS/UCSD MDSTest1a.csv 0.6114 2 0.6144 1
    VISTA@UEvora MDR-Run-08-Mohan-voteLdaSmoF7-Personal.txt 0.6065 3 0.5424 17
    VISTA@UEvora MDR-Run-09-Sk-SL-F10-Personal.txt 0.5921 4 0.5763 3
    VISTA@UEvora MDR-Run-10-Mix-voteLdaSl-F7-Personal.txt 0.5824 5 0.5593 9
    HHU-DBS MDR_FlattenCNN_DTree.txt 0.5810 6 0.5720 4
    HHU-DBS MDR_FlattenCNN2_DTree.txt 0.5810 7 0.5720 5
    HHU-DBS MDR_Conv68adam_fl.txt 0.5768 8 0.5593 10
    VISTA@UEvora MDR-Run-07-Sk-LDA-F7-Personal.txt 0.5730 9 0.5424 18
    UniversityAlicante MDRBaseline0.csv 0.5669 10 0.4873 32
    HHU-DBS MDR_Conv48sgd.txt 0.5640 11 0.5466 16
    HHU-DBS MDR_Flatten.txt 0.5637 12 0.5678 7
    HHU-DBS MDR_Flatten3.txt 0.5575 13 0.5593 11
    UIIP_BioMed MDR_run_TBdescs2_zparts3_thrprob50_rf150.csv 0.5558 14 0.4576 36
    UniversityAlicante testSVM_SMOTE.csv 0.5509 15 0.5339 20
    UniversityAlicante testOpticalFlowwFrequencyNormalized.csv 0.5473 16 0.5127 24
    HHU-DBS MDR_Conv48sgd_fl.txt 0.5424 17 0.5508 15
    HHU-DBS MDR_CustomCNN_DTree.txt 0.5346 18 0.5085 26
    HHU-DBS MDR_FlattenX.txt 0.5322 19 0.5127 25
    HHU-DBS MDR_MultiInputCNN.txt 0.5274 20 0.5551 13
    VISTA@UEvora MDR-Run-01-sk-LDA.txt 0.5260 21 0.5042 28
    MedGIFT MDR_Riesz_std_correlation_TST.csv 0.5237 22 0.5593 12
    MedGIFT MDR_HOG_std_euclidean_TST.csv 0.5205 23 0.5932 2
    VISTA@UEvora MDR-Run-05-Mohan-RF-F3I650.txt 0.5116 24 0.4958 30
    MedGIFT MDR_AllFeats_std_correlation_TST.csv 0.5095 25 0.4873 33
    UniversityAlicante DecisionTree25v2.csv 0.5049 26 0.5000 29
    MedGIFT MDR_AllFeats_std_euclidean_TST.csv 0.5039 27 0.5424 19
    LIST MDRLIST.txt 0.5029 28 0.4576 37
    UniversityAlicante testOFFullVersion2.csv 0.4971 29 0.4958 31
    MedGIFT MDR_HOG_mean_correlation_TST.csv 0.4941 30 0.5551 14
    MedGIFT MDR_Riesz_AllCols_correlation_TST.csv 0.4855 31 0.5212 22
    UniversityAlicante testOpticalFlowFull.csv 0.4845 32 0.5169 23
    MedGIFT MDR_Riesz_mean_euclidean_TST.csv 0.4824 33 0.5297 21
    UniversityAlicante testFrequency.csv 0.4781 34 0.4788 34
    UniversityAlicante testflowI.csv 0.4740 35 0.4492 39
    MedGIFT MDR_HOG_AllCols_euclidean_TST.csv 0.4693 36 0.5720 6
    VISTA@UEvora MDR-Run-06-Sk-SL.txt 0.4661 37 0.4619 35
    MedGIFT MDR_AllFeats_AllCols_correlation_TST.csv 0.4568 38 0.5085 27
    VISTA@UEvora MDR-Run-04-Mix-Vote-L-RT-RF.txt 0.4494 39 0.4576 38

Subtask #2: TBT classification

    Subtask 2 - Tuberculosis type classification
    Group Name Run Kappa Rank_Kappa Accuracy Rank_Acc
    UIIP_BioMed TBT_run_TBdescs2_zparts3_thrprob50_rf150.csv 0.2312 1 0.4227 1
    fau_ml4cv TBT_m4_weighted.txt 0.1736 2 0.3533 10
    MedGIFT TBT_AllFeats_std_euclidean_TST.csv 0.1706 3 0.3849 2
    MedGIFT TBT_Riesz_AllCols_euclidean_TST.csv 0.1674 4 0.3849 3
    VISTA@UEvora TBT-Run-02-Mohan-RF-F20I1500S20-317.txt 0.1664 5 0.3785 4
    fau_ml4cv TBT_m3_weighted.txt 0.1655 6 0.3438 12
    VISTA@UEvora TBT-Run-05-Mohan-RF-F20I2000S20.txt 0.1621 7 0.3754 5
    MedGIFT TBT_AllFeats_AllCols_correlation_TST.csv 0.1531 8 0.3691 7
    MedGIFT TBT_AllFeats_mean_euclidean_TST.csv 0.1517 9 0.3628 8
    MedGIFT TBT_Riesz_std_euclidean_TST.csv 0.1494 10 0.3722 6
    San Diego VA HCS/UCSD Task2Submission64a.csv 0.1474 11 0.3375 13
    San Diego VA HCS/UCSD TBTTask_2_128.csv 0.1454 12 0.3312 15
    MedGIFT TBT_AllFeats_AllCols_correlation_TST.csv 0.1356 13 0.3628 9
    VISTA@UEvora TBT-Run-03-Mohan-RF-7FF20I1500S20-Age.txt 0.1335 14 0.3502 11
    San Diego VA HCS/UCSD TBTLast.csv 0.1251 15 0.3155 20
    fau_ml4cv TBT_w_combined.txt 0.1112 16 0.3028 22
    VISTA@UEvora TBT-Run-06-Mix-RF-5FF20I2000S20.txt 0.1005 17 0.3312 16
    VISTA@UEvora TBT-Run-04-Mohan-VoteRFLMT-7F.txt 0.0998 18 0.3186 19
    MedGIFT TBT_HOG_AllCols_euclidean_TST.csv 0.0949 19 0.3344 14
    fau_ml4cv TBT_combined.txt 0.0898 20 0.2997 23
    MedGIFT TBT_HOG_std_correlation_TST.csv 0.0855 21 0.3218 18
    fau_ml4cv TBT_m2p01_small.txt 0.0839 22 0.2965 25
    MedGIFT TBT_AllFeats_std_correlation_TST.csv 0.0787 23 0.3281 17
    fau_ml4cv TBT_m2.txt 0.0749 24 0.2997 24
    MostaganemFSEI TBT_mostaganemFSEI_run4.txt 0.0629 25 0.2744 27
    MedGIFT TBT_HOG_std_correlation_TST.csv 0.0589 26 0.3060 21
    fau_ml4cv TBT_modelsimple_lmbdap1_norm.txt 0.0504 27 0.2839 26
    MostaganemFSEI TBT_mostaganemFSEI_run1.txt 0.0412 28 0.2650 29
    MostaganemFSEI TBT_MostaganemFSEI_run2.txt 0.0275 29 0.2555 32
    MostaganemFSEI TBT_MostaganemFSEI_run6.txt 0.0210 30 0.2429 33
    UniversityAlicante 3nnconProbabilidad2.txt 0.0204 31 0.2587 30
    UniversityAlicante T23nnFinal.txt 0.0204 32 0.2587 31
    fau_ml4cv TBT_m1.txt 0.0202 33 0.2713 28
    LIST TBTLIST.txt -0.0024 34 0.2366 34
    MostaganemFSEI TBT_mostaganemFSEI_run3.txt -0.0260 35 0.1514 37
    VISTA@UEvora TBT-Run-01-sk-LDA-Update-317-New.txt -0.0398 36 0.2240 35
    VISTA@UEvora TBT-Run-01-sk-LDA-Update-317.txt -0.0634 37 0.1956 36
    UniversityAlicante T2SVMFinal.txt -0.0920 38 0.1167 38
    UniversityAlicante SVMirene.txt -0.0923 39 0.1136 39

Subtask #3: Severity scoring

    Subtask 3 - Severity scoring
    Group Name Run RMSE Rank_RMSE AUC Rank_AUC
    UIIP_BioMed SVR_run_TBdescs2_zparts3_thrprob50_rf100.csv 0.7840 1 0.7025 6
    MedGIFT SVR_HOG_std_euclidean_TST.csv 0.8513 2 0.7162 5
    VISTA@UEvora SVR-Run-07-Mohan-MLP-6FTT100.txt 0.8883 3 0.6239 21
    MedGIFT SVR_AllFeats_AllCols_euclidean_TST.csv 0.8883 4 0.6733 10
    MedGIFT SVR_AllFeats_AllCols_correlation_TST.csv 0.8934 5 0.7708 1
    MedGIFT SVR_HOG_mean_euclidean_TST.csv 0.8985 6 0.7443 3
    MedGIFT SVR_HOG_mean_correlation_TST.csv 0.9237 7 0.6450 18
    MedGIFT SVR_HOG_AllCols_euclidean_TST.csv 0.9433 8 0.7268 4
    MedGIFT SVR_HOG_AllCols_correlation_TST.csv 0.9433 9 0.7608 2
    HHU-DBS SVR_RanFrst.txt 0.9626 10 0.6484 16
    MedGIFT SVR_Riesz_AllCols_correlation_TST.csv 0.9626 11 0.5535 34
    MostaganemFSEI SVR_mostaganemFSEI_run3.txt 0.9721 12 0.5987 25
    HHU-DBS SVR_RanFRST_depth_2_new_new.txt 0.9768 13 0.6620 13
    HHU-DBS SVR_LinReg_part.txt 0.9768 14 0.6507 15
    MedGIFT SVR_AllFeats_mean_euclidean_TST.csv 0.9954 15 0.6644 12
    MostaganemFSEI SVR_mostaganemFSEI_run6.txt 1.0046 16 0.6119 23
    VISTA@UEvora SVR-Run-03-Mohan-MLP.txt 1.0091 17 0.6371 19
    MostaganemFSEI SVR_mostaganemFSEI_run4.txt 1.0137 18 0.6107 24
    MostaganemFSEI SVR_mostaganemFSEI_run1.txt 1.0227 19 0.5971 26
    MedGIFT SVR_Riesz_std_correlation_TST.csv 1.0492 20 0.5841 29
    VISTA@UEvora SVR-Run-06-Mohan-VoteMLPSL-5F.txt 1.0536 21 0.6356 20
    VISTA@UEvora SVR-Run-02-Mohan-RF.txt 1.0580 22 0.5813 31
    MostaganemFSEI SVR_mostaganemFSEI_run2.txt 1.0837 23 0.6127 22
    Middlesex University SVR-Gao-May4.txt 1.0921 24 0.6534 14
    HHU-DBS SVR_RanFRST_depth_2_Ludmila_new_new.txt 1.1046 25 0.6862 8
    VISTA@UEvora SVR-Run-05-Mohan-RF-3FI300S20.txt 1.1046 26 0.5812 32
    VISTA@UEvora SVR-Run-04-Mohan-RF-F5-I300-S200.txt 1.1088 27 0.5793 33
    VISTA@UEvora SVR-Run-01-sk-LDA.txt 1.1770 28 0.5918 27
    HHU-DBS SVR_RanFRST_depth_2_new.txt 1.2040 29 0.6484 17
    San Diego VA HCS/UCSD SVR9.csv 1.2153 30 0.6658 11
    San Diego VA HCS/UCSD SVRSubmission.txt 1.2153 31 0.6984 7
    HHU-DBS SVR_DTree_Features_Best_Bin.txt 1.3203 32 0.5402 36
    HHU-DBS SVR_DTree_Features_Best.txt 1.3203 33 0.5848 28
    HHU-DBS SVR_DTree_Features_Best_All.txt 1.3714 34 0.6750 9
    MostaganemFSEI SVR_mostaganemFSEI.txt 1.4207 35 0.5836 30
    Middlesex University SVR-Gao-April27.txt 1.5145 36 0.5412 35

Citations

    • When referring to the ImageCLEFtuberculosis 2018 task general goals, general results, etc. please cite the following publication which will be published by September 2018:
      • Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Henning Müller, Overview of ImageCLEFtuberculosis 2018 - Detecting multi-drug resistance, classifying tuberculosis type, and assessing severity score, CLEF working notes, CEUR, 2018.
      • BibTex:
        @Inproceedings{ImageCLEFTBoverview2018,
          author = {Dicente Cid, Yashin and Liauchuk, Vitali and Kovalev, Vassili and M\"uller, Henning},
          title = {Overview of {ImageCLEFtuberculosis} 2018 - Detecting multi-drug resistance, classifying tuberculosis type, and assessing severity score},
          booktitle = {CLEF2018 Working Notes},
          series = {{CEUR} Workshop Proceedings},
          year = {2018},
          volume = {},
          publisher = {CEUR-WS.org $<$http://ceur-ws.org$>$},
          pages = {},
          month = {September 10-14},
          address = {Avignon, France}
          }
      • When referring to the ImageCLEF 2018 task general goals, general results, etc. please cite the following publication which will be published by September 2018:
        • Bogdan Ionescu, Henning Müller, Mauricio Villegas, Alba García Seco de Herrera, Carsten Eickhoff, Vincent Andrearczyk, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Sadid A. Hasan, Yuan Ling, Oladimeji Farri, Joey Liu, Matthew Lungren, Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, Cathal Gurrin, Overview of ImageCLEF 2018: Challenges, Datasets and Evaluation. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Ninth International Conference of the CLEF Association (CLEF 2018), Avignon, France, LNCS Lecture Notes in Computer Science, Springer (September 10-14 2018)
        • BibTex:
          @inproceedings{ImageCLEF18,
            author = {Bogdan Ionescu and Henning M\"uller and Mauricio Villegas and Alba Garc\'ia Seco de Herrera and Carsten Eickhoff and Vincent Andrearczyk and Yashin Dicente Cid and Vitali Liauchuk and Vassili Kovalev and Sadid A. Hasan and Yuan Ling and Oladimeji Farri and Joey Liu and Matthew Lungren and Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Liting Zhou and Mathias Lux and Cathal Gurrin},
            title = {{Overview of ImageCLEF 2018}: Challenges, Datasets and Evaluation},
            booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
            series = {Proceedings of the Ninth International Conference of the CLEF Association (CLEF 2018)},
            year = {2018},
            volume = {},
            publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
            pages = {},
            month = {September 10-14},
            address = {Avignon, France}
            }
      • When using the provided masks of the lungs, please cite the following publication:
        • Yashin Dicente Cid, Oscar A. Jiménez-del-Toro, Adrien Depeursinge, and Henning Müller, Efficient and fully automatic segmentation of the lungs in CT volumes. In: Goksel, O., et al. (eds.) Proceedings of the VISCERAL Challenge at ISBI. No. 1390 in CEUR Workshop Proceedings (Apr 2015)
        • BibTex:

          @inproceedings{DJD2015,

            Title = {Efficient and fully automatic segmentation of the lungs in CT volumes},
            Booktitle = {Proceedings of the {VISCERAL} Anatomy Grand Challenge at the 2015 {IEEE ISBI}},
            Author = {Dicente Cid, Yashin and Jim{\'{e}}nez del Toro, Oscar Alfonso and Depeursinge, Adrien and M{\"{u}}ller, Henning},
            Editor = {Goksel, Orcun and Jim{\'{e}}nez del Toro, Oscar Alfonso and Foncubierta-Rodr{\'{\i}}guez, Antonio and M{\"{u}}ller, Henning},
            Keywords = {CAD, lung segmentation, visceral-project},
            Month = may,
            Series = {CEUR Workshop Proceedings},
            Year = {2015},
            Pages = {31-35},
            Publisher = {CEUR-WS},
            Location = {New York, USA}
            }
Organizers

        • Vassili Kovalev <vassili.kovalev(at)gmail.com>, Institute for Informatics, Minsk, Belarus
        • Henning Müller <henning.mueller(at)hevs.ch>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
        • Vitali Liauchuk <vitali.liauchuk(at)gmail.com>, Institute for Informatics, Minsk, Belarus
        • Yashin Dicente Cid <yashin.dicente(at)hevs.ch>, University of Applied Sciences Western Switzerland, Sierre, Switzerland

Acknowledgements