Welcome to the website of the Liver CT Annotation challenge!
News
- 01.11.2013: Web page for the challenge is open
- 16.05.2014: Processed submission results are released
Schedule
- 01.11.2013: Registration opens (register here)
- 01.12.2013: Development data is released (instructions to access the data)
- 01.04.2014: Test data is released
- 01.04.2014: Submission system opens (instructions for submission)
- 08.05.2014: Submission system closes
- 15.05.2014: Processed submission results are released (See the scores)
- 07.06.2014: Deadline for submission of working notes papers
- 30.06.2014: Deadline for submission of camera ready working notes papers
- 15-18.09.2014: CLEF 2014, Sheffield, UK
Results
Liver CT Annotation Challenge

Groups
# | Group | Completeness | Accuracy | Total Score
1 | BMET | 0.98 | 0.91 | 0.94 |
2 | CASMIP | 0.95 | 0.91 | 0.93 |
3 | piLabVAVlab | 0.51 | 0.39 | 0.45 |

Runs
# | Group | Total Score | Run name
1 | BMET | 0.935 | 1399424282862__run1 |
2 | BMET | 0.939 | 1399424363852__run2 |
3 | BMET | 0.933 | 1399424579007__run3 |
4 | BMET | 0.939 | 1399424789901__run4 |
5 | BMET | 0.947 | 1399425574776__run5 |
6 | BMET | 0.927 | 1399425819829__run6 |
7 | BMET | 0.947 | 1399425962557__run7 |
8 | BMET | 0.926 | 1399426061247__run8 |
9 | CASMIP | 0.935 | 1399726162087__IMAGE_CLEF_SUBMIT |
10 | piLabVAVlab | 0.450 | 1399746005240__Test_UsE_01 |

Ratio of correct answers per question group
Question group | BMET_1 | BMET_2 | BMET_3 | BMET_4 | BMET_5 | BMET_6 | BMET_7 | BMET_8 | CASMIP | piLabVAVlab
Liver | 0.9125 | 0.9250 | 0.9250 | 0.9250 | 0.9250 | 0.8000 | 0.9250 | 0.9250 | 0.9250 | 0.4875 |
Vessel | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.4211 |
LesionArea | 0.7231 | 0.7077 | 0.7000 | 0.7077 | 0.7308 | 0.6615 | 0.7308 | 0.6308 | 0.6846 | 0.0923 |
LesionLesion | 0.7200 | 0.7700 | 0.7200 | 0.7700 | 0.8300 | 0.7900 | 0.8300 | 0.7300 | 0.7400 | 0.0000
LesionComponent | 0.9250 | 0.9375 | 0.9250 | 0.9375 | 0.9375 | 0.9313 | 0.9375 | 0.9188 | 0.9375 | 0.0188 |
Task overview
Medical, and specifically radiological, databases face an exponential increase in data volumes, much like other domains such as multimedia data on the internet, yet they have peculiar characteristics that other domains do not share. Although radiological images come with a rich source of metadata, the subtle differences between images are usually what is critical. Structured reporting in radiology is a viable approach to representing these subtle differences on common grounds by exploiting domain knowledge. It would not only boost search and retrieval performance in structured radiological databases, for purposes such as comparative diagnosis and medical education, but also improve the clinical workflow by facilitating standardized reports.
Designed as a pilot study towards automated structured reporting, the task is the computer-aided automatic annotation of liver CT volumes by filling in a pre-prepared form. More specifically, participants are asked to use image features to answer a set of multiple-choice questions that were automatically generated (and are fixed throughout the task) from an open-source ontology of the liver for radiology (*ONLIRA). The questions concern the liver itself, the hepatic vasculature and a selected lesion in the liver. The CT data, the liver mask and a volume of interest enclosing the selected lesion are provided, together with a rich set of image features and ONLIRA-based manual annotations (i.e., a completed form) for training. Participants may use the provided image features and/or extract their own image features from the provided data. There will be 50 training datasets.
*Download ONLIRA at: http://www.vavlab.ee.boun.edu.tr/pages.php?p=research/CARERA/carera.html
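Purely as an illustration of the task setup (not a method endorsed by the organizers), a naive baseline could train one classifier per question on the provided image features. In the sketch below, the arrays X (50 cases x 60 feature values) and Y (50 cases x 75 annotation-value indices), and all function names, are hypothetical.

```python
# Hypothetical baseline sketch: one multi-class classifier per ONLIRA question.
# X and Y are illustrative arrays, not part of the released data files.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_per_question_models(X, Y):
    """Train one classifier per annotation question (one column of Y each)."""
    models = []
    for q in range(Y.shape[1]):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, Y[:, q])
        models.append(clf)
    return models

def annotate(models, x_test):
    """Answer every question for one test case given its 60-element feature vector."""
    x_test = np.asarray(x_test).reshape(1, -1)
    return [int(m.predict(x_test)[0]) for m in models]
```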
Registering for the task and accessing the data
To participate in this task, please register by following the instructions on the main ImageCLEF 2014 webpage.
Following approval of registration for the task, participants will be given access rights to download the data files together with README_liverCT.pdf, which explains the contents of the Matlab data files. Each training dataset is a single Matlab data file named TrainingCase_X.mat.
Briefly, each Matlab data file contains the following (a short loading sketch is given after this list):
- CT: A 3D matrix of real-valued, cropped CT data. The volume includes the liver only.
- LiverMask: A 3D binary-valued matrix of the same size as CT. It marks the liver voxels.
- Lesion_VOI: The coordinates of the two diagonal corners of the lesion's bounding box (6 values), given as a comma-separated string.
- CoG: An N x 4 cell array of image features, with N = 60 features. The first column is the feature's group (string), the second is the feature's name (string), the third is the feature's type (string) and the fourth is the feature's value (of the stated type).
- E.g. Liver : LiverVolume : double : 12987.6
- E.g. Liver : HaarWaveletCoeff : VectorOfDouble : '8.4,3.9,2.1,2.9,8,7,1,2'
- UsE: An M x 6 cell array of semantic (ONLIRA-based) annotations, with M = 75 annotations. The first column is the annotation's group (string), the second is the annotation's concept (string), the third is the annotation's property (string), the fourth is the index of the annotation value (integer, starting from 0), the fifth is the corresponding annotation text (string) and the sixth is free text (string) to be used with the "other" option when needed.
- E.g. Liver : Right Lobe : Right Lobe Size Change : normal : 2 : ' '
- E.g. Lesion : Lesion : Contrast Uptake : other : 5 : 'need to see other phases'
- Note: The “other” option, when selected, will be regarded as “undecided”.
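For orientation, a minimal sketch of inspecting one training file from Python is given below, assuming the released .mat files can be read with scipy.io.loadmat; the field names are those listed above, but the exact layout of the loaded cell arrays may differ slightly from this sketch.

```python
# Minimal inspection sketch (assumes the .mat files are readable by scipy.io).
from scipy.io import loadmat

data = loadmat("TrainingCase_1.mat")

ct = data["CT"]                 # 3D cropped CT volume (liver region only)
liver_mask = data["LiverMask"]  # 3D binary mask of the same size as CT
print("CT shape:", ct.shape, "liver voxels:", int(liver_mask.sum()))

# Lesion_VOI: comma-separated string holding the two diagonal corners of the
# lesion bounding box (6 values), as described above.
voi_string = str(data["Lesion_VOI"].flatten()[0])
corners = [float(v) for v in voi_string.split(",")]
print("Lesion bounding-box corners:", corners)

# CoG: N x 4 cell array -> group, name, type, value for each of the 60 features.
for group, name, ftype, value in data["CoG"][:3]:
    print(group, name, ftype, value)

# UsE: M x 6 cell array of ONLIRA-based annotations (the completed form).
print("Number of annotations:", data["UsE"].shape[0])
```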
The data was collected as part of the CaReRa project using a web-based uploading and manual annotation service. Participants may log in to the CaReRa system in demo mode to browse the whole system and examine the forms themselves. Note that the forms in CaReRa contain a richer set of metadata aimed at a "complete case" representation; this task covers only the "Imaging Observations" section of the CaReRa forms. The project web page is accessible at:
www.vavlab.ee.boun.edu.tr --> Research --> CaReRa
Submission instructions
Submissions will be received through the ImageCLEF 2014 system: go to "Runs", then "Submit run", and select the track "ImageCLEF Liver CT Annotation".
Participants are asked to submit a single Matlab data file. This file must include 10 matrices named "TestCase_X_UsE", where X is the index of the test case; each matrix contains ONLY the "UsE" annotations, in the same format as the training UsE data.
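For participants working outside Matlab, a minimal sketch of assembling such a file is given below, assuming that a .mat file written by scipy's savemat is acceptable; the function name, file name and the single placeholder row are hypothetical, and the column order follows the UsE description above.

```python
# Hypothetical submission sketch: one variable per test case, UsE layout only.
import numpy as np
from scipy.io import savemat

def make_submission(use_tables, path="liverct_run.mat"):
    """use_tables maps each test-case index X to an M x 6 list of UsE rows."""
    payload = {}
    for x, rows in use_tables.items():
        # Object arrays let strings and integers coexist, mirroring a cell array.
        payload["TestCase_%d_UsE" % x] = np.array(rows, dtype=object)
    savemat(path, payload)

# Placeholder example for a single test case with one annotation row
# (group, concept, property, value index, annotation text, free text).
make_submission({1: [["Liver", "Right Lobe", "Right Lobe Size Change", 2, "normal", ""]]})
```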
Evaluation methodology
The evaluation will be based on the completeness and accuracy of the computer annotations with respect to the manual annotations of the test dataset. The percentage of questions that were automatically answered (completeness) and the percentage of correct annotations (accuracy) will be used for evaluation. When a question has multiple (manual/reference) answers, it is sufficient for the participant's system to select any one of them for the annotation to count as correct. Annotations whose reference annotation is "other" will be omitted from the evaluation, as they refer to cases for which a reliable annotation was not possible.
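For illustration only, a minimal scoring sketch consistent with this description might look as follows. The denominator used for accuracy and the computation of the total score as the average of completeness and accuracy are inferences (the average does match the published group totals above), not the official evaluation code, and all names in the sketch are hypothetical.

```python
# Hedged scoring sketch following the description above.

def evaluate(reference, predicted):
    """reference: list of sets of acceptable answer indices, or the string
    "other" when the reference annotation is "other" (omitted from scoring).
    predicted: list of answer indices, or None when a question was left
    unanswered by the system."""
    total = answered = correct = 0
    for ref, pred in zip(reference, predicted):
        if ref == "other":      # reference "other" -> omitted from evaluation
            continue
        total += 1
        if pred is None:        # question not automatically answered
            continue
        answered += 1
        if pred in ref:         # any one of the reference answers suffices
            correct += 1
    completeness = answered / total if total else 0.0
    # Assumption: accuracy is measured over the answered questions.
    accuracy = correct / answered if answered else 0.0
    # The published group totals are consistent with averaging the two scores.
    total_score = (completeness + accuracy) / 2
    return completeness, accuracy, total_score
```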
Organizers
- Burak Acar, PhD, Bogazici University, EE Dept., Istanbul, Turkey, acarbu@boun.edu.tr
- Suzan Uskudarli, PhD, Bogazici University, CmpE Dept., Istanbul, Turkey, suzan.uskudarli@boun.edu.tr
- Neda Marvasty, Bogazici University, EE Dept., Istanbul, Turkey, neda.barzegarmarvasti@boun.edu.tr (Contact Person)
Acknowledgments
Rustu Turkay, MD: Data collection and manual annotations
Baris Bakir, MD: Data collection
Nadin Kokciyan, MS: Ontology development
Pinar Yolum, PhD: Ontology development
Abdulkadir Yazici, MS: Web Development