Experiments are performed as follows: participants are given topics, from which they create queries used to retrieve images from the collection. This process iterates (e.g., possibly involving relevance feedback) until the participants are satisfied with their runs. Participants might try different methods to increase the number of relevant images in the top N rank positions (e.g., query expansion).
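To make the procedure concrete, here is a minimal sketch of that loop in Python; `search()` and `expand_query()` are hypothetical stand-ins for a group's own retrieval and expansion components, not campaign-provided software.

```python
# Sketch of the iterative ad-hoc retrieval loop described above.
# search() and expand_query() are illustrative stand-ins only.

def search(query: str, top_n: int) -> list[str]:
    # Stand-in: a real system would rank images from the collection here.
    return [f"image_{i}" for i in range(top_n)]

def expand_query(query: str, top_docs: list[str]) -> str:
    # Stand-in: e.g. add terms drawn from the captions of top-ranked images.
    return query + " " + " ".join(top_docs[:2])

def run_topic(topic: str, iterations: int = 2, n: int = 1000) -> list[str]:
    """Create a query from the topic, retrieve, and iteratively refine."""
    query = topic                        # initial query built from the topic
    ranking = search(query, top_n=n)
    for _ in range(iterations):          # e.g. (pseudo-)relevance feedback
        query = expand_query(query, ranking[:10])
        ranking = search(query, top_n=n)
    return ranking                       # final top-N ranking for the run
```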
Participants are free to experiment with whatever image retrieval methods they wish, e.g., query expansion based on thesaurus lookup or relevance feedback, indexing and retrieval on only part of the image caption, different retrieval models, and combining text-based and content-based retrieval. Given the many possible approaches to ad-hoc retrieval, rather than list them all we ask participants to indicate which of the following applies to each of their runs (we consider these the "main" dimensions which define the query for this ad-hoc task):
| Dimension | Available Codes |
| --- | --- |
| Query language | EN |
| Annotation language | EN |
| Query/run type | MAN, AUTO |
| Feedback/expansion | FB, QE, FBQE, NOFB |
| Modality | IMG, TXT, CON, IMGCON, TXTCON, TXTIMG, TXTIMGCON |
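Each dimension is explained below. As a concrete illustration, the following sketch shows one way a run's declared codes could be checked against the table; the `RUN_DIMENSIONS` mapping mirrors the codes above, while the function and field names are illustrative assumptions.

```python
# Permitted codes per dimension, mirroring the table above.
RUN_DIMENSIONS = {
    "query_language": {"EN"},
    "annotation_language": {"EN"},
    "run_type": {"MAN", "AUTO"},
    "feedback_expansion": {"FB", "QE", "FBQE", "NOFB"},
    "modality": {"IMG", "TXT", "CON", "IMGCON", "TXTCON", "TXTIMG", "TXTIMGCON"},
}

def validate_run(run: dict) -> None:
    """Raise ValueError if any declared dimension code is not permitted."""
    for dimension, allowed in RUN_DIMENSIONS.items():
        code = run.get(dimension)
        if code not in allowed:
            raise ValueError(f"{dimension}={code!r} is not one of {sorted(allowed)}")

validate_run({
    "query_language": "EN",
    "annotation_language": "EN",
    "run_type": "AUTO",
    "feedback_expansion": "NOFB",
    "modality": "TXTIMG",
})
```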
Query language:
Used to specify the query language used in the run. Only English queries will be provided this year, so the language code indicating the query language should be English (EN).
Annotation language:
Used to specify the target language (i.e., the annotation set) used for the run. Only English annotation will be provided this year, so the language code indicating the target language should be English (EN).
Query/run type:
We distinguish between manual (MAN) and automatic (AUTO) submissions. Automatic runs involve no user interaction, whereas manual runs are those in which a human has been involved in query construction or the iterative retrieval process, e.g., manual relevance feedback. We encourage groups who wish to investigate manual intervention further to participate in the interactive evaluation (iCLEF) task.
Feedback or Query Expansion:
Used to specify whether the run involves relevance feedback (FB) or query expansion (QE) techniques, both of them (FBQE), or neither (NOFB).
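As one concrete example of a feedback/expansion technique, here is a hedged sketch of Rocchio-style query expansion over bag-of-words vectors; the weights (`alpha`, `beta`) and the term-vector representation are illustrative assumptions, not prescribed by the task.

```python
# Rocchio-style pseudo-relevance feedback: move the query vector toward
# the centroid of the top-ranked (assumed relevant) documents.
from collections import Counter

def rocchio_expand(query_terms: Counter, feedback_docs: list[Counter],
                   alpha: float = 1.0, beta: float = 0.75) -> Counter:
    """Return an expanded query vector (illustrative weights)."""
    expanded = Counter()
    for term, weight in query_terms.items():
        expanded[term] += alpha * weight
    for doc in feedback_docs:
        for term, weight in doc.items():
            expanded[term] += beta * weight / len(feedback_docs)
    return expanded

# Example: expand a two-term query with two pseudo-relevant captions.
query = Counter({"mountain": 1.0, "lake": 1.0})
docs = [Counter({"mountain": 2, "snow": 1}), Counter({"lake": 1, "alps": 1})]
print(rocchio_expand(query, docs).most_common(5))
```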
Modality:
This describes the use of visual (image), text, or concept features in your submission. A text-only run will have modality text (TXT); a purely visual run will have modality image (IMG); a concept-based run will have modality concept (CON); and a combined submission (e.g., an initial text search followed by a possibly combined visual search) will have as its modality any combination thereof: text+image (TXTIMG), text+concept (TXTCON), image+concept (IMGCON), and text+image+concept (TXTIMGCON).
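For illustration, a combined-modality run (e.g., TXTIMG) could be produced by late fusion of per-modality scores. The sketch below uses weighted linear fusion after min-max normalisation; the weight and the score dictionaries are illustrative assumptions, and the task does not prescribe any particular fusion method.

```python
# Weighted linear fusion of text and image retrieval scores (a sketch).

def minmax(scores: dict[str, float]) -> dict[str, float]:
    """Rescale scores to [0, 1] so the two modalities are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def fuse(text_scores: dict[str, float], image_scores: dict[str, float],
         w_text: float = 0.7) -> list[tuple[str, float]]:
    """Linearly combine normalised text and image scores per image."""
    text_n, image_n = minmax(text_scores), minmax(image_scores)
    docs = set(text_n) | set(image_n)
    fused = {d: w_text * text_n.get(d, 0.0) + (1 - w_text) * image_n.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: fuse two small score lists into a single TXTIMG-style ranking.
print(fuse({"img1": 12.0, "img2": 7.5}, {"img2": 0.9, "img3": 0.4}))
```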