Experiments are performed as follows: participants are given topics, from which they create queries that are used to perform retrieval on the image collection. This process iterates (e.g., possibly involving relevance feedback) until the participants are satisfied with their runs. Participants might try different methods to increase the number of relevant images in the top N rank positions (e.g., query expansion). A minimal sketch of this loop is shown below.
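The following is a minimal sketch of the iterate-until-satisfied process described above, not a prescribed implementation; the `search` function, its caption-overlap scoring, and the sample collection are hypothetical placeholders for illustration.

```python
# Sketch of the iterative retrieval loop. The ranker below is a
# hypothetical stand-in: it scores images by word overlap between
# the query and the image caption.

def search(query, collection, top_n=20):
    terms = set(query.lower().split())
    scored = [(len(terms & set(img["caption"].lower().split())), img["id"])
              for img in collection]
    return [img_id for score, img_id in sorted(scored, reverse=True)[:top_n]]

collection = [
    {"id": "img1", "caption": "a red double-decker bus in London"},
    {"id": "img2", "caption": "portrait of a cat"},
]

query = "red bus"
satisfied = False
while not satisfied:
    ranking = search(query, collection)
    # In a manual run the participant would inspect the ranking and
    # possibly reformulate the query (e.g., via relevance feedback);
    # here we simply stop after one iteration.
    satisfied = True
print(ranking)
```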
Participants are free to experiment with whatever methods they wish for image retrieval, e.g., query expansion based on thesaurus lookup or relevance feedback, indexing and retrieval on only part of the image caption, different retrieval models, and combining text-based and content-based methods. Given the many possible approaches to ad-hoc retrieval, rather than list them all we ask participants to indicate which of the following codes apply to each of their runs (we consider these the "main" dimensions that define the query for this ad-hoc task):
| Dimension | Available codes |
| --- | --- |
| Topic language | EN |
| Annotation language | EN |
| Query/run type | AUTO, MAN |
| Feedback/expansion | FB, QE, FBQE, NOFB |
| Modality | IMG, TXT, CON, TXTIMG, TXTCON, IMGCON, TXTIMGCON |
| Topic field | TITLE, IMG_Q, TITLEIMG_Q |
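The sketch below shows one way a run's metadata could be checked against the codes in the table above; the dictionary-based run description is an illustrative assumption, not an official submission format.

```python
# Sketch: validating the dimension codes attached to a run.
# The code sets mirror the table above.

DIMENSIONS = {
    "topic_language": {"EN"},
    "annotation_language": {"EN"},
    "run_type": {"AUTO", "MAN"},
    "feedback": {"FB", "QE", "FBQE", "NOFB"},
    "modality": {"IMG", "TXT", "CON", "TXTIMG",
                 "TXTCON", "IMGCON", "TXTIMGCON"},
    "topic_field": {"TITLE", "IMG_Q", "TITLEIMG_Q"},
}

def validate_run(run):
    for dim, codes in DIMENSIONS.items():
        if run[dim] not in codes:
            raise ValueError(f"{dim}: {run[dim]!r} not in {sorted(codes)}")

validate_run({
    "topic_language": "EN",
    "annotation_language": "EN",
    "run_type": "AUTO",
    "feedback": "NOFB",
    "modality": "TXT",
    "topic_field": "TITLE",
})
```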
Topic language:
Specifies the language of the topics (queries) used in the run. Only English topics will be provided this year, so this code should be English (EN).
Annotation language:
Specifies the target language (i.e., the annotation set) used for the run. Only English annotations will be provided this year, so this code should be English (EN).
Query/run type:
We distinguish between manual (MAN) and automatic (AUTO) submissions. Automatic runs involve no user interaction, whereas manual runs are those in which a human has been involved in query construction or the iterative retrieval process, e.g., manual relevance feedback is performed. A good description of the differences between these run types is provided by TRECVID.
Feedback or Query Expansion:
Specifies whether the run involves query expansion (QE), relevance feedback (FB), both (FBQE), or neither (NOFB).
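As an illustration of one common QE/FB technique, the sketch below implements blind (pseudo-) relevance feedback: the captions of the top-ranked images are assumed relevant, and their most frequent new terms are appended to the query. The term-frequency weighting and cut-off are illustrative choices, not prescribed by the task.

```python
# Sketch of pseudo-relevance feedback / query expansion:
# expand the query with frequent terms from top-ranked captions.

from collections import Counter

def expand_query(query, top_captions, n_terms=3):
    original = set(query.lower().split())
    counts = Counter(
        term
        for caption in top_captions
        for term in caption.lower().split()
        if term not in original
    )
    expansion = [term for term, _ in counts.most_common(n_terms)]
    return query + " " + " ".join(expansion)

print(expand_query("red bus",
                   ["a red double-decker bus in London",
                    "red bus at a London bus stop"]))
# -> e.g. "red bus a london double-decker"
```

A stopword list would normally be applied before counting terms; it is omitted here to keep the sketch short.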
Modality:
This describes the use of visual (image), textual, or concept features in your submission. A text-only run has modality text (TXT), a concept-only run has modality concept (CON), and a purely visual run has modality image (IMG). Combined submissions (e.g., an initial text search followed by a possibly combined visual search) take any combination thereof as modality: text+image (TXTIMG), text+concept (TXTCON), image+concept (IMGCON), or text+image+concept (TXTIMGCON).
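One simple way to produce a combined run such as TXTIMG is weighted linear score fusion, sketched below; the fusion weight and the assumption of pre-normalised scores are illustrative, and participants may combine modalities however they wish.

```python
# Sketch of a TXTIMG run: fuse a text ranker's scores with a visual
# ranker's scores by weighted linear combination. Scores are dicts
# mapping image id -> score, assumed min-max normalised to [0, 1].

def fuse(text_scores, image_scores, w_text=0.7):
    ids = set(text_scores) | set(image_scores)
    return sorted(
        ids,
        key=lambda i: (w_text * text_scores.get(i, 0.0)
                       + (1 - w_text) * image_scores.get(i, 0.0)),
        reverse=True,
    )

print(fuse({"img1": 0.9, "img2": 0.2}, {"img2": 0.8, "img3": 0.5}))
# -> ['img1', 'img2', 'img3']
```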
Topic field:
This specifies the topic fields employed in the run: only the title field of the topic (TITLE); only the example images in the topic (IMG_Q); or both the title and image fields (TITLEIMG_Q).
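The sketch below shows how these three codes map onto query construction; the topic structure used here is a hypothetical illustration, not the official topic format.

```python
# Sketch: building a query from the topic fields selected by the
# TITLE, IMG_Q and TITLEIMG_Q codes.

topic = {
    "title": "red double-decker bus",
    "example_images": ["example1.jpg", "example2.jpg"],
}

def build_query(topic, topic_field):
    if topic_field == "TITLE":
        return {"text": topic["title"], "images": []}
    if topic_field == "IMG_Q":
        return {"text": "", "images": topic["example_images"]}
    if topic_field == "TITLEIMG_Q":
        return {"text": topic["title"], "images": topic["example_images"]}
    raise ValueError(f"unknown topic field code: {topic_field}")

print(build_query(topic, "TITLEIMG_Q"))
```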