ImageCLEF's Wikipedia Retrieval task provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate retrieval approaches in the context of a large and heterogeneous collection of images (similar to those encountered on the Web) that are searched for by users with diverse information needs.
In 2010, ImageCLEF's Wikipedia Retrieval will use a new collection of over 237,000 Wikipedia images that cover diverse topics of interest. These images are associated with unstructured and noisy textual annotations in English, French, and German.
This is an ad-hoc image retrieval task; the evaluation scenario is thereby similar to the classic TREC ad-hoc retrieval task and the ImageCLEF photo retrieval task: simulation of the situation in which a system knows the set of documents to be searched, but cannot anticipate the particular topic that will be investigated (i.e. topics are not known to the system in advance). The goal of the simulation is: given a textual query (and/or sample images) describing a user's (multimedia) information need, find as many relevant images as possible from the Wikipedia image collection.
Any method can be used to retrieve relevant documents. We encourage the use of both concept-based and content-based retrieval methods and, in particular, multimodal and - new this year - multilingual approaches that investigate the combination of evidence from different modalities and language resources.
ImageCLEF 2010 Wikipedia Collection
The ImageCLEF 2010 Wikipedia collection consists of 237,434 images and associated user-supplied annotations. The collection was built to cover similar topics in English, German and French. Topical similarity was obtained by selecting only Wikipedia articles which have versions in all three languages and are illustrated with at least one image in each version: 44,664 such articles were extracted from the September 2009 Wikipedia dumps, containing a total of 265,987 images. Since the collection is intended to be freely distributed, we decided to remove all images with unclear copyright status. After this operation, duplicate elimination, and some additional cleaning up, the remaining number of images in the collection is 237,434, with the following language distribution:
-English only: 70,127
-German only: 50,291
-French only: 28,461
-English and German: 26,880
-English and French: 20,747
-German and French: 9,646
-English, German and French: 22,899
-Language undetermined: 8,144
-No textual annotation: 239
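The per-language counts above can be checked against the stated collection size; a quick sanity check (the category labels simply mirror the list above):

```python
# Per-language annotation counts for the ImageCLEF 2010 Wikipedia
# collection, as listed above. Summing them should reproduce the
# stated total of 237,434 images.
counts = {
    "English only": 70127,
    "German only": 50291,
    "French only": 28461,
    "English and German": 26880,
    "English and French": 20747,
    "German and French": 9646,
    "English, German and French": 22899,
    "Language undetermined": 8144,
    "No textual annotation": 239,
}
total = sum(counts.values())
print(total)  # 237434
```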
The main difference between the ImageCLEF 2010 Wikipedia collection and the INEX MM collection (Westerveld and van Zwol, 2007) used in the previous WikipediaMM tasks is that the multilingual aspect has been reinforced, so that both mono- and cross-lingual evaluations can be carried out. Another difference is that this year participants will receive, for each image, both its user-supplied annotation and links to the article(s) that contain the image. Finally, in order to encourage multimodal approaches, three types of low-level image features were extracted using PIRIA, CEA LIST's image indexing tool (Joint et al., 2004), and are provided to all participants.
(Joint et al., 2004) M. Joint, P.-A. Moëllic, P. Hède, P. Adam. PIRIA: a general tool for indexing, search and retrieval of multimedia content. In Proceedings of SPIE, 2004.
(Westerveld and van Zwol, 2007) T. Westerveld and R. van Zwol. The INEX 2006 Multimedia Track. In N. Fuhr, M. Lalmas, and A. Trotman, editors, Advances in XML Information Retrieval: Fifth International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2006, Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (LNCS/LNAI). Springer-Verlag, 2007.
Two examples that illustrate the images in the collection and their metadata are provided below:
DOWNLOAD (participants only - the login/password are listed in the "Detail" view of the collection in the ImageCLEF registration system and are available only to registered participants who have also signed the End User Agreement)
The characteristics of the new Wikipedia collection allow for the investigation of the following objectives:
The topics for the ImageCLEF 2010 Wikipedia Retrieval task will include (i) topics based on analysis of a search engine's logs, and (ii) topics used in previous years.
DOWNLOAD (participants only)
The 2010 topics:
The topics are multimedia queries that can consist of a textual and a visual part. Concepts that might be needed to constrain the results should be added to the title field. An example topic in the appropriate format is the following:
<number> 1 </number>
<title xml:lang="en">historic castle</title>
<title xml:lang="de">historisches schloss</title>
<title xml:lang="fr">château fort historique</title>
<image> castle.jpg </image>
Therefore, the topics include the following fields:
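A topic in this format can be read with standard XML tooling; a minimal sketch is shown below. The enclosing `<topic>` wrapper element is an assumption added here for well-formedness (the distributed topic files may use a different wrapper), and note that `xml:lang` must be accessed via its namespace-qualified name:

```python
import xml.etree.ElementTree as ET

# Example topic in the format shown above; the <topic> wrapper is an
# assumption made here so the fragment parses as a single element.
topic_xml = """<topic>
<number> 1 </number>
<title xml:lang="en">historic castle</title>
<title xml:lang="de">historisches schloss</title>
<title xml:lang="fr">château fort historique</title>
<image> castle.jpg </image>
</topic>"""

# xml:lang is namespace-qualified in ElementTree's attribute names.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

topic = ET.fromstring(topic_xml)
number = topic.findtext("number").strip()
titles = {t.get(XML_LANG): t.text.strip() for t in topic.findall("title")}
image = topic.findtext("image").strip()
print(number, titles["en"], image)  # 1 historic castle castle.jpg
```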
Experiments are performed as follows: the participants are given topics, which they use to create queries that are run against the image collection. This process may iterate (e.g. involving relevance feedback) until they are satisfied with their runs. Participants might try different methods to increase the number of relevant images in the top N rank positions (e.g., query expansion).
Participants are free to experiment with whatever methods they wish for image retrieval, e.g., query expansion based on thesaurus lookup or relevance feedback, indexing and retrieval on only part of the image caption, different retrieval models, and combining text- and content-based methods. Given the many possible approaches to this ad-hoc retrieval task, rather than list them all we ask participants to indicate which of the following applies to each of their runs (we consider these the "main" dimensions that define a query for this ad-hoc task):
Annotation language: Used to specify the target language (i.e., the annotation set) used for the run: English (EN), German (DE), French (FR) and their combinations.
Query/run type: We distinguish between manual (MAN) and automatic (AUTO) submissions. Automatic runs involve no user interaction, whereas manual runs are those in which a human has been involved in query construction and the iterative retrieval process, e.g. manual relevance feedback is performed. A description of the differences between these run types is provided by TRECVID here.
Feedback or Query Expansion: Used to specify whether the run involves query expansion (QE) or feedback (FB) techniques, both of them (QEFB) or none of them (NOFB).
Modality: This describes the use of visual (image), text features or concepts in your submission. A text-only run will have modality text (TXT) and a purely visual run will have modality image (IMG). Combined submissions (e.g., an initial text search followed by a possibly combined visual search) will have as modality: text+image (TXTIMG).
Query field: This specifies the topic fields employed in the run: only the title field of the topic (TITLE); only the example images in the topic (IMG_Q); both the title and image fields (TITLEIMG_Q).
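The four dimensions above can be combined into a single run label for bookkeeping. The helper below is purely hypothetical (the task does not prescribe a naming scheme), and the codes for annotation-language combinations (e.g. "ENDE") are assumptions, since the text only says "and their combinations":

```python
# Hypothetical run-label helper; the allowed values mirror the run
# dimensions described above. Combined language codes such as "ENDE"
# are an assumption, not an official convention.
ANNOTATION_LANGS = {"EN", "DE", "FR", "ENDE", "ENFR", "DEFR", "ENDEFR"}
RUN_TYPES = {"MAN", "AUTO"}
FEEDBACK = {"QE", "FB", "QEFB", "NOFB"}
MODALITIES = {"TXT", "IMG", "TXTIMG"}
QUERY_FIELDS = {"TITLE", "IMG_Q", "TITLEIMG_Q"}

def describe_run(lang, run_type, feedback, modality, field):
    """Validate each dimension and join them into one run label."""
    for value, allowed in [(lang, ANNOTATION_LANGS), (run_type, RUN_TYPES),
                           (feedback, FEEDBACK), (modality, MODALITIES),
                           (field, QUERY_FIELDS)]:
        if value not in allowed:
            raise ValueError(f"unexpected dimension value: {value!r}")
    return "-".join([lang, run_type, feedback, modality, field])

print(describe_run("EN", "AUTO", "NOFB", "TXT", "TITLE"))
# EN-AUTO-NOFB-TXT-TITLE
```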
Participants can submit as many system runs as they would like. The submission system will open at the beginning of June.
Participants are required to submit ranked lists of (up to) the top 1000 images, ranked in descending order of similarity (i.e. the most similar images nearest the top of the list). The format of submissions for this ad-hoc task is the TREC format. It can be found here.
Please note that there should be at least 1 document entry in your results for each topic (i.e. if your system returns no results for a query then insert a dummy entry, e.g. 25 1 16019 0 4238 xyzT10af5 ). The reason for this is to make sure that all systems are compared with the same number of topics and relevant documents. Submissions not following the required format will not be evaluated.
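A minimal sketch (not official tooling) of writing a run in this format, padding any topic with no results with a single dummy entry as required above. The field layout mirrors the example line "25 1 16019 0 4238 xyzT10af5" (topic, iteration, image id, rank, score, run tag); the helper name, arguments, and dummy values are assumptions for illustration:

```python
# Hypothetical TREC-format run writer. Each line is:
#   <topic> <iteration> <image_id> <rank> <score> <run_tag>
# mirroring the dummy-entry example "25 1 16019 0 4238 xyzT10af5".
def write_trec_run(results, topic_ids, run_tag, path):
    """results: dict mapping topic_id -> list of (image_id, score) pairs,
    already sorted by descending score; at most 1000 entries per topic
    are written, and topics with no results get one dummy entry."""
    with open(path, "w") as out:
        for tid in topic_ids:
            ranked = results.get(tid, [])[:1000]
            if not ranked:
                # Dummy entry so the topic still appears in the run.
                ranked = [("16019", 4238)]
            for rank, (image_id, score) in enumerate(ranked):
                out.write(f"{tid} 1 {image_id} {rank} {score} {run_tag}\n")

# Usage with made-up results: topic 2 has no hits, so it gets a dummy line.
write_trec_run({1: [("44277", 0.97), ("1337", 0.85)]}, [1, 2], "myRun1", "run.txt")
```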
The schedule can be found here: