Important Dates

The LSC'20 workshop will take place on Tuesday 9th June 2020, the first day of the ACM ICMR'20 conference, at the Radisson Hotel on Golden Lane, Dublin City. The workshop will be held in Goldsmith Hall 1 and will run for the full day, organised in four distinct phases:
- Setup and testing. Each team will be given the opportunity to test their interactive search engine and connect to the competition server prior to the official start of the workshop.
- System / paper presentations. Each participating team will be given the opportunity to present their system at the start of the workshop.
- Expert Session. A closed session in which the expert runs take place.
- Public Session. An open session that is co-timed and co-located with the ICMR'20 welcome reception. This session will include a few expert runs and the novice runs, in which a novice user (one of the ICMR'20 attendees) will use each team's system to execute a number of queries, quantifying how usable each system is for first-time users.



Configuration of the Challenge


Each team will be given a desk and chair arranged in an arc around a central screen, which shows the queries. The room allocated to the LSC in 2020 is large enough to facilitate a wide arc around the central screen. To see how this was done in previous years, see the
image library from LSC'18 & 19. Each desk will have a full-HD monitor (either 24 or 27 inches) with an HDMI connection. There will also be a dedicated wifi network for the challenge, allowing participants to connect to the LSC Server and access the general WWW (if needed).

During the search challenge, participating teams submit an item (image) to a host server for evaluation whenever a participant finds a potentially relevant item in the collection. The host server maintains a countdown clock and evaluates all submissions in real time against a provided groundtruth. For each topic, a score is awarded based on the time taken to find the relevant content and the number of incorrect items previously submitted by that team for that topic. A maximum of 3 or 5 minutes is allowed per topic (see below). Throughout the competition, an overall score is maintained for each team: the sum of the scores of all topics processed up to that point. Teams will be ranked both by overall score (expert & novice runs combined) and by novice score. For more detail on the scoring function and the LSC service, please see the
LSC'18 overview paper.
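
For illustration, the sketch below shows how a per-topic score of this kind might be computed. The constants and the linear time decay are assumptions made for this example only; the official scoring function is the one defined in the LSC'18 overview paper.

```python
# Hypothetical sketch of the per-topic scoring described above. The
# constants and the linear time decay are illustrative assumptions only;
# the official scoring function is defined in the LSC'18 overview paper.

MAX_SCORE = 100      # assumed score for an instant correct submission
MIN_SCORE = 50       # assumed score for a correct submission at the time limit
WRONG_PENALTY = 10   # assumed deduction per incorrect submission

def topic_score(seconds_elapsed: float, time_limit: float,
                wrong_submissions: int, found: bool) -> float:
    """Score one topic: decays with elapsed time, penalised per wrong item."""
    if not found:
        return 0.0
    time_factor = max(0.0, 1.0 - seconds_elapsed / time_limit)
    raw = MIN_SCORE + (MAX_SCORE - MIN_SCORE) * time_factor
    return max(0.0, raw - WRONG_PENALTY * wrong_submissions)

# A team's overall score is the sum over all topics processed so far.
def overall_score(per_topic_scores):
    return sum(per_topic_scores)
```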

There will be 24 official English-language topics, which will be textual in nature and, as in previous LSC competitions, will be temporally enhancing queries: every query consists of a title and 6 increasingly detailed descriptions, one revealed every 30 seconds. For expert runs, a total of 3 minutes will be allocated per topic; for novice runs, a total of 5 minutes. The topics will be revealed only at the challenge itself. The
LSC'19 topics are available for system testing. Please note that the LSC'20 topics will not be released in XML format until after the workshop; during the workshop, the topics are first seen on the central screen.
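
To make the topic format concrete, the sketch below models a temporally enhancing topic as a title plus six timed hints. The class, field names, and example text are invented for illustration and are not the official LSC'20 topic format.

```python
# Illustrative model of a temporally enhancing topic: a title plus six
# increasingly detailed hints, one revealed every 30 seconds. The class,
# field names, and example text are hypothetical.

from dataclasses import dataclass

@dataclass
class Topic:
    title: str
    descriptions: list          # six hints, in order of increasing detail
    reveal_interval: int = 30   # seconds between successive reveals

    def visible_hints(self, seconds_elapsed: int) -> list:
        """Return the hints visible to a team at a given point in the run."""
        revealed = 1 + seconds_elapsed // self.reveal_interval
        return self.descriptions[:min(revealed, len(self.descriptions))]

# Example: in a 3-minute expert run, all six hints are visible by 150s.
topic = Topic(
    title="Coffee before a flight",  # invented example topic
    descriptions=[
        "I was drinking coffee.",
        "I was drinking coffee at an airport gate.",
        "It was early morning and I was travelling alone.",
        "The airport was in Europe.",
        "I had a red suitcase beside me.",
        "My boarding pass was on the table.",
    ],
)
print(topic.visible_hints(65))  # -> first three hints
```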


Challenge Rules


Participating teams can develop any type of interactive retrieval system they wish, without restriction. Query input to each system is likewise unrestricted (save for the rules below): some systems may accept text queries, others faceted queries, others spoken input, and others relevance feedback. There is no limit on how interaction can be facilitated.

Each retrieval system is expected to have pre-indexed the
LSC'20 dataset prior to arriving at the workshop. All aspects of the dataset can be indexed, and the integration of additional semantic and concept detectors is allowed, once details of these are provided (a minimal indexing sketch follows the rules below). The rules of participation are intended to facilitate a fair competition and are listed as:
- Each system can be used by one expert or one novice user at a time.
- Taking photos of any image (e.g., another team's submission) from the central screen for use as query input is prohibited.
- During the novice runs, the expert users are not allowed to coach or guide the novices. There will be a dedicated period of time for experts to train novices in how to use their systems.
- Each incorrect submission will be penalised according to the scoring algorithm, so it is recommended not to submit any result to the central server unless the user is confident that the result is likely to be correct.
- The relevance judgements provided with each topic are final and cannot be questioned or queried. Every effort will be made to ensure that relevance judgements are not ambiguous.
- It is prohibited for any system developer (who acts as the expert user for their system) to familiarise themselves with the dataset to such an extent that they gain an unfair advantage over other competitors.
- Interaction mechanisms employed should not interfere with, or disturb, other participants. For any participant bringing a novel VR-type system, a dedicated floor space will be prepared that is sufficiently distant from the other participants.
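
As noted above, each system is expected to pre-index the LSC'20 dataset before the workshop. The sketch below illustrates one minimal way this might be done; the directory layout, the concept-file format, and all names here are assumptions for illustration, as the actual dataset ships with its own metadata and annotation formats.

```python
# Minimal pre-indexing sketch. The directory layout, the concept-file
# format, and all names here are hypothetical; the actual LSC'20 dataset
# provides its own metadata and annotation formats.

import json
from pathlib import Path

def build_index(image_dir: Path, concepts_file: Path) -> dict:
    """Map each image ID to the fields a search engine might query:
    its file path and the visual concepts detected in it."""
    # Assumed concept-file format: {"image_id": ["concept", ...], ...}
    concepts = json.loads(concepts_file.read_text())
    index = {}
    for path in sorted(image_dir.glob("**/*.jpg")):
        image_id = path.stem
        index[image_id] = {
            "path": str(path),
            "concepts": concepts.get(image_id, []),
        }
    return index
```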