SLT Challenge

Overview

In years two and three, our students work together on an SLT Challenge.

The SLT research community has pioneered the use of international shared task challenges to enable comparable and reproducible research. Examples include those run at Interspeech 2020, SemEval 2021, and the Text Analysis Conference.

Teams work on common datasets and task definitions to compare the performance of algorithms under controlled conditions. Such challenges provide an excellent opportunity to teach core technical skills as well as scientific methodology and rigour.

As part of a CDT challenge team, you will organise and execute a complete challenge once a year. This spans everything from data collection and preparation to the distribution of baseline algorithms and scoring tools, assessment of outcomes, and publication in appropriate venues.

Stage one: Challenge review

In this stage, you will work individually to survey challenges that are of interest to you and, ideally, relevant to your PhD topic. From these, you will select one that you’d like to pursue and document that challenge.

You will be able to consider both live and historic challenges. Historic challenges are likely to be more convenient, as there will be no external deadlines to meet, but you can still publish a paper on your results.

Stage two: Challenge team building

In this stage, you will pitch your preferred challenge to the rest of the cohort at a group event. The goal is to entice other members of the cohort to work with you on your challenge.

Following the event, you will finalise your team and confirm an academic who will provide advice to the team.

Stage three: Challenge participation

In this stage, your team will develop the system to address the selected challenge’s problem.

You will meet regularly during this period and call on your academic advisor as necessary. You are encouraged to engage with the challenge as though it were live, following the rules for system training and development and for final evaluation and submission.

At least one of your systems should comply with the challenge rules, but, particularly for historic challenges, you shouldn't be discouraged from experimenting beyond them.

In live challenges, much is often learned after the official evaluation by playing with the constraints. For example, how much better would performance be if the training data constraints were relaxed?

You will document your system according to the challenge’s paper rules and upload your code to a private GitHub repository.

Stage four: Challenge feedback

This stage gives you the opportunity to consider all the cohort’s challenge submissions from the perspective of organisers.

You will receive the system description paper from each of the other teams. Playing the role of challenge organiser, you will check that each submission adheres to the challenge rules and consider whether it could be included in a challenge workshop event.