Our challenge has reached the final evaluation stage. The Test set has been released, and its download link is in the Dataset section. Please be sure to submit your prediction results on the Submission page.
As cars become an indispensable part of daily life, a safe and comfortable driving environment is increasingly desirable. Touch-based interaction in the traditional cockpit easily distracts the driver, leading to inefficient operation and potential safety risks. Thus, the concept of the intelligent cockpit is gradually on the rise.
The intelligent cockpit aims to achieve a seamless driving experience by integrating multimodal intelligent interactions, such as speech, gestures, and body movements, with different driving functions, such as command recognition, entertainment, and navigation. As a natural human-computer interaction method, a robust speech or command recognition system is crucial to the intelligent cockpit. Although speech recognition has made great progress in many applications, the driving scenario still poses many challenges. First, the acoustic environment of the cockpit is complex. Since the cockpit is a closed and irregular space, it has a distinctive room impulse response (RIR), resulting in unusual reverberation conditions. In addition, driving introduces various kinds of noise from both inside and outside the car, such as wind, engine, and wheel noise, background music, and interfering speakers. Second, speech interaction in the intelligent cockpit mainly involves recognizing user commands, such as controlling the air conditioner, playing songs, and navigating. These commands may contain a large number of named entities such as contacts, singer names, and points of interest (POI).
Nowadays a large amount of open-source data is available for speech recognition, and models trained on it achieve good performance in many applications. However, such models often perform poorly in the intelligent cockpit scenario because of its special acoustic environment and content characteristics. Therefore, we launch the Intelligent Cockpit Speech Recognition Challenge (ICSRC), in which we release an intelligent cockpit dataset and aim to explore speech recognition techniques for intelligent cockpit scenarios. The corpus consists of 20 hours of real-world data recorded by a Hi-Fi microphone placed in a car under different driving conditions. The competition consists of two tracks with different limits on model configuration.
We set up two tracks for participants to investigate intelligent cockpit speech recognition under different model size constraints.
Both tracks allow participants to use the training data listed in the Dataset section. Participants must indicate the data they used in the final system description paper and describe their data simulation scheme in detail; one common simulation scheme is sketched below.
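For teams that augment the open-source corpora with simulated in-car data, a minimal sketch of one common scheme follows: convolve clean speech with a car room impulse response (RIR) and mix in driving noise at a target SNR. This is only an illustration under assumptions (all signals are 16 kHz mono NumPy arrays already loaded; `simulate_in_car` and its parameters are hypothetical names, not challenge tooling).

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_in_car(clean: np.ndarray, rir: np.ndarray,
                    noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Simulate reverberant, noisy in-car speech from clean speech.

    All inputs are 1-D float arrays at the same sample rate.
    """
    # Reverberation: convolve with the car's room impulse response,
    # trimmed to the clean length so transcripts stay aligned.
    reverb = fftconvolve(clean, rir)[: len(clean)]

    # Tile or trim the noise clip to cover the whole utterance.
    reps = int(np.ceil(len(reverb) / len(noise)))
    noise = np.tile(noise, reps)[: len(reverb)]

    # Scale the noise so the mixture reaches the target SNR.
    speech_power = np.mean(reverb ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return reverb + scale * noise
```

Varying the RIR, the noise type (wind, engine, music), and the SNR per utterance yields training data closer to the cockpit conditions described above.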
The accuracy of the ASR system is measured by the Character Error Rate (CER), which indicates the percentage of characters that are incorrectly predicted. For a given hypothesis, it computes the minimum number of insertions (Ins), substitutions (Subs), and deletions (Del) of characters required to obtain the reference transcript. Specifically, CER is calculated by

$$\mathrm{CER} = \frac{N_{\mathrm{Ins}} + N_{\mathrm{Subs}} + N_{\mathrm{Del}}}{N_{\mathrm{Total}}} \times 100\%$$

where $N_{\mathrm{Ins}}$, $N_{\mathrm{Subs}}$, and $N_{\mathrm{Del}}$ are the numbers of the three error types, and $N_{\mathrm{Total}}$ is the total number of characters in the reference. As is standard, insertions, deletions, and substitutions all count as errors.
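As a concrete reference, the sketch below computes CER with a character-level Levenshtein distance. The official scoring may apply additional text normalization, so treat this as illustrative only.

```python
def cer(ref: str, hyp: str) -> float:
    """CER = (N_Ins + N_Subs + N_Del) / N_Total * 100, via edit distance."""
    m, n = len(ref), len(hyp)
    # dp[j] holds the edit distance between a prefix of ref and hyp[:j].
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i           # prev = distance(ref[:i-1], hyp[:j-1])
        for j in range(1, n + 1):
            cur = dp[j]                  # distance(ref[:i-1], hyp[:j])
            if ref[i - 1] == hyp[j - 1]:
                dp[j] = prev             # characters match: no new error
            else:
                # substitution, deletion, or insertion, whichever is cheapest
                dp[j] = 1 + min(prev, dp[j], dp[j - 1])
            prev = cur
    return 100.0 * dp[n] / max(m, 1)

print(f"{cer('打开空调', '打开空铁调'):.2f}")  # one insertion in 4 chars -> 25.00
```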
The challenge dataset contains 20 hours of speech data in total. It was collected in a new energy vehicle with a Hi-Fi microphone placed on the car's display screen. During recording, the speakers sit in the passenger seats, about 0.5 m from the microphone. All speakers are native Chinese speakers of Mandarin without strong accents. During driving, the driver may change speed, open windows, and play music, covering various scenes and conditions. The dataset can be divided into five categories:
The detailed statistics of the dataset are shown in Table 1.
Category | Percentage |
---|---|
Air Conditioner | 15% |
Phone Call | 10% |
Music | 15% |
POI | 15% |
Others | 45% |
In this challenge, the data is divided into a 10-hour Eval set for evaluation and a 10-hour Test set for scoring and ranking. Both sets contain 50 speakers with balanced gender coverage. The Eval set will be released to participants at the beginning of the challenge, while the Test set will be released at the final scoring stage. For training, participants may use only the following open-source corpora from OpenSLR.
All participants should adhere to the following rules to be eligible for the challenge.
Potential participants from both academia and industry should send an email to azhang@nwpu-aslp.org to register for the challenge no later than September 10, meeting the following requirements:
The organizer will notify qualified teams via email within 3 working days. Qualified teams must follow the challenge rules.
We provide a baseline system implemented with the WeNet toolkit; the training recipes are available in the repository here.
Participants should submit their results via the submission system. Once a submission is completed, it will be shown on the Leaderboard, where all participants can check their positions. For each track, participants may submit results no more than 3 times a day.
The ICSRC 2022 final ranking lists for the two tracks are shown below:
Rank | TeamID | Team Name | Organization | CER(%) |
---|---|---|---|---|
1 | T044 | Tooong | -- | 10.66 |
2 | T002 | 一汽红旗语音(FawAISpeech) | 中国第一汽车集团,研发总院 | 12.67 |
3 | T009 | 勇敢牛牛队 | 悉尼大学 | 13.39 |
4 | T013 | METAWALL_ASR | 北京仙林智能科技有限公司 | 13.57 |
5 | T042 | dun_speech | -- | 14.24 |
6 | T027 | 15.55 | ||
7 | T045 | 15.87 | ||
8 | T039 | 16.59 | ||
9 | T016 | 18.25 | ||
10 | T014 | 20.69 |
Rank | TeamID | Team Name | Organization | CER(%) |
---|---|---|---|---|
1 | T044 | Tooong | -- | 8.94 |
2 | T009 | 勇敢牛牛队 | 悉尼大学 | 9.86 |
3 | T039 | LeVoice | 联想AI Lab | 10.20 |
4 | T002 | 一汽红旗语音(FawAISpeech) | 中国第一汽车集团,研发总院 | 10.21 |
5 | T025 | 阳光初心 | 阳光出行 | 10.91 |
6 | T045 | 11.40 | ||
7 | T027 | 12.21 | ||
8 | T041 | 12.30 | ||
9 | T024 | 12.64 | ||
10 | T050 | 13.21 |
The top-ranking teams will be invited to submit challenge papers; accepted papers will be included in the ISCSLP 2022 conference proceedings and presented in the challenge session of the technical program.