End-To-End Lip Synchronisation Based on Pattern Classification

Authors You Jin Kim, Hee Soo Heo, Soo-Whan Chung, Bong-Jin Lee
Publication IEEE Spoken Language Technology Workshop (SLT)
Month January
Year 2021
Link [Paper]

ABSTRACT

The goal of this work is to synchronise audio and video of a talking face using deep neural network models. Existing works have trained networks on proxy tasks such as cross-modal similarity learning, and then computed the similarity between audio and video frames using a sliding window approach. While these methods demonstrate satisfactory performance, the networks are not trained directly on the target task. To this end, we propose an end-to-end trained network that directly predicts the offset between an audio stream and the corresponding video stream. The similarity matrix between the two modalities is first computed from the features; the inference of the offset can then be treated as a pattern recognition problem in which the matrix is viewed as an image. The feature extractor and the classifier are trained jointly. We demonstrate that the proposed approach outperforms previous work by a large margin on the LRS2 and LRS3 datasets.
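
As a rough illustration of the approach the abstract describes, the following PyTorch sketch computes a frame-wise cosine-similarity matrix between audio and video embeddings and classifies it, image-style, with a small CNN over a set of candidate offsets. The feature extractors, layer sizes, and offset discretisation are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetClassifier(nn.Module):
    """Predict the audio-video offset from a similarity matrix.

    Treats the (T_audio x T_video) similarity matrix as a one-channel
    image and classifies it into one of `num_offsets` offset bins.
    """

    def __init__(self, num_offsets: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Linear(32 * 8 * 8, num_offsets)

    def forward(self, audio_feats, video_feats):
        # audio_feats: (B, T_a, D), video_feats: (B, T_v, D),
        # e.g. produced by audio/video feature extractors (not shown).
        a = F.normalize(audio_feats, dim=-1)
        v = F.normalize(video_feats, dim=-1)
        sim = torch.bmm(a, v.transpose(1, 2))     # (B, T_a, T_v) cosine similarities
        logits = self.fc(self.cnn(sim.unsqueeze(1)).flatten(1))
        return logits                             # one logit per candidate offset

# Joint training: cross-entropy against the true offset class propagates
# gradients through the similarity matrix into both feature extractors.
# Shapes and the offset range below are hypothetical.
model = OffsetClassifier(num_offsets=31)          # e.g. offsets of -15..+15 frames
audio = torch.randn(4, 100, 256)
video = torch.randn(4, 100, 256)
loss = F.cross_entropy(model(audio, video), torch.randint(0, 31, (4,)))
loss.backward()
```

Because the classifier is differentiable end to end, the loss supervises the similarity pattern itself rather than a proxy objective, in contrast to the sliding-window comparison used in prior work.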