Date: Jun. 19th, 2021

At CVPR 2021, I am presenting my paper on audio-visual speech separation, titled "Looking into Your Speech: Learning Cross-modal Affinity for Audio-visual Speech Separation." I have been working on this project since last year with my colleague Jiyoung Lee. I really enjoyed working on it, although it was quite challenging. If you are interested in this work, please visit the links below.

[Project] [Paper]