Machine Translation (MT) is usually viewed as a one-shot process that generates the target-language equivalent of some source text from scratch.
We consider here a more general setting that assumes an initial target sequence, which must be transformed into a valid translation of the source, thereby restoring parallelism between source and target.
For this bilingual synchronization task, we consider several architectures (both autoregressive and non-autoregressive) and training regimes, and experiment with multiple practical settings, such as simulated interactive MT, translation with Translation Memories (TMs), and TM cleaning.
Our results suggest that one single generic edit-based system, once fine-tuned, can compare with, or even outperform, dedicated systems specifically trained for these tasks.
The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022), Dec 2022, Abu Dhabi, United Arab Emirates