OpenNMT-tf 2.0: a milestone in the OpenNMT project

OpenNMT workshop held in March 2018 at Station F in Paris // Copyright SYSTRAN

SYSTRAN has been wholeheartedly involved in open source development over the past few years through the OpenNMT initiative, whose goal is to build a ready-to-use, comprehensive development framework for Neural Machine Translation (NMT) suited to both industry and research. OpenNMT ensures that state-of-the-art systems can be integrated into SYSTRAN products and motivates us to innovate continuously.

In 2017, we published OpenNMT-tf, an open source toolkit for neural machine translation. The project is integrated into SYSTRAN's model training architecture and plays a key role in producing our second generation of NMT engines.


Open Source, Multilingual AI and Artificial Neural Networks: The New Holy Grail for GAFA

Jean Senellart, CTO and CEO of SYSTRAN, explains how SYSTRAN offers an alternative to GAFA by taking advantage of open source, multilingual AI and artificial neural networks. Since 2016, there has been a sharp increase in open source machine translation projects based on neural networks, or Neural Machine Translation (NMT), led by companies such as Google, Facebook and SYSTRAN. Why have machine translation and NMT-related innovations become the new Holy Grail for tech companies? And does the future of these companies rely on machine translation?

Never before has a technological field undergone so much disruption in such a short time. Invented in the 1960s, machine translation was based on grammatical and syntactic rules until 2007. Statistical modelling (known as statistical machine translation, or SMT), which matured largely thanks to the abundance of data, then took over. Although statistical translation was introduced by IBM in the 1990s, it took 15 years for the technology to reach mass adoption. Neural Machine Translation, on the other hand, took only two years to be widely adopted by the industry after being introduced by academia in 2014, showing how much innovation has accelerated in this field. Machine translation is currently experiencing a technological golden age.

From Big Data to Good Data

Not only have these successive waves of technology differed in their pace of development and adoption, but their key strengths or “core values” have also changed. In rule-based translation, value was brought by code and accumulated linguistic resources. For statistical models, the amount of data was paramount. The more data you had, the better the quality of your translation and your evaluation via the BLEU score (Bilingual Evaluation Understudy, the most widely used metric for measuring machine translation quality). Now, the move to machine translation based on neural networks and Deep Learning is well underway and has brought about major changes. The engines are trained to learn language as a child does, progressing step by step. The challenge is not only to process exponentially growing data (Big Data) but, more importantly, to feed the engines the highest-quality data possible. Hence the interest in “Good Data.”
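As an aside, the BLEU score mentioned above can be computed with the open source sacrebleu package. The snippet below is a minimal sketch, assuming sacrebleu is installed; the example sentences are invented purely for illustration.

```python
# Minimal sketch: scoring a system translation against a reference with BLEU.
# Assumes the open source `sacrebleu` package is installed (pip install sacrebleu);
# the sentences below are invented for illustration only.
import sacrebleu

hypotheses = ["the cat sat on the mat"]           # system output, one string per sentence
references = [["the cat is sitting on the mat"]]  # one list per reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")  # corpus-level score between 0 and 100
```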
