As part of our webinar series, one of our latest broadcasts presented the unique Language I/O + SYSTRAN solution, created in collaboration with our partner Language I/O.
Hosted by J. Obakhan from SYSTRAN and Heather Shoemaker, CEO of Language I/O, the webinar explored the power of integrating machine translation technology into the customer care workflow.
Our webinar “Get More From SPNS9” on May 15th, 2020 was a huge success. It demonstrated six exciting new upgrades to SYSTRAN Pure Neural Server 9.6, further scaling its technological capabilities. Thank you to those who joined us.
In this post, we have compiled the highlights from the presentation, along with answers to the questions we received afterward.
The minds behind SYSTRAN sit down for an interview on the complexities and capabilities of specialized neural machine translation engines.
Participants: Peter Zoldan, Senior Data Engineer / Software Engineer, Linguistic Program; Svetlana Zyrianova, Linguistic Program; Petra Bayrami, Jr. Software Engineer, Linguistic Program; Natalia Segal, R&D Engineer.
How much data is required to create a specialized engine?
The more bilingual data, the better the quality. Broad domains such as news require millions of bilingual sentences. If the domain is narrow, such as technical support documents for specific products, even a set of around 50,000 sentences noticeably improves the quality.
In short, the amount of data required depends on the breadth of the domain you are specializing the engine for.
Language is messy. Ask any person who has ever had to learn a second language and they will tell you that the most difficult aspect isn’t learning all the rules, but understanding the exceptions to the rules — the real-world application of the language.
When it comes to protecting classified data, blackout redaction has been in use for at least a century. While it is not the only acceptable form of data sanitization, it is historically the oldest and most commonly utilized by eDiscovery firms. This is despite the fact there are more modern and easy-to-use alternatives that save time and reduce errors. The two main data sanitization alternatives that meet legal requirements include anonymization and pseudonymization.
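To make the distinction concrete, here is a minimal, purely illustrative sketch of the difference between the two approaches: pseudonymization keeps a separate mapping so data can be re-identified under controlled conditions, while anonymization is irreversible. The naive two-capitalized-words pattern and the `PERSON_n` token scheme are assumptions for illustration; real eDiscovery tools rely on named-entity recognition and secure key management.

```python
import re

# Toy sanitization sketch. The name-matching pattern is deliberately naive
# (two capitalized words in a row) and stands in for proper NER.

def pseudonymize(text, mapping=None):
    """Replace person names with reversible pseudonyms (mapping kept separately)."""
    mapping = {} if mapping is None else mapping

    def repl(match):
        name = match.group(0)
        mapping.setdefault(name, f"PERSON_{len(mapping) + 1}")
        return mapping[name]

    return re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", repl, text), mapping

def anonymize(text):
    """Irreversibly redact names: no mapping is kept, so the original is lost."""
    return re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[REDACTED]", text)

sanitized, key = pseudonymize("Jane Doe emailed John Smith about the merger.")
```

With the mapping (`key`) stored securely, the pseudonymized text can later be restored; the anonymized version cannot.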
Machine Translation users care about quality and performance. Based on our own observations and the feedback we’ve received, the quality of our Neural MT is impressive. Evaluating performance is a stickier subject, but we’d like to dig in and present our innovations and achievements and how they benefit NMT users.
By performance we mostly mean how fast and efficiently a system runs in varying production environments. It is important to note that performance and quality in Neural MT are tightly connected: it is easy to accelerate a given model by compromising on quality. Therefore, when evaluating performance improvements, we always check that quality remains very close to optimal.
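The measurement side of this can be sketched in a few lines. This is an illustrative stub, not SYSTRAN's actual benchmarking code: `translate` here is a placeholder for any real engine call, and the point is that speed (tokens per second) is measured while the outputs are kept so quality can be checked separately.

```python
import time

# Hypothetical throughput measurement. `translate` is a stub standing in
# for a real MT engine call.

def translate(sentence):
    return sentence.upper()  # placeholder, not a real engine

def measure_throughput(sentences):
    """Translate a batch and report source tokens per second."""
    start = time.perf_counter()
    outputs = [translate(s) for s in sentences]
    elapsed = time.perf_counter() - start
    tokens = sum(len(s.split()) for s in sentences)
    # Keep the outputs: a speedup only counts if their quality holds up.
    return outputs, tokens / max(elapsed, 1e-9)

outputs, tokens_per_sec = measure_throughput(["the cat sat", "on the mat"])
```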
Since switching to NMT at the end of 2016, we’ve invested our R&D efforts in optimizing our engines to be more efficient while maintaining, and even improving, translation accuracy. Our latest, 2nd-generation NMT engines, available in the latest release of SYSTRAN Pure Neural® Server, implement several technical optimizations that make translation faster and more efficient.
New model architecture
The first generation of neural translation engines was based on recurrent neural networks (RNN). This architecture requires the source text to be encoded sequentially, word by word, before generating the translation.
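The sequential constraint can be shown with a toy sketch. The arithmetic below is a stand-in for real matrix operations, but it illustrates why an RNN encoder cannot parallelize over the sentence: each hidden state depends on the previous one.

```python
# Minimal illustration of sequential RNN encoding. The update rule
# `0.5 * hidden + x` is toy arithmetic standing in for tanh(W*h + U*x).

def rnn_encode(embeddings):
    """Fold a sentence into hidden states one word at a time."""
    hidden = 0.0
    states = []
    for x in embeddings:            # strictly left-to-right: step t needs step t-1
        hidden = 0.5 * hidden + x
        states.append(hidden)
    return states

states = rnn_encode([1.0, 2.0, 3.0])
```

Because of this dependency chain, later architectures (notably the Transformer) replaced recurrence with attention so the whole source sentence can be encoded in parallel.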
Since 2016, there has been a sharp increase in open source machine translation projects based on neural networks or Neural Machine Translation (NMT) led by companies such as Google, Facebook and SYSTRAN. Why have machine translation and NMT-related innovations become the new Holy Grail for tech companies? And does the future of these companies rely on machine translation?
Never before has a technological field undergone so much disruption in such a short time. Invented in the 1960s, machine translation was first based on grammatical and syntactical rules until 2007. Statistical modelling (known as statistical translation or SMT), which matured particularly due to the abundance of data, then took over. Although statistical translation was introduced by IBM in the 1990s, it took 15 years for the technology to reach mass adoption. Neural Machine Translation on the other hand, only took two years to be widely adopted by the industry after being introduced by academia in 2014, showing the acceleration of innovation in this field. Machine translation is currently experiencing a golden age of technology.
From Big Data to Good Data
Not only have these successive waves of technology differed in their pace of development and adoption, but their key strengths or “core values” have also changed. In rule-based translation, value was brought by code and accumulated linguistic resources. For statistical models, the amount of data was paramount. The more data you had, the better the quality of your translation and your evaluation via the BLEU score (Bilingual Evaluation Understudy, the most widely used algorithm measuring machine translation quality). Now, the move to machine translation based on neural networks and Deep Learning is well underway and has brought about major changes. The engines are trained to learn language as a child does, progressing step by step. The challenge is not only to process exponentially growing data (Big Data) but more importantly to feed the engines the highest-quality data possible. Hence the interest in “Good data.”
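For readers unfamiliar with BLEU, here is a simplified sketch of the idea: n-gram precision between a candidate translation and a reference, combined with a brevity penalty. This version is an assumption-laden toy (up to bigrams, a single reference, whitespace tokenization); real evaluation uses 4-gram precision and standardized tools such as sacreBLEU.

```python
import math
from collections import Counter

# Simplified BLEU sketch: modified n-gram precision plus brevity penalty.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        # Clip counts: a candidate n-gram only scores up to its reference count.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Penalize candidates shorter than the reference.
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)

score = bleu("the cat sat on the mat", "the cat sat on the mat")
```

A perfect match scores 1.0; dropped words lower both the precision terms and the brevity penalty.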
As of January 3rd, 2018, companies in the financial industry operating in Europe are required by law to fully comply with the new MiFID II regulation. A good portion of the new rules requires translating various documents for a multilingual audience.
With that in mind, here are four reasons why you need neural machine translation to help you lead your compliance project to success:
1 – Effortlessly translate detailed information on tons of transactions
2 – Easily provide investors with multilingual research reports and articles
3 – Produce E-Learning and other company material to educate employees across the EU on complying with these new regulations
4 – Translate contracts and other official investment documents
This is the final post of 2017, a guest post by Jean Senellart, who has been a serious MT practitioner for around 40 years, with deep expertise in all the technology paradigms that have been used for machine translation. SYSTRAN has recently been running tests building MT systems with different datasets and parameters to evaluate how data and parameter variation affect MT output quality. As Jean said:
“We are continuously feeding data to a collection of models with different parameters – and at each iteration, we change the parameters. We have systems that are being evaluated in this setup for about 2 months and we see that they continue to learn.”
This is more of a vision statement about the future evolution of this (MT) technology, where systems continue to learn and improve, rather than a direct report of experimental results, and I think it is a fitting way to end the year on this blog.
It is very clear to most of us that deep learning based approaches are the way forward for continued MT technology evolution. However, skill with this technology will come from experimentation and an understanding of data quality and control parameters. Babies learn by exploration and experimentation, and maybe we need to approach our own continued learning in the same way, learning from purposeful play. Is this not the way that intelligence evolves? Many experts say that AI is going to drive learning and evolution in business practices in almost every sphere of business.
SYSTRAN’s solutions are used every day by various types of companies across many industries to get the most accurate and secure automatic translations on any type of content, from sensitive documents to websites to mobile apps and much more. We’d like to focus today on how one of our clients, Alvarez & Marsal, a consultancy firm, uses SYSTRAN’s platform to manage eDiscovery projects with the highest efficiency and accuracy.
The processes and tools used in eDiscovery scenarios are, most of the time, quite complex given the large volumes of electronic data produced. Unlike hard-copy evidence, e-documents are a lot more dynamic and contain various metadata that demand the highest translation quality in order to eliminate any claims of spoliation at any time in a litigation case.
Phil Beckett, the firm’s Managing Director, who was recently named ‘Investigation Digital Forensic Expert of the Year’ by Who’s Who Legal, talks to us about how SYSTRAN’s solutions plug into their internal processes to manage their projects end to end.
Phil Beckett – Managing Director at Alvarez & Marsal