How does ModelFront catch bad translations?
Our core technology builds on years of open scientific contributions from machine translation researchers in industry and academia, as well as major advances in deep learning models, data and infrastructure.
In research terms, we've built "massively multilingual blackbox deep learning models for quality estimation, quality evaluation and filtering", and productionized them to make them accessible and useful to more players.
We love understanding what causes bad translations at the system and process level, as well as the challenges inherent in natural human language.
The causes of translation errors are constantly evolving along with the systems, processes, languages and use cases, and we track them as they evolve.
Unlike BLEU, ModelFront works on new content or languages with no human reference translations, or even on the human reference translations themselves.
We give you risk predictions that are engine-independent, customizable, and smart about valid differences in translations, like synonyms.
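As a minimal sketch of how per-segment risk predictions can be used in practice: low-risk segments can be approved automatically while high-risk ones are routed to human review. Note that the score scale, threshold, and function names here are hypothetical illustrations, not ModelFront's actual API.

```python
# Hypothetical example: routing translated segments by a per-segment
# risk score in [0, 1]. The threshold and scores are illustrative only,
# not ModelFront's actual interface or scale.

RISK_THRESHOLD = 0.2  # assumed cutoff: above this, send to human review

def route_segments(segments):
    """Split (source, translation, risk) tuples into auto-approved
    and needs-review buckets based on the risk threshold."""
    approved, review = [], []
    for source, translation, risk in segments:
        bucket = review if risk > RISK_THRESHOLD else approved
        bucket.append((source, translation))
    return approved, review

segments = [
    ("Hello", "Bonjour", 0.05),             # low risk: auto-approve
    ("Bank account", "Banc compte", 0.85),  # high risk: human review
]
approved, review = route_segments(segments)
```

A threshold like this is typically tuned per customer and content type, which is what "customizable" means in this context.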
We use large-scale monolingual open text in hundreds of languages, curated parallel data and hand-labelled data, and we continually invest in growing and targeting those.
We actively develop technology focused on covering the known issues of the major types of translation outputs, like machine translation, crawled and aligned corpora, back-translation and human translation.
The technology and use cases in our space will continue to evolve quickly, and our approach will too.