The original generative task

Translation is also the original generative task, and it started a half decade before ChatGPT made the rest of the world notice.

The "T" in "GPT" stands for "Transformer": "GPT" is short for "Generative Pre-trained Transformer". What is a Transformer?

The Transformer

The Transformer model architecture was invented inside Google for Google Translate in 2017.

The Transformer uses an "attention mechanism" to process sequences of text, making it particularly effective for translation and other language tasks.
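For the curious, here is a minimal sketch of the core idea, scaled dot-product attention: each position scores every other position, and the output is a weighted average driven by those scores. This is a simplified illustration in pure Python (a single query vector, no learned projection matrices or multiple heads), not the full Transformer.

```python
import math

def softmax(scores):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for one query:
    # 1. score each key against the query (dot product, scaled by sqrt(d)),
    # 2. normalize the scores with softmax,
    # 3. return the weighted average of the value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query matches the first key most strongly,
# so the output leans toward the first value vector.
out = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
```

Because every position can attend to every other position in parallel, this mechanism handles long-range dependencies far better than the recurrent models that preceded it, which is exactly what translation needs.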

It was quickly launched in production and adopted by all the best translation systems.

The shift to post-editing

Translation was also the first task to shift from humans writing from scratch to post-editing: manually checking and fixing AI translations.

This had started in the early 2000s, more than a decade before the Transformer, but it really accelerated in the 2010s with the amazing progress in generation.

However, better generation alone failed to accelerate translation, because humans still had to manually check every single word, even if they only fixed a small percentage.

Verification

So really accelerating translation required verification: AI to check AI translations.

Naturally then, translation was also the first task for which AI verification was researched, rolled out and adopted.


Join the mission
Are you interested in joining the mission to accelerate human-quality translation?
Browse jobs at ModelFront