Why We Need Human Translators
If you type “translators are” into Google…
…you’ll notice several autocomplete suggestions, one of which reads: “translators are a waste of space”. Ouch! Despite the not-so-pleasant sentiment, I feel this search perfectly encapsulates how translators are often perceived today: “why do we need human translators when we’ve got Google Translate?”.
Alas, many people seem to believe that translators are a thing of the past and that MT (machine translation) systems can already produce the same quality as professional translators.
Fortunately for me and my fellow translators, that’s not the case: MT systems still have some serious issues.
But the big one is comprehension
MT systems rely primarily on syntax and grammar. They search for patterns and focus on rules rather than understanding meaning. They therefore approach translation from a completely different angle to human translators, who have a much deeper understanding of reality and the logic that underpins a given situation. MT is restricted to the words alone. Where it uses position and form to decode a sentence, we use our knowledge and experience to decipher meaning. MT systems therefore lack some of the fundamental building blocks required to grasp and reproduce natural, accurate language.
Referents are a great way to highlight exactly what I’m talking about. Humans are pretty good at understanding referents based on context. For instance, have a look at the following sentence: “The laptop wouldn’t fit in the bag because it was too big.” Now, what would you say if I asked you: what was too big? There’s no trick.
I’m sure you would say: “well, it was the laptop that was too big but that’s obvious!” And for humans, it is. We know that bags usually contain things and that laptops don’t. We understand the logic of the situation and that larger things can’t fit inside smaller ones. For us, it’s obvious what the ‘it’ refers to.
But it wouldn’t be obvious to the MT engine
MT systems don’t have this wider understanding of objects and concepts. Consequently, they are often tripped up by referents in languages where ‘it’ could be rendered differently, depending on the gender of the noun that it refers to. For example, ‘laptop’ is masculine in German (der Laptop) and ‘bag’ is feminine (die Tasche). You would therefore use a different pronoun for each (‘er’ for masculine nouns and ‘sie’ for feminine ones).
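To see why this matters mechanically, here’s a toy sketch (not a real MT component; the gender table and function names are purely illustrative) of how the German pronoun depends entirely on which noun the system decides ‘it’ refers to:

```python
# Toy illustration: German subject pronouns must agree with the
# grammatical gender of their antecedent noun.
# The lookup tables below are illustrative, not a real MT lexicon.

GENDER = {
    "Laptop": "masculine",   # der Laptop
    "Tasche": "feminine",    # die Tasche
}

PRONOUN = {"masculine": "er", "feminine": "sie", "neuter": "es"}


def german_pronoun(antecedent: str) -> str:
    """Return the German subject pronoun agreeing with the given noun."""
    return PRONOUN[GENDER[antecedent]]


# English "it was too big" becomes "er war zu groß" if 'it' refers to
# the laptop, but "sie war zu groß" if it refers to the bag. An MT
# system therefore can't translate 'it' at all until it has resolved
# the referent -- exactly the step that requires real-world knowledge.
print(german_pronoun("Laptop"))  # er
print(german_pronoun("Tasche"))  # sie
```

The point is that the hard part isn’t the lookup itself; it’s deciding which key to look up, and that decision is the world-knowledge step MT systems struggle with.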
Referencing the wrong noun or object naturally leads to confusion, but it’s not hard to imagine situations in which it could also prove life-threatening: safety instructions, medical procedures, product warnings, and so on.
Specialist MT systems perform slightly better because they are trained on a highly specialised subset of language, e.g., medical texts. This narrows the context considerably and boosts the likelihood that the system will identify and use the correct referent. Even so, they are still a long way from producing the required level of quality without some form of human involvement (e.g., post-editing).
Ultimately, all MT systems currently lack the human experience required for real understanding. AI may solve this issue in the future and recent developments have shown real potential but even the much-lauded GPT-3 still struggles with referents, for instance.
It’s going to take much more sophisticated AI algorithms and incredibly complex code to come close to real human understanding. And that’s MT’s big shortcoming: the shallowness of its comprehension.
Want to know more about how Textera can help with your translation needs? Get in touch with us here.
By Leighton Jacobs, Translation and Language Services Professional (LinkedIn)