The vodka is good but the meat is lousy

As the (very) weird and (less) wonderful 2020-2021 academic year draws to its close, many language teachers will be thinking about how best to use the whole online experience to enrich future teaching and learning. A timely moment, then, to reflect further upon online translation tools and the practical and pedagogical implications of their vertiginous development. As discussed in a previous post, it’s very tempting to ignore these tools because, at a very basic level, they seem to threaten language learning and teaching. Why bother to learn a foreign language if one day machine translation will do a better job? In fact, “why bother” seems to have become something of a leitmotif with me over the last few months (see last but one post). I blame online overload.

RG Jones’s post entitled Advanced tech: no need to learn a language? is reassuring:

“Real language use is not primarily transactional, but social, more about building relationships than achieving a goal”.

Oof. This is easy to forget sometimes.

But this is not the only reason. My head-in-the-sand attitude concerning online tools is also partly due to the fact that I don’t understand how they function. However, for an increasingly online writing instructor, this no longer seems good enough. And so I have turned to friend and colleague Lily Robert-Foley, assistant lecturer at Paul Valéry University in Montpellier and experimental translation specialist, to explain the history and concepts of machine translation and how she thinks this will change the face of translation and teaching in the future.

Thank you, Lily.

Lily Robert-Foley on machine translation:

“The vodka is good but the meat is lousy”

Or so allegedly went one of the first attempts at machine translation: a Cold War-era machine’s rendering of the English proverb “the spirit is willing but the flesh is weak” (Matthew 26:41) into Russian and back. We have come a long way since those early militaristic attempts at machine translation. However, this kind of overly literal “calque” (faulty word-for-word translation), in which machines translate words without context, pick the wrong homophones and perpetually fall prey to absurd collocational missteps, remains the dominant view of what machines do when they translate. But it’s time for everyone to adjust their expectations.

Have you ever tried that fun game where you take a bit of language, a sentence or a phrase, and pass it back and forth in Google Translate until you end up with some completely different, hilarious result? Well, try it again, and this time use DeepL, the leader in the field of machine translation (although Google Translate has also gotten a lot better of late, in case you hadn’t noticed). If you try translating “the spirit is willing, but the flesh is weak” into Russian using DeepL you get: “дух хочет, но плоть слаба.” Ok, yeah, I should probably translate that back into English, since, if you’re like me, you don’t read Russian. The result? Yup, you guessed it: “the spirit is willing, but the flesh is weak.”

But that’s too easy. Let’s see what happens when we pass the biblical quip through all of DeepL’s languages: 

Chinese: 心有余而力不足

French: La volonté est là, mais pas la force (idiome, tiré de Confucius Analects).

Dutch: De wil is er, maar niet de kracht

German: Der Wille ist da, aber nicht die Kraft

Italian: La volontà c’è, ma non la forza

Japanese: 意志はあるが、力はない。

Polish: Mam wolę, ale nie moc.

Portuguese (BR): Eu tenho a vontade, mas não o poder.

Spanish: Tengo la voluntad, pero no el poder.

Russian: У меня есть воля, но не сила.

If you translate the last one, the Russian, back into English, it gives you: “I have the will, but not the power.” Depending on your attachment to literal over interpretative or adaptation-based translation strategies, this could be read as a lovely intralingual or homolingual translation (a ‘translation’ within one language) of “the spirit is willing, but the flesh is weak.”

However, it’s not exactly the same as the original, is it? Let’s look a little more deeply into what happened. Since I don’t speak 8 of the 11 languages that DeepL speaks, I did not try to make use of its handy drop-down menus. As any good translator knows, there is never one possible translation—and not even one possible good translation—for anything. So DeepL gives you the mind of a translator in the form of drop-down menus where you can pick from a range of options for how to translate a word, ranging from the most frequently used given the surrounding context to the least. What’s more, DeepL will modify the rest of the sentence to suit, depending on which option you choose from the menu.

So, if we take a closer look at the above series of translations and analyze how we got from “the spirit is willing, but the flesh is weak” to the somewhat different “I have the will but not the power”, it’s not actually DeepL’s doing; it’s mine, resulting from the fact that I do not read Chinese. DeepL recognized, in the proverbial form and lexicon of the phrase “the spirit is willing but the flesh is weak”, the equivalent expression in Chinese, and translated using what Vinay and Darbelnet (two old white dudes who wrote translation theory in France in the 1950s) would identify as the translational procedure of “equivalence”: translating an expression or an idiom by its equivalent in a foreign language, rather than word for word. I found this out when DeepL very helpfully filled me in as I asked it to translate the Chinese into French, which I do speak, and it gave me “La volonté est là, mais pas la force (idiome, tiré de Confucius Analects)”, or in English: “The will is there, but not the strength (idiom taken from the Analects of Confucius)”. We see here, thanks to DeepL’s commentary citing its translation, that the loss of the “flesh” of the original, and the “modulation” (another of Vinay and Darbelnet’s strategies) from an affirmative clause, “the flesh is weak”, to a negative one, “but not the power”, is DeepL’s very good translation of the Chinese idiom, which was indeed a correct translation of “the spirit is willing but the flesh is weak”, with no trace of vodka or meat anywhere, unfortunately for those wishing to dine out in Russian tea rooms on anecdotes of machine translation goof-ups.

In other words, the translational “error” we see in the above rotating translation exercise is human, not machine. And this is because machine translation functions much as a human language learner does, except that—at least in the case of 8 of the 11 languages it translates—it has been at it a lot longer than I have.
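
For readers who would like to replay the rotating-translation game programmatically, here is a minimal sketch. It assumes the third-party Python package deep-translator, which wraps Google Translate’s free endpoint rather than DeepL, so the chain it produces will not match the DeepL results above; everything in the snippet is an illustration, not anything DeepL itself provides.

```python
# A minimal sketch of the "telephone" translation game described above.
# Assumes the third-party deep-translator package (pip install deep-translator),
# which uses Google Translate's free endpoint -- not DeepL -- so the output
# will differ from the DeepL chain shown in this post.
from deep_translator import GoogleTranslator

phrase = "the spirit is willing, but the flesh is weak"
# Roughly the same tour of languages as above, ending back in English.
chain = ["zh-CN", "fr", "nl", "de", "it", "ja", "pl", "pt", "es", "ru", "en"]

text, source = phrase, "en"
for target in chain:
    text = GoogleTranslator(source=source, target=target).translate(text)
    print(f"{source} -> {target}: {text}")
    source = target
```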

DeepL uses stochastic algorithms[1] to translate the texts that are plugged into it: statistical equations drawing on massive databases that are able to evaluate at light speed the frequency of a given collocation, the likelihood that a group of words will find themselves next to each other, and in what order.
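
To make “the frequency of a given collocation” concrete, here is a toy sketch of the statistical idea: counting how often adjacent word pairs occur in a (comically small) corpus, then scoring candidate word orders by familiarity. The corpus and scoring function are invented for illustration; real systems work from billions of sentences and far richer models.

```python
# A toy sketch of statistical collocation: count bigram (adjacent word pair)
# frequencies in a tiny corpus, then score candidate sentences by how
# familiar their word order looks. Purely illustrative.
from collections import Counter
from itertools import pairwise  # Python 3.10+

corpus = [
    "the spirit is willing",
    "the flesh is weak",
    "the will is there",
    "the vodka is good",
]

bigrams = Counter()
for sentence in corpus:
    bigrams.update(pairwise(sentence.split()))

def collocation_score(candidate):
    # Higher score = word order seen more often in the corpus.
    return sum(bigrams[pair] for pair in pairwise(candidate.split()))

print(collocation_score("the spirit is weak"))   # familiar order: higher score
print(collocation_score("weak is spirit the"))   # scrambled order: scores 0
```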

The use of statistical algorithms for machine translation appeared in the early 1990s (for a more detailed history, see Christine Mitchell’s ‘Whether Something Works’), and was a radical game changer for the field and for what machine translation was capable of. Earlier attempts at machine translation were rule-based, and depended upon the idea of a deep grammatical structure common to all languages, a kind of Ur-code that would transcend language (not to mention cultural differences). This idea is embodied in the famous quote from early machine translation researcher Warren Weaver in 1955: “When I look at an article in Russian, I say: ‘This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.’” (quoted in Rita Raley’s ‘Machine Translation and Global English’).

This is problematic on many levels, and we will not delve into all of them here (for a terrific breakdown, see Avery Slater’s ‘Crypto-monolingualism’)—not least the hegemonic linguistic domination apparent in the place of English. Indeed, code itself is written in English, requiring any programmer to have at least a basic functioning level in the language. It also means that hundreds of years from now (if humans are still around), traces of an archaic English will remain embedded in computer code all across the virtual sphere.

But Warren Weaver’s idea that a base code (in English) existed deep in the primordial stew of language, if only we could frack it out, was not just problematic; it was wrong. Statistical models of machine language learning and machine translation turned out to be far more effective because, in fact, there is no Ur-code, no deep primordial expression that reduces the problem of human languages, and of human difference, to a simple magic formula that the military can use to avoid having to pay translators. Language is about experience, about getting immersed in the flow of a linguistic community, about hearing and reading others, repeating, responding to them, and sharing, creating expressions out of that flow. Furthermore, this raises the question of whether a rule-based model of language actually works for human learners either, which may come as interesting news to language teachers tired of trying different approaches to teaching grammar in the classroom and seeing little to no results.

Nowadays, debates about how machines learn language no longer concern themselves with rule-based codes at all, but rather focus on the distinction between purely statistical models and those which deploy neural networks, which plot each word as a vector in a vast imaginary space and align different words according to their proximity in that space. For example, the words “cat” and “dog”, because of how they are used, would be plotted very close to one another, and each could therefore benefit from the results gleaned from algorithms treating the data of the other’s occurrences, increasing accuracy and efficiency, shortening processing times and taking up less memory.
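
A hedged toy example of that vector-space idea: if we invent three tiny vectors for “cat”, “dog” and “car” (the numbers here are made up; real embeddings have hundreds of dimensions and are learned from huge corpora, not written by hand), cosine similarity shows “cat” sitting much closer to “dog” than to “car”.

```python
# A toy sketch of word vectors: words as points in a shared space, with
# "cat" and "dog" plotted close together because of how they are used.
# The numbers are invented purely for illustration.
import math

vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    # 1.0 = pointing the same way (very similar use); 0.0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # ~0.996, very close
print(cosine_similarity(vectors["cat"], vectors["car"]))  # ~0.29, far apart
```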

DeepL uses neural networks, but also, as is apparent from its name, “deep learning”, which is the ability of different algorithms treating different sets of data to combine and cross-index against one another to improve the accuracy and efficiency of their results. We are rapidly approaching (or have already passed) the limits of my technical savvy here, but, for example, in facial recognition software an algorithm that reads for skin color might be made to dialogue with an algorithm that reads for the curve of a nose or a chin, and these algorithms, acting together across more than one layer of code (hence “deep” learning), will be better able to recognize a face than a single algorithm learning on its own.
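
Again as a hedged sketch rather than anything resembling DeepL’s actual architecture: a “deep” network is just layers of simple detectors feeding into one another, something like this toy forward pass (the weights are made up; in practice they are learned from data, and there are many more layers).

```python
# A toy forward pass through two layers, to make "deep" (multi-layer)
# learning concrete. Weights are invented for illustration; real systems
# learn them from data. Requires numpy.
import numpy as np

def relu(x):
    # Simple non-linearity: negative detections are switched off.
    return np.maximum(0, x)

# Layer 1: two low-level "detectors" reading four raw measurements.
W1 = np.array([[0.5, -0.2, 0.1, 0.7],
               [0.3, 0.8, -0.5, 0.2]])
# Layer 2: combines the two low-level detections into one final score.
W2 = np.array([0.6, 0.9])

raw = np.array([0.2, 0.7, 0.1, 0.9])  # e.g. toy measurements of an image

hidden = relu(W1 @ raw)   # first layer of detections
score = W2 @ hidden       # second layer combines them
print(score)
```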

The arrival of what has been called “algorithmic culture” can be terrifying for many reasons. Not only can your face (or the face of a political activist, or other dissident, or simply of someone who is different in a way that dominant culture or the government doesn’t understand) be picked out from a crowd, but machines are now controlling our world in ways we don’t even understand. For a full account of this, watch Kevin Slavin’s TED talk on how algorithms shape our world, where he walks math dummies like myself through several instances in which algorithms caused incredible real-world happenings that humans still don’t understand, and muses, mind-blowingly, on possible future impacts on culture.

Likewise, machine translation is a terrifying prospect for many, not least for translators themselves. Who is going to hire a translator when they can go online, plug a text into DeepL for free and come out with a perfect, or near-perfect, translation? Indeed, DeepL translations can sometimes be even better than human ones. To give a purely anecdotal example based on personal experience: when a Hans-Robert Jauss quote on reception theory was plugged into DeepL, the translation that appeared included an additional sentence defining reception theory, picked up from its databases… and it made the citation much clearer! Of course, one might argue that humans are still needed for post-editing, reviewing and rereading, which is true, and will be for many years; however, reader expectations are also evolving at the same time as machine translation is improving. Readers are becoming more and more accustomed to wading through a bit of linguistic gunk when they click “translate this page” at the top of the search engine, just to get the “gist” of what is written (what Michael Cronin has called “gist translation”).

But, many readers will object, what about literature, that most human of language pursuits? What about poetry? What about rhyme and meter and plays on words? What about beauty? What about humor? My response would be that to say a machine could never translate poetry is a rather short-sighted view, not only of the machine, but of poetry itself. Indeed, computers have been writing poetry since 1959 (with Theo Lutz’s Stochastische Texte), and if you have any doubts as to the incredible range of machine- or cyborg-generated poetry, take an afternoon and have a look at the Electronic Literature Directory. And as to whether machines can translate humor, I am not sure anyone can, but they certainly can translate with humor, as is evidenced by the very first example of machine translation mentioned in this post.

So, is that it for humans then? Are we doomed to become redundant, obsolete? Not so fast. People often confuse change with destruction, with finality, which probably does more to create fear than to answer questions. Indeed, similar alarms were raised about art with the advent of photography, and guess what? Art is still here, and humans are still making it, with and without the use of cameras and computers. After all, humans and machines are not the same, so how could we replace each other? Attempts to model computers on human brains were given up many years ago (as you can read in the introduction to the book on Deep Learning), as the human brain is far too complex. And now, if we follow Kevin Slavin’s argument, so are machines.

But photography did change art, radically and forever. And the word “art” and its meaning, like the word “translation” and its meaning, is not fossilized in time like a fly in amber. It grows and changes, immersed in the flow of evolving linguistic and technological communities. Indeed, the word “translation” only appeared in the 14th century in English, and before the advent of the printing press, texts were passed back and forth between languages fluidly, copied by hand and amended, changed, rewritten, interrupted, fragmented, rewoven, destroyed and recreated. The supposedly “mindless” approach to fidelity in translation that the machine attempts, and perhaps succeeds at imitating, reminds us that fidelity as a translational imperative is a relatively recent phenomenon, and a European one at that. In fact, as Shalamee Palekar and others remind English-speaking readers, ‘anuvad’ (speaking after, or following) and ‘rupantar’ (change in form), the Hindi and Marathi terms for translation, do not imply a need for fidelity to the original. Indeed, an over-exaggerated veneration of the original has been aligned with ideologies of colonization and private property (see Rachel Galvin’s ‘Poetry Is Theft’ for one excellent account of this). And as a translator, I personally take exception to the commonly held belief that an original is always preferable to a translation. I much prefer Anthony Pym’s ‘to translate is to improve’.

Perhaps then, as often happens when humans and machines collaborate in a historical and technological transition, the sense of what it means to translate is changing along with the roles that both humans and machines play in it. Machines need humans to invent them and to make use of them, to play with them. And humans need machines not just to make things easier for themselves but to grow and change, experiment and create along with them. I personally believe that machine translation taking over the “chop wood, carry water” aspect of translation can be seen as a liberation, like any ending perhaps. It calls for us, non-machine beings, to be freer in our approaches to text, to be more fluid and flexible, and more creative.

Lastly, it calls for renewed approaches to translation pedagogy in language learning, writing and translation classrooms. While many teachers of language, writing and translation do combine translation and creation, as well as context and cultural concerns, in the classroom, many—specifically where I work in France—are stuck in the idea of translation solely as an abstract grammar-teaching exercise. While I do not want to negate the benefits this can have as a way to reassure students and to clarify, through comparison, how languages function, it remains lamentably far from making use of the full potential of translation as a pedagogical tool, and it fails to respond to students’ need to understand the landscape of translation as it exists today, in practice and in theory.

Lily Robert-Foley

With flowers from fig_tart


[1] Alison’s note: stochastic = use of randomness. For more detail, check this out: https://machinelearningmastery.com/stochastic-in-machine-learning/
