Impressive: “Conference” Interpreting in Action at the European Parliament

Strasbourg

The technical orchestration of twenty-three languages performed by Members of the European Parliament and the cadre of simultaneous interpreters assigned to generate spontaneous comprehension is nearly seamless. I sat in on much of the Foreign Affairs Committee (AFET) meetings last week, observing communication dynamics concerning the use of languages and thus the use of the interpretation system. Twenty-one of the 23 official EU languages (all except Irish and Maltese) were interpreted, plus two non-EU languages: the President of the Republic of Tajikistan presumably spoke Tajik (he brought his own interpreter), and at one point Mr. Konstantin Kosachev from the Russian Duma spoke in Russian (he came with a team of interpreters).
From my seat in the visitors’ section, I watched, listened, collected ethnographic data, and thought about what was being generated: both in terms of the microsocial interactions among Members and as a product for mass-media consumption (the meeting was web-streamed, although the link for it is down, so maybe this feature is not currently functional? Darn.) For instance, I noted which languages were used, and (roughly) for how long. Here’s the run-down, from least to most used during the 510 minutes I was present. All measurements and calculations are approximate.

    0 minutes each: Bulgarian, Czech, Finnish, Italian, Latvian, Portuguese, and Slovak
    2 minutes each: Danish, Estonian, Romanian, Swedish (0.4% each)
    3.5 minutes each: Hungarian, Lithuanian, Slovenian (0.7% each)
    10 minutes each: Greek, Spanish (2% each)
    16 minutes: Russian (3.1%)
    17 minutes each: German, Polish (3.3% each)
    19 minutes: French (3.7%)
    20 minutes: Tajik (3.9%)
    30 minutes: Dutch (5.9%)
    356 minutes: English (70%)

Of the total time, 7% was given to non-EU languages (Russian & Tajik), while 23% was given to official EU languages other than English. If the non-EU languages are removed, then the ratio of English to all other official EU languages in this meeting was 3:1 (75%-25%).
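For anyone who wants to check the arithmetic, here is a minimal sketch in Python using my approximate tallies (remember these are rough stopwatch figures, so the rounding wobbles by a point here and there):

```python
# My approximate per-language tallies, in minutes, from a 510-minute session.
minutes = {
    "English": 356, "Dutch": 30, "Tajik": 20, "French": 19,
    "German": 17, "Polish": 17, "Russian": 16, "Greek": 10, "Spanish": 10,
    "Hungarian": 3.5, "Lithuanian": 3.5, "Slovenian": 3.5,
    "Danish": 2, "Estonian": 2, "Romanian": 2, "Swedish": 2,
}
TOTAL = 510                      # minutes I was present
NON_EU = {"Tajik", "Russian"}

non_eu = sum(m for lang, m in minutes.items() if lang in NON_EU)
eu_other = sum(m for lang, m in minutes.items()
               if lang not in NON_EU and lang != "English")
english = minutes["English"]

print(f"English:  {english / TOTAL:.0%}")   # ~70%
print(f"Non-EU:   {non_eu / TOTAL:.0%}")    # ~7%
print(f"Other EU: {eu_other / TOTAL:.0%}")  # ~24%, i.e. roughly the 23% above

# With the non-EU languages removed, English vs. all other EU languages:
eu_total = english + eu_other
print(f"English vs. other EU: {english / eu_total:.0%} "
      f"to {eu_other / eu_total:.0%}")      # ~75% to ~25%, about 3:1
```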
All of the interpreters were in their booths ready to go at the scheduled start times, and none of them bolted at the end – even when the meeting ran past its announced end time. When there were unplanned recesses or the meeting ended early, there was likewise no rush to get out of the booth. This gave a sense of the interpreters liking their work, their colleagues, and even the atmosphere. I saw a few instances of peer support during retour, which reminded me of working as a team in signed language interpretation, where visual confirmation of accuracy is vital. Since concerns have been expressed to me about the process of retour, I was glad to witness it in action on a few occasions. (Retour refers to when a Member speaks a language that is not widely known, requiring an interpreter who generally works only into that language to interpret from it into one of the more commonly known languages – such as English, French, German, or (sometimes, so I’ve heard) Polish – so that the rest of the interpreters can re-interpret into their own respective languages.) Retour is definitely complicated, but on the occasions of its use in the AFET meetings I did not notice any disruption in the communication dynamics of the group as a whole.
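To make the relay path concrete, here is a toy model in Python of the chain as I understand it. Everything in it (the booth coverage, the choice of English as the pivot) is my own illustrative assumption, not the Parliament’s actual routing:

```python
# Toy model of retour/relay as I understand it (my reconstruction, not an
# official description). Each booth interprets INTO its own language; when
# a booth's team cannot work directly FROM the floor language, the floor
# language's own booth works retour into a pivot, and everyone else relays
# from the pivot.

def relay_chain(floor: str, target: str,
                coverage: dict[str, set[str]],
                pivot: str = "English") -> list[str]:
    """Return the path a message travels from the floor language to a
    listener's target language. `coverage` maps each booth (target language)
    to the set of floor languages its interpreters can work from directly."""
    if floor == target:
        return [floor]                      # no interpretation needed
    if floor in coverage.get(target, set()):
        return [floor, target]              # direct interpretation
    return [floor, pivot, target]           # retour into pivot, then relay

# Hypothetical coverage: the German booth works from English, French, and
# Dutch, but cannot work directly from Lithuanian.
coverage = {"German": {"English", "French", "Dutch"}}
print(relay_chain("Lithuanian", "German", coverage))
# -> ['Lithuanian', 'English', 'German']: the Lithuanian booth retours into
#    English, and the German booth re-interprets from there.
```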
Most of the labor of ensuring that the right languages get to the right users of those languages seems to be hard-wired into the technical system. Everyone has access to their own set of headphones. Whether or not they use them, and when, is a matter of some consequence, but I will say that the dang things hurt. No doubt one builds up earlobe endurance over time, and the skull adjusts to resisting their pressure, but I was happy to have chances not to wear them! Maybe my sensitivities are more tender than others’? There are 27 channels: when a Member begins in a language you don’t know, you simply tune your headphones to the channel of the language you prefer to listen to. Members have their own preferences and reasons for selecting this or that language of interpretation, depending on their language repertoire, levels of fluency, and experience of which interpretation booths tend to produce performances that meet their criteria (all of which is for another post, one of these days).
If I were mainly concerned with the content of what was being said, for instance, I would simply leave my headphones tuned to channel 2, which is the English booth. But I whizzed around quite a bit, seeking to accurately identify which language was being spoken on the floor (there are few that I recognize by sound, and even those I have a feeling for can be deceptive: sometimes, for instance, Dutch sounds like German and vice versa). The only thing that was managed live by the hardware technicians (as far as I can guess) was syncing the interpretations of Tajik and Russian from booth/channel 23 to channel 2. (I am sure they also work with recording and webstreaming, but those activities do not affect real-time interactions on the floor among Members.)
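In software terms, what each listener does is a simple channel lookup, and the technicians’ one live intervention amounts to patching a booth’s output onto another channel. A hypothetical sketch (only channels 2 and 23 come from my notes; the other channel numbers are invented for illustration):

```python
# Hypothetical sketch of the listener's side of the system. Channels 2
# (English) and 23 (the Tajik/Russian booth) are from my notes; the rest
# of the numbering is invented.
channel_feed = {1: "floor (untranslated)", 2: "English booth",
                5: "German booth", 12: "Dutch booth"}

def patch(source: str, onto_channel: int) -> None:
    """My guess at what the technicians did: route booth 23's English-language
    output onto channel 2 while Tajik or Russian was spoken on the floor."""
    channel_feed[onto_channel] = source

patch("booth 23 (Tajik/Russian -> English)", onto_channel=2)
print(channel_feed[2])   # a listener tuned to channel 2 now hears the retour
```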
I did notice some behaviors characteristic of ignoring the interpreters, such as a couple of guys standing directly in front of the English booth as if there were no need for the interpreters inside to be able to see the action on the floor. I suppose it is easy to assume that since the information is mostly auditory (piped through headphones), there is no need for supplementary perceptual information. Likewise, some interpreters-in-training got a bit carried away in one booth, pointing and laughing at something or someone on the floor. If you didn’t happen to look up you’d never have been aware of it, but if you did, it was a bit distracting. The glass is darkened but not opaque: you can tell which particular interpreter is working by the little red light on their active microphone.
My thought, as I considered the phenomenological experience produced by this system – especially in terms of broadcasting to a wider audience not physically present in the room – is that it has some parallels with what the literature on online (computer-mediated) communication describes as cues filtered out. The summary I linked to mentions five ways researchers have oriented to the altered ensemble of communication cues in a technologically mediated environment compared with face-to-face interaction in a shared physical space. I think it is not too much of a stretch to draw some inferences from that literature to the audiological world being created by all the separate language channels.
The phenomenological reality created and conveyed by this sophisticated merger of human simultaneous interpretation and electronic machinery is the sensation that only one language is present, nearly erasing the reality of the many different languages being spoken in the room. What goes out to the world, one could say, is stripped of multilingual character and given a monolingual essence. Even in the meeting room, if one simply keeps those earphones firmly in place, the simulation of a monolingual conversation can be maintained. A person might remain aware that more languages are being spoken, but (it seems) one tends to pay attention only to the language(s) one knows – or wants to know.
If the overall system worked less well, the differences among the languages would demand recognition. This, I am pretty sure, would ultimately be more democratic, more transparent, more equitable. In fact, there may be a way to use simultaneous interpretation proactively to help create the very relationship between the European Parliament and the citizens of Europe that EU proponents desire.

One thought on “Impressive: “Conference” Interpreting in Action at the European Parliament”

  1. No surprise – the article on CMC linked above is probably the same one we read in Leda’s graduate course on the Social Impact of Information Technology (where I found the inspiration for this blog).
    Walther, J. B., & Parks, M. R. (2002). Cues filtered out, cues filtered in: Computer-mediated communication and relationships. In M. L. Knapp & J. A. Daly (Eds.), Handbook of interpersonal communication (3rd ed., pp. 529–563). Thousand Oaks, CA: Sage.
