Monthly Archives: October 2011

Spot the differences

October 27th, 2011 Posted by blog, CAT systems, fuzzy match, translation memories No Comment yet

How similar are two sentences? This deceptively simple question matters to freelance translators. Most translation agencies require both employees and freelancers to use computer-assisted translation systems with translation memories. These systems classify the sentences of the text to be translated, one by one, as new, if they do not resemble any of those stored in the reference translation memory, or as fuzzy matches, if they are reasonably similar to one of the translations it does contain.

Words in fuzzy-match sentences are usually paid at half the rate of words in completely new sentences. Consequently, the algorithm that decides whether two sentences are similar enough or essentially different has a direct effect on the amount the translator finally earns. For example, after adding the first segment of the table below to the translation memory, the next segment yields a fairly high match percentage and will therefore be paid at half the rate:

Segment | Match %
Are you sure you want to delete the file? | 0%
Are you sure you want to overwrite the file? | 90%

Curiously, translators tend to trust the computer blindly. They are usually unaware that these differences in calculation have a direct effect on their pay. Nor do they seem to realize that programs such as SDL Trados, Geoworkz or Idiom handle this question in very different ways. Applying the same rate across all of them therefore fails to reflect the work and time each translation actually requires.

Segment | Trados | Wordfast
Are you sure you want to delete the file? | 0% | 0%
Are you sure you want to overwrite the file? | 90% | 79%

Programming an algorithm that determines in which letters or words two sentences differ is easy, but is that analysis enough to justify reducing the rate for the time saved by having a somehow similar sentence already translated? A few examples suffice to see that many factors are involved, such as word order or the presence of tags.
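As a minimal sketch of such an algorithm, the pair of example segments above can be scored with Python's standard difflib, both at character level and at word level. The percentages it produces are purely illustrative of the approach; they do not reproduce the scoring of any specific CAT tool, whose formulas are proprietary:

```python
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> int:
    """Character-level similarity percentage, based on the
    longest matching blocks of characters."""
    return round(SequenceMatcher(None, a, b).ratio() * 100)

def word_similarity(a: str, b: str) -> int:
    """Word-level similarity percentage: the same metric,
    but computed over word tokens instead of characters."""
    return round(SequenceMatcher(None, a.split(), b.split()).ratio() * 100)

new = "Are you sure you want to overwrite the file?"
old = "Are you sure you want to delete the file?"
# Each metric yields its own percentage for the same pair of segments.
print(char_similarity(new, old), word_similarity(new, old))
```

That two reasonable metrics can disagree on the same pair of segments is one simple reason why different tools report different match percentages, as in the Trados/Wordfast table above.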

As often happens when trying to create other algorithms, the very concept we are trying to emulate is not sharply defined: the help that having a similar sentence already translated provides when translating a new one. A sentence that is very useful to some translators is not as helpful to others. In short, it is hard to give the algorithm an objectivity that is difficult to find in reality. As we delve deeper into this problem, many other factors appear that perhaps need to be considered. For example, we should take into account differences in word order and capitalization, as well as variations in text tags, and so on. Should we not also consider, to give one more example, whether the differing words are content words or not?

Although a reasonably rigorous definition can almost always be found for each factor, in the end the disparate factors must be combined and reduced to a single percentage meant to capture the overall similarity of two strings. In short, weighting the factors is unavoidable, and this is where objectivity becomes debatable. For example, is it really a good idea to set aside other sentences in the memory that are not so similar structurally but contain the terminology needed to translate the sentence at hand quickly?
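The weighting problem can be sketched in a few lines. Suppose a tool scored three factors separately and then collapsed them into one percentage; the factors and the weights below are entirely hypothetical, chosen only to show that the final figure depends on a weighting decision somebody had to make:

```python
def overall_match(char_pct: float, word_order_pct: float, tag_pct: float,
                  weights=(0.6, 0.3, 0.1)) -> float:
    """Collapse several similarity factors into a single percentage
    via a weighted average. Factors and weights are hypothetical;
    real CAT tools do not disclose their weighting schemes."""
    scores = (char_pct, word_order_pct, tag_pct)
    return sum(w * s for w, s in zip(weights, scores))

# Same factor scores, one possible overall match percentage:
print(overall_match(90, 100, 50))
```

Change the weights and the same pair of segments crosses, or fails to cross, the payment threshold. The choice of weights is exactly the point where the claimed objectivity breaks down.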

All these questions probe ever deeper into the intimate nature of translation and the cognitive processes translators go through when translating a text. The ease of translating a sentence is assumed to correlate with the number and difficulty of the cognitive operations it requires.

Translation memories are understood in the same way: the lower rate is tied to the greater ease derived from having something similar in the (translation) memory. But human and computer memories might not work in the same way. We should therefore look deeper into their differences to better understand their similarities and their role in the translation process.

So this is a hot research topic, and not only because it deepens our knowledge of how humans think. Clarifying more precisely how similar two sentences are could also have an immediate practical effect on translators' pockets, by informing fair payment for their work.

One approach to the problem starts from this assumption: for a computer to assess the relationship between two sentences with total precision amounts to being able to translate them correctly. If that were so, it could precisely determine the differences between the operations needed to translate a text segment from scratch (if such a thing exists) and those needed when building on something stored in the memory. Of course, in that case, the answer to this question would be far less relevant.

In the meantime, and as a starting point, a standard should be developed to harmonize the criteria, together with a mechanism to guarantee that they are applied uniformly. Experiments to validate the formulas, such as measuring the time a group of translators takes to translate a text with more or less populated translation memories, could be really valuable. Literally, in this case.

 

by J. Perea

You can quote me on this one (2/4)

October 19th, 2011 Posted by author-date style, blog, citation, identification code, reference list, The sorcerer's apprentice No Comment yet

Would be so nice if you’d ibidem-ize my work

Now that you know how to quote the right way (not yet? Check the first post of this series), here you will learn the most important points about title identifiers such as ISBN, ISSN and DOI, which you should include, whenever possible, at the end of your bibliographical entry to give your reader several options to locate each reference. If you don’t know what those acronyms are all about yet, don’t worry: you will after reading this post. I will also introduce you to the author-date system, the referencing style used by academics all over the world. Let’s start with the identification codes, i.e. codes that identify sources such as books, serials and digital publications.

What do these acronyms stand for?

ISBN

The most common title identifier stands for International Standard Book Number; you will find it in every published book. Thanks to the ISBN, book handling becomes easier for everybody involved in book sales: publishing houses, bookshops, libraries and… for you as a reader, an author, and a scientific bibliographer.

eISBN

The e stands for electronic (you guessed it, yes) and it is used for the digital version of a book; if a book was launched only in digital format, the ISBN automatically is the eISBN: it’s the same number. If it was first published in print, you may have to look for a separate eISBN for the digital copy.
ISSN

The International Standard Serial Number is used for serial resources such as magazines, newspapers or yearbooks.

eISSN

It works like the eISBN but it is not for books but for periodicals. Yes, you guessed it again.
DOI

The Digital Object Identifier is a code for electronic objects. Now you may want to argue that an ISBN is a digital object identifier as well. And you are right, but while an ISBN identifies a monograph (remember?) in any medium, a DOI only identifies online publications, whatever their extent; that is, with a DOI you can identify parts of a whole, such as chapters, graphs, or tables. This object identification is necessary because a URL does not mark the object itself but its location on the web. A DOI is unchangeable and is not tied to a specific storage location; it works as a permanent URL, so it does not only serve the purpose of object identification but also takes you to the object’s location. What is really interesting about the DOI is precisely this combination of an address function with online identification for publications.
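Because a DOI names the object rather than its storage location, resolving it is just a matter of prefixing the public resolver at doi.org. A tiny illustration (the DOI below is invented for the example):

```python
def doi_url(doi: str) -> str:
    """Turn a DOI into its permanent resolver URL.
    The resolver at doi.org redirects to wherever the
    publisher currently hosts the object."""
    return f"https://doi.org/{doi}"

print(doi_url("10.1234/example.5678"))
# https://doi.org/10.1234/example.5678
```

If the publisher later moves the object, only the resolver's record changes; the DOI itself, and therefore every citation of it, stays valid.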

The author-date system

Now that you are familiar with the most common identification codes for publications, let us focus on the author-date system, also known as Harvard style. If you want to become part of the international scientific community, you simply need it, because it is commonly accepted as the citation style of choice. Harvard (as insiders also call it) covers both referencing within the text (including the author and year of publication) and organizing your bibliographical data, usually at the end of your work. As we dealt with the reference list in further detail in the first post of this series, we will concentrate here on the ways of referencing in the text. There are basically two types of referencing in your text:

  1. Citation

When you think that the author’s statement is really what you want to say, you can take the original and copy it word for word. This citation could look like this:

“Through increasingly accurate description and negotiation of observations from different sources of data, we can get closer, perhaps not to an ‘objective’ result, but to shared replicable experiences and results” (Hansen 2003: 40).

Or like this:

Hansen (2003:40) states that “through increasingly accurate description and negotiation of observations from different sources of data, we can get closer, perhaps not to an ‘objective’ result, but to shared replicable experiences and results.”

In most cases, however, you should apply the approach described in 2 below.

  2. Paraphrase or summary

If you want to express something another person said in your own words, you could summarize or paraphrase it, for example like this:

Hansen (2003:40) states that by triangulating different methods it could be possible to gain some intersubjective insights.

Of course, you have to offer the complete data for this reference in the bibliography, at the end of your text.

You can take PETRA’s style sheet as an example for a reference list style, which includes some advice on the handling of the identification codes introduced above as well.

eReferences

First post of this series (2011). You can quote me on this one – please do.

Monash University Library (2006). Harvard (author-date) style examples.

PETRA’s style sheet (2011)

Swedish University of Agricultural Sciences Libraries (2009). ISBN, ISSN, DOI and URN:NBN.

Williams College Libraries (2009). Chicago Manual of Style.

 

by P. Klimant

‘Translation’ without ‘trans-’, ‘inter-’, ‘cross-’, or ‘over-’

October 12th, 2011 Posted by blog, concepts, transfer, under consideration No Comment yet
One of the most interesting developments within Translation Studies in recent years is the emerging focus on the individual human translator. This development has already been described by, among others, Pym (2009) and Chesterman (2009), and is also touched on by Muñoz (2010). As Chesterman points out, the development is actually following three separate trajectories: one in translation sociology, one in translation history, and one in translation process research (2009:13). In other words, the renewed focus on the translator is a result of understandings being developed within quite separate research groups, with quite separate research agendas, theories, and methodologies. That parallelism in itself merits some attention, but is not my primary concern here.
In a paper submitted for a book edited by Ana Rojo and Iraide Ibarretxe-Antuñano, I presented some initial thoughts regarding the potential contributions of cognitive linguistics to TS. Among other things, one of my concerns was the effects this paradigm could (read: should) have on our conceptualization of our object of study. Without repeating that paper’s contents in full, I think it might be worthwhile to put some of those thoughts into this format, in the interest of discussion. The complete line of reasoning is presented as part of a much broader set of arguments in the full paper. As I was working on this short post, I also found that Celia Martín has presented some related ideas in her post of 18 May 2011, entitled ‘Transfer? What transfer?’ So here’s my two cents worth.
One of the arguments made in the paper mentioned above is that adopting the theoretical framework of cognitive linguistics leads us to adopt the perspective of the cognizing translator (though it does not limit us to this). From this perspective, i.e. that of the engaged and specifically situated mind, translation is a complex task in which active translators draw on and utilize various elements of their environments, as well as knowledge of various types, whether conscious or not. In addition, translators are subject to a number of constraints of different types. From this perspective, languages, cultures, situations, conventions, and affective states are all embodied in the personal mind (see also Chesterman 2000).
From this starting point of active, situated cognitive action it is hard to find a logical position for the traditional duality of languages, texts, and cultures that has permeated our field. Even though, at the end of the day, there are most often two real, physical texts, from this perspective there is no movement, no transfer, no crossing or carrying over. There are no sides. There is only one organic creative process and all of the factors that constitute it and impinge upon it.
As a complement to the theoretical view outlined above, there is evidence from bilingualism and psycholinguistic research that points towards a more unitary view. For instance, support for a more unified view of the creative act of translation is represented by the evidence for joint activation of languages in a bi- or multilingual speaker (see de Groot 2011:279-338 for an overview). Similarly, the view of the bi- or multilingual speaker as having a particular type of competence that is different from that of a monolingual speaker in any language suggests that linguistic relationships in bi- or multilingual speakers are much more intricate than previously thought (see de Groot 2011:339-403).
From translation process research, Martín (2011) cites evidence (Dragsted 2010) suggesting that text comprehension and text production are difficult to distinguish clearly in some translations, arguing that theorizing a distinct ‘transfer’ phase is inappropriate. Ruiz et al. (2008) also identify translation processes in which these two activities coincide temporally. In other words, the isolation of a distinct source-to-target phase, at least for some translators, is being questioned. Also within translation process research, current models of translation competence (for instance, the TransComp model (Göpferich et al. 2011) and PACTE’s model (e.g. PACTE 2011)) both build on the starting point of one cognizing individual and his or her integrated use of various kinds of skills and knowledge.
All of the above constitute areas in which research is ongoing, and there is not always a consensus on how to interpret results. However, for translation scholars, the theoretical discussions and empirical evidence are sufficient to motivate the question posed here. If theoretical models suggest a more organic activity, and if it is becoming empirically difficult to isolate the two languages in the translator and to separate stages of a translation process linked uniquely to the two languages involved, then perhaps it is time to face up to the logical consequences of these insights. Perhaps it is time to jettison conceptualizations built on duality and start from a notion of singularity.
In sum, taking a cognitive approach to translation provides theoretical and empirical reasons to adopt a critical stance towards traditional ‘duality’-based conceptualizations of translation. Our routine ways of thinking about this phenomenon (at least in traditional Western cultures) build on images of boundary crossing, and these images are limiting and in many ways downright wrong.
References
  • Chesterman, Andrew. 2000. A causal model for translation studies. @ Maeve Olohan, ed. Intercultural Faultlines. Research models in translation studies. Textual and cognitive aspects. Manchester: St. Jerome, pp. 15–27.
  • Chesterman, Andrew. 2009. The name and nature of translator studies. @ Hermes 42: 13–22.
  • Dragsted, Barbara. 2010. Coordination of reading and writing processes in translation: an eye on uncharted territory. @ Gregory Shreve & Erik Angelone, eds. Translation and cognition. Amsterdam: John Benjamins, pp. 41–62.
  • De Groot, Annette. 2011. Language and cognition in bilinguals and multilinguals. An introduction. New York: Psychology Press.
  • Göpferich, Susanne, Gerrit Bayer-Hohenwarter, Friederike Prassl & Johanna Stadlober. 2011. Exploring translation competence acquisition: criteria of analysis put to the test. @ Sharon O’Brien, ed. Cognitive explorations of translation. London: Continuum, pp. 57–85.
  • Halverson, Sandra (forthcoming). Implications of Cognitive Linguistics for Translation Studies. @ Ana Rojo & Iraide Ibarretxe-Antuñano, eds. Cognitive linguistics and translation: Advances in some theoretical models and applications. Berlin: Mouton de Gruyter.
  • Martín de León, Celia. 2011. Transfer? What transfer? @ Petra’s Back Office Blog. Posted 18 May 2011. Accessed 29 September 2011.
  • Muñoz Martín, Ricardo. 2010. Leave no stone unturned. On the development of cognitive translatology. @ Translation and Interpreting Studies 5/2: 145–162.
  • PACTE Group. 2011. Results of the validation of the PACTE translation competence model: translation project and dynamic translation index. @ Sharon O’Brien, ed. Cognitive explorations of translation. London: Continuum, pp. 30–56.
  • Pym, Anthony. 2009. Humanizing translation history. @ Hermes 42: 23–48.
  • Ruiz, C., N. Paredes, P. Macizo & M. T. Bajo. 2008. Activation of lexical and syntactic target language properties in translation. @ Acta Psychologica 128: 490–500.

 

by Sandra Halverson.

Excellence in Interpreting

October 7th, 2011 Posted by bibliographical data, blog, expertise, interpreting, PhD project, The expert's perspective No Comment yet

Fresh out of interpreting school, I was lucky to start working with very experienced colleagues. Although I considered myself lucky to work with them, it was also a daunting experience in many respects. Naturally, I was filled with respect for these linguistic wizards and all their experience and flair. Most of them were also amazingly skilled interpreters. I spent months, maybe even years, listening, taking notes and trying to copy them. I think I was also intimidated by the old interpreting teachers’ saying that “it takes at least five years to become a full-fledged interpreter”.

Years later, I was struggling to start a career in research. As I was outlining my project, I came back to my early interpreting experiences and felt that I wanted to explore what made those interpreters so skillful. At this point my supervisor pointed me in the direction of the expertise approach. The expertise approach came from psychology and became popular in interpreting studies in the late 1990s and early 2000s, when Barbara Moser-Mercer invited the expertise guru Karl Anders Ericsson to the Ascona workshops. Unfortunately, this was before my time, and my understanding of expertise comes from the reading I have done on the subject.

The expertise approach, in short, says that very skilled performers (experts) share the same approach to their skill regardless of field. So Wimbledon winners, chess Grand Masters, top surgeons or even taxi-driver champions develop their skill in the same way. Psychology researchers have studied high performers in different fields and drawn several conclusions as to what is unique in their personality, their approach or their learning tricks. So what do they have in common?

Well, first of all they have been practicing and performing a lot, at least 10 years or 10 000 hours. But experience is a weak predictor, since you can play tennis for twenty-five years without winning Wimbledon or you can be a passionate chess player without being anywhere close to Grand Mastery. So there is something more to this than just experience.
Top performers also seem to have access to expert knowledge in times of need. Researchers saw that on routine tasks top performers would deliver a fairly average performance and might even be outscored by less experienced subjects. On more difficult tasks, however, they would excel.
Research into expertise also showed that practice mattered, and that top performers practiced differently from others. Their practice was deliberate. They were not just practicing; they were present in their practice. They practiced with clear goals and were open to feedback from their colleagues.
And lastly, just as one swallow does not a summer make, one single outstanding performance does not an expert make. Experts are regular top performers: they keep their performance at a consistently high level.
The expertise approach seems to lend itself easily to investigating excellence in interpreting. There are some adaptation difficulties, though. Interpreting is ephemeral in nature, both because the spoken word fades quickly and because high quality or excellence is not easily graded. There is no ranking of interpreters; high quality in interpreting is context dependent, and what is highly appreciated in one context may be utterly, totally wrong in another. Ericsson and his colleague Jacqui Smith have a three-step general method for investigating expert performance. The steps consist of:

  1. A detailed analysis of the investigated domain and the skills necessary for experts in that domain, including a systematic mapping of the cognitive processes behind the specific skill;

  2. A detailed analysis of the performance within the frame of general cognitive theory, identifying the systematic processes and their link to the structure of the task and the behaviour of the performers; and

  3. An account of the superior performance in terms of the cognitive processes used, how they were acquired, and the structure of the relevant domain knowledge.
Now, this blog post is approaching its end, and the discussion on how to operationalize these different steps will have to wait until next time. But take step one and think about it for a moment: “systematic mapping of cognitive processes for the specific skill”. I know of at least three mental models of interpreting, and the necessary cognitive processes are regularly at the centre of discussions in Interpreting Studies: what are they, and how do we study them? So clearly, using the expertise approach will require some more thinking.

Further reading

Ericsson, K.A. & J. Smith, eds. 1991. Toward a General Theory of Expertise: Prospects and Limits. Cambridge: Cambridge University Press.

Ericsson, K.A., N. Charness, P.J. Feltovich & R.R. Hoffman, eds. 2006. The Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press.

Ericsson, K. Anders 2010. “Expertise in interpreting”. @ Gregory M. Shreve & Erik Angelone, eds. Translation and Cognition. Amsterdam: John Benjamins, pp. 231–262.

 

by Elisabet Tiselius

TPRW2 Giessen 2011

October 2nd, 2011 Posted by blog, cognitive translatology, empirical research, TransWorld Airy Lines No Comment yet

On July 27–29, 2011, the 2nd international research workshop on the Methodology of Translation Process Research was held at the University of Giessen’s Rauischholzhausen Castle in Germany, organized by Susanne Göpferich. The program was very interesting and, indeed, the presentations lived up to their abstracts. Below you may read some highlights from most presentations. Full-screen viewing or PPS-file download is recommended (embedded links).

