Posts in blog

Directionality research in translation and interpreting studies: A shorter than short history (1/2)

August 26th, 2013 Posted by blog, directionality, interpreting, inverse translation, methods, mother tongue, native speaker, translation No Comment yet


Generally speaking, directionality research is concerned with the direction from which and into which a translation or interpretation is carried out, and with related concepts. In a way, translation or interpreting direction is present in any translation or interpreting research, since a translation or an interpretation can always be construed as having been done from a specific written, spoken or signed language into another. Building on an old debate in the profession and on anecdotal comments on the issue, which go back at least as far as Pliny the Younger, who in 85 CE advocated translating from Ancient Greek into Latin and vice versa (Robinson 2002), a specific field of translation and interpreting studies (TIS) has developed to investigate directionality.

Directionality-related questions that have been or could be addressed by translation and interpreting researchers include: What are the roots of concepts central to directionality and so-called inverse translation such as ‘(non-)native’ speaker or signer, ‘(non-)mother tongue’ or ‘foreign’ language? How similar/dissimilar are attitudes and norms towards translation or interpreting direction in the various ‘translation cultures’ (Prunč 1997, 2012; Schippel 2008, Grbić et al. 2010)? How homogeneous are the groups of ‘native’ and ‘non-native’ speakers or signers? How do ‘native’ and ‘non-native’ speakers or signers feel about being explicitly or implicitly classified as such? With regard to directionality effects, how similar/dissimilar are specific language pairs and genres/text types? What is the performance of a translator or interpreter out of his/her ‘mother tongue’ as opposed to into his/her ‘mother tongue’? What are the effects when multiple collaborators with different ‘mother tongues’ (such as translators, revisors or validators) are involved in the translation process? How does translation or interpreting direction affect reception? How useful is it to use directionality as an organizing principle of translation/interpreting courses or entire study programs? How much is the ‘nativeness’ factor stressed in job advertisements, how relevant is it to actual hiring practices, and what reasons are given for that?
Now let me give you a brief overview of the research that has been published on translation and interpreting directionality over the years, and provide you with some context. At first, a few scattered, pioneering studies appeared, whose focus was mostly directionality in translation (e.g., McAlester 1992, 2000; Beeby Lonsdale 1996, Marmaridou 1996, Campbell 1998, Stewart 1999, 2000a+b, 2008; Kocijančič Pokorn 2000a+b, Lorenzo 2002, 2003). These studies were followed by a growing number of publications dealing with directionality in interpreting (e.g., Tommola/Helevä 1998, Al-Salman/Al-Khanji 2002, Lim 2005, Monti et al. 2005, Bartłomiejczyk 2006, Chang/Schallert 2007, Bendazzoli 2010, Opdenhoff 2011). In the noughties, when directionality research seems to have enjoyed its heyday in TIS, directionality-dedicated conference proceedings and a thematic special issue were published (Grosman et al. 2000, Kelly et al. 2003, Godijns/Hinderdael 2005). A fair number of works produced during that period approached directionality from an emancipatory perspective, which is attested to by titles such as Challenging the Traditional Axioms: Translation Into a Non-Mother Tongue (Pokorn 2005) or Into Forbidden Territory: The Audacity to Translate into a Second Language (Feltrin-Morris 2008).

Most recent contributions to directionality research have been made in translation process research (e.g., Hirci 2007, Alves et al. 2009, Pavlović/Jensen 2009, Maier 2011, Chang 2011, Wimmer 2011, Ferreira Alves 2010, 2012; Rodríguez/Schnell 2012, Ferreira 2014, Barbosa de Lima Fonseca 2015, Hunziker Heeb 2016, Ferreira/Schwieter 2017). Because of the nature and research designs of process studies in TIS, this does not come as a surprise. In the often quantitative studies typical of this TIS research tradition, directionality or related concepts may appear as explicitly spelled-out, highly visible dependent, independent or control variables (Krings 2005 mentions translation direction as one of his “task factors”).

Directionality has now been established as an important issue in TIS and has received its own entry in major reference works (Shuttleworth/Cowie 1997, Delisle et al. 1999, Beeby 2009, Palumbo 2009, Pokorn 2011, Bartłomiejczyk 2015).

TIS researchers will keep exploring the topic of directionality, also from new angles; directionality has recently become an issue in (improving) statistical machine translation and sign language interpreting (van Dijk et al. 2011, Wang/Napier 2013, Nicodemus/Emmorey 2013, 2015; Wang/Napier 2015, Wang 2016) and third language interpreting (Crasborn/van Dijken 2009, Topolovec 2012), for example. Recent conference (conference 1, conference 2), meeting (meeting 1, meeting 2) and workshop contributions, including the occasional keynote, a special journal section, a special issue on English as a lingua franca and translator/interpreter education including reflections on directionality, observations on directionality in so-called non-Western ‘translation cultures’, a survey reporting on the perception and role of translation direction in the Spanish technical translation market, and ongoing (post-)doctoral research projects are signs that the patient is alive and kicking.

In the rest of the post I would like to turn to conceptual and methodological issues. Taking a cue from other disciplines that have language(s) and communication as their object of study (e.g., Paikeday 1985, Piller 2001, 2002; Davies 2003, 2013; Bonfiglio 2013, Hulstijn 2015), we might want to be more careful when defining and operationalizing key concepts in directionality research. For instance, the criteria for assigning study participants to the ‘native’ or ‘non-native’ group are not always transparent. The linguist Tove Skutnabb-Kangas mentions origin, function, competence and/or identification as criteria we could rely on to determine a person’s mother tongue(s). To avoid threats to our studies’ credibility, we should thus perhaps be more cautious about ‘I know one when I see one’.

In the second and final part of this two-part blog post, I am going to talk about methodology in directionality research and will suggest methodological improvements (such as blinding) to avoid biased results.

  • Feltrin-Morris, Marella. 2008. Into forbidden territory. The audacity to translate into a second language. PhD thesis, Binghamton University/SUNY.


by Matthias Apfelthaler

TREC: The Next Generation

June 10th, 2013 Posted by blog, empirical research, methods, PhD project, TransWorld Airy Lines No Comment yet

The international research network “Translation Research Empiricism Cognition” (TREC) convened in Barcelona for a regular meeting on July 4-5, 2013. This time we also held a preceding seminar on empirical and experimental research in translation, where PhD students presented their ongoing work. These are, in alphabetical order, some of the stars of TREC’s next generation:
José Jorge Amigo Extremera (PETRA, ULPGC) talked about “Fitting culture into Translation Process Research”, where he summarized his project to develop operationalizations of culture and knowledge for empirical and experimental research, drawing from social and situated cognition approaches.
Mariceli Aquino (LETRA, UFMG) presented “A relevance-theoretic study of processing effort in post-editing tasks: an analysis of German modal particles”. She will be using Translog and a Tobii T60 to study post-edited versions of the MT output of text excerpts from a corpus of articles from Deutsche Welle.
Claudine Borg (Aston University) is working on an in-depth case study of post-drafting self-revision of the translation of a novel from French into Maltese through think-aloud, translator observation, interviews, analysis of drafts and ST-TT comparison drawing on corpus-based techniques.
Luis Miguel Castillo (PACTE, UAB) contributed “Acceptability and the acquisition of translation competence: preliminary results”, where he described his goal of tracing the evolution of translation quality throughout the acquisition of translation competence.
Norma Fonseca (LETRA, UFMG) draws from Krings (2001) to distinguish temporal, technical and cognitive aspects of effortful processing during task execution and builds on Alves & Gonçalves (2013) to study cognitive effort during monolingual post-editing processes using key logging, screen recordings, and guided written protocols.
Andrea Hunziker Heeb (ZHAW) struck a vital chord by focusing on ethical issues that may arise with professional translators as research participants. She used a general academic self-evaluation checklist and a code of good practice in research to frame her presentation, which fostered a lively discussion.

Andrea Hunziker Heeb and Annina Meyer (ZHAW) presented the design, methods and hypotheses of a research project focused on ergonomic issues associated with software settings, equipment, and/or physical conditions that might reduce the efficiency of translation by slowing down decision-making and other cognitive processes.

Arlene Koglin (LETRA, UFMG) presented her project “Processing effort and cognitive effects trade-off in metaphor post-editing”. Arlene is using eye tracking, key logging and retrospective protocols to gather data, with Relevance Theory as her frame of reference.
Minna Kumpulainen (University of Eastern Finland) presented an overview of the use of pauses as potential cognitive indicators in translation process research, centering on pause length and its correlation with process segment boundaries.
Gisela Massana Roselló (PACTE, UAB) presented the design and some methodological issues of her research project on the acquisition of translation competence in trainees who have Portuguese as their second foreign language. Language typological proximity between Portuguese and Spanish is a major concern in this project.
Christopher Mellinger (KETRA, Kent State University) is well advanced in his PhD research project on how cognitive effort is distributed during the translation task, which he analyzes through the pause contour of applied cognitive effort when translating with a translation memory. He presented some preliminary findings on how the use of TMs and specific fuzzy match features affects the translation process in Spanish-to-English translation professionals with 4–7 years of experience.
Ana Muñoz Miquel (GENTT, UJI) presented her ongoing work on medical translators’ profiles, where she combines the cognitive notion of translators’ competence with a sociological survey of medical translators’ self-image and a pedagogical perspective on the needs of medical translator trainees.
Christian Olalla Soler (PACTE, UAB) will be using screen recording, translations and questionnaires to study the acquisition of translators’ cultural competence by Spanish trainees with German as their second foreign language, from the perspective of PACTE’s (2003) translation competence model.
Raphael Sannholm (Stockholm University) presented the results of his MA thesis, which examined whether different text types give rise to different foci in the cognitive processes during translation within a fairly homogeneous group of participants, and also outlined his future PhD project on automaticity in the cognitive processes in translation.
Karina Szpak’s (LETRA, UFMG) research project applies the relevance-theoretic concepts of conceptually and procedurally encoded information to eye fixations, time spent, and attention units in order to identify instances of processing effort in translation.

Checking lexical consistency with petraREV

May 27th, 2013 Posted by blog, coherence, checking, tools, localization, petraREV, revision, assisted revision No Comment yet

The bilingual files so frequently used in localization come in very handy when a reviser, whether freelance or in-house, has to work on a text. There are many effective ways to check that a translation's terminology is correct. For example, we can extract the terminology from these files, either manually or with an automatic term extraction system, and then rely on a tool that ensures the correct term has been used in every instance. Unfortunately, most of these options remain out of reach for many translators and revisers, whose time budget for the task does not allow for such methods.


Moreover, even if they chose to extract the terminology present in a text, since there is no sharply defined boundary between what is and is not a term, many terminological inconsistencies can go unnoticed. Some extremely simple and apparently synonymous words, such as añadir and agregar ('to add'), trivial as they may seem at first sight, may deserve to be treated as terms in contexts with strict rules establishing a preference for one of them.

Finally, these terminology checks may be most needed precisely when there is not enough time to submit the translation to a thorough revision, let alone to compile glossaries.

In such cases, it is best to assume that the reviser will have zero time to devote to these chores and will only want to see meaningful results that make the time spent examining them more profitable than a manual revision of the text.

One seemingly obvious option is to build a kind of macro-glossary containing global information on term correspondences for a particular language pair. For example, we might decide that every occurrence of Spain must be translated as España, every occurrence of Canada as Canadá, and so on.

Unfortunately, as soon as we start building this macro-glossary, we realize that its usefulness decreases as we add more terms. Returning to the añadir/agregar example, if we enter both options as accepted translations of the verb add, the system loses the ability to detect inconsistent use of these terms.

The problem is that texts tend to be very regular within themselves but very irregular with respect to one another, that is, when we move from one text to the next. So we can indeed build a macro-glossary to check the terminology of a particular text, but we cannot apply it directly: we first have to fit it to the very text we want to revise.

At this point, two options open up. The first is to emulate statistical training systems: reserve part of the text to train our macro-glossary, and then apply what was learned from that part to the rest of the text. For example, we can painstakingly revise 10% or 20% of the text and then check whether the rest complies with the criteria implicitly established in that part.

Although this considerably reduces the reviser's effort, since all they have to do is draw the line between what gets revised and what does not, an even more efficient method can spare them this decision too, thanks to a curious property of errors: many of the most serious errors are extremely infrequent. So if a line of text contains an error, it is not unreasonable to assume that the error occurs only in that line, and we can therefore use the rest of the text to train our glossary with the information needed to detect it.

Let us illustrate this method with a very simple example. Imagine that in our collection of candidate translations we have specified two possible translations for the term expiration: caducidad and vencimiento. With this information we want to revise the following two segments:

The expiration date cannot be earlier than today.
La fecha de caducidad no puede ser anterior a la actual.
Segment 1

The expiration date is not valid.
La fecha de vencimiento no es válida.
Segment 2

When the proposed algorithm is applied to segment 1, it detects that expiration has been translated as vencimiento in the rest of the translation (here, segment 2). Since that term is not found in segment 1, a warning is displayed.
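The leave-one-out check just illustrated can be sketched in a few lines of Python. This is a hypothetical sketch of the idea, not petraREV's actual implementation; all names are illustrative.

```python
# Leave-one-out consistency check: for each segment, "train" on all the
# other segments and warn when the translation used elsewhere is missing.

def check_consistency(segments, source_term, target_terms):
    """segments: list of (source, target) pairs.
    Returns (segment index, translations expected from the rest of the text)
    for every segment that uses none of the translations seen elsewhere."""
    warnings = []
    for i, (src, tgt) in enumerate(segments):
        if source_term not in src:
            continue
        # Leave-one-out training: translations used in every other segment.
        used_elsewhere = set()
        for j, (other_src, other_tgt) in enumerate(segments):
            if j == i or source_term not in other_src:
                continue
            used_elsewhere.update(t for t in target_terms if t in other_tgt)
        # Warn if this segment uses none of the translations seen elsewhere.
        if used_elsewhere and not any(t in tgt for t in used_elsewhere):
            warnings.append((i, sorted(used_elsewhere)))
    return warnings

segments = [
    ("The expiration date cannot be earlier than today.",
     "La fecha de caducidad no puede ser anterior a la actual."),  # segment 1
    ("The expiration date is not valid.",
     "La fecha de vencimiento no es válida."),                     # segment 2
]
print(check_consistency(segments, "expiration", ["caducidad", "vencimiento"]))
# With only two segments the check is symmetric: each segment is flagged,
# because the "rest of the text" uses the other translation.
```

On a realistic text with many segments, only the odd one out would be flagged, which is what keeps the noise low.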

Of course, this method has several drawbacks. For instance, as soon as an error is repeated in two lines, the system loses the ability to detect it, which rules it out for recurrent errors. Nevertheless, it offers a new way of catching errors that slip through most of the checks performed by today's assisted-revision applications, with the added advantage of generating very little noise.

Cerrado por vacaciones / Closed for holiday

July 11th, 2012 Posted by blog No Comment yet


Copy/paste for translators

June 19th, 2012 Posted by blog, CAT systems, tools, petraREV, translation tools No Comment yet

Although advances in modern word processors and computer-assisted translation programs have considerably eased the translator's tasks, translating a text still involves a good number of mechanical activities. For example, when translating software documentation, the translator must make sure that the translation of each element (button, dialog box, message, etc.) matches what the user will later see on screen. This is done with a software dump listing the text in the source language alongside its translation into the target language. The task can be automated with the terminology management features included in some computer-assisted translation solutions. However, the client often imposes tools that lack such features, so the searches usually end up being performed with a general-purpose tool, such as the humble Notepad or the well-known Search&Replace.


To speed up these tasks, the new version of petraREV includes a new feature, located in a new tab of the Search dialog box. This tab, called Search and copy and described in detail in petraREV's online help, chains together the various operations needed to look up an item in a glossary: it automatically extracts the text from the clipboard, checks whether a translation exists in the specified glossary, notifies the user if it has found one or more translations, copies the result to the clipboard and highlights it with a color code. It thus not only speeds up the translator's work but also reduces the number of steps (keystrokes and mouse movements) required.

petraREV's new feature proposes a model that, in short, inserts an intermediate step, "transform", into the usual "copy/paste" operation. In this intermediate step, an operation is applied to the copied text. For terminology lookup, the transformation consists of searching a glossary, but a transformation can comprise several operations that carry out tasks the translator would otherwise have to perform mechanically. For example, terminology changes that involve a change of gender, such as replacing «ordenador» with «computadora», can be implemented by chaining several substitution operations covering the most common determiners. This strategy does not spare us from reading each sentence to check whether some other element also needs an additional change (adjectives, for instance), but most of the changes can be made not only faster but also more accurately.
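The copy → transform → paste idea can be sketched as a chain of small operations applied to the copied text. This is an illustrative sketch only; the function names are hypothetical and do not reflect petraREV's real interface.

```python
# "Transform" step between copy and paste: a chain of operations on the text.

def glossary_lookup(glossary):
    """Operation: replace the copied text with its glossary translation, if any."""
    return lambda text: glossary.get(text, text)

def replace_chain(pairs):
    """Operation: apply a sequence of substitutions, most specific first."""
    def op(text):
        for old, new in pairs:
            text = text.replace(old, new)
        return text
    return op

def transform(text, operations):
    """Run the copied text through every operation in the chain."""
    for op in operations:
        text = op(text)
    return text

# A gender-aware terminology change, chaining substitutions for the most
# common determiners before falling back to the bare noun.
ops = [replace_chain([
    ("el ordenador", "la computadora"),
    ("un ordenador", "una computadora"),
    ("ordenador", "computadora"),
])]
print(transform("Reinicie el ordenador.", ops))  # Reinicie la computadora.

# The glossary search described above is just another operation in the chain.
print(transform("Spain", [glossary_lookup({"Spain": "España"})]))  # España
```

Because each operation is a plain function on text, new ones (capitalization changes, inversions, etc.) can be added to the chain without touching the rest.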
The question, then, is which operations can make the work easier. Searching glossaries, replacing one text with another and changing capitalization are the first candidates, although this modular approach makes it possible to build more complex ones in order to study their behavior in particular situations. For example, when working with a small set of words that appear in various combinations, chaining inversion and substitution operations can yield a scaled-down version of a machine translation engine, useful for translating software or keyword lists in specific cases. This method of deriving a target text from a source text, vaguely reminiscent of Chomsky's generative grammar, has shown its weaknesses, but its application in certain areas could not only be useful in real life but also help in the attempt to elucidate what goes on in a translator's mind at work. For the moment, the list of operations includes only four options, but it can be extended at will. Which operations would make this feature more useful?

Anthony Pym on experimenting on/with students at the Monterey Institute of International Studies

May 1st, 2012 Posted by blog, empirical research, TransWorld Airy Lines No Comment yet

Investigating Expertise in Interpreting

April 12th, 2012 Posted by bibliographical data, blog, expertise, interpreting, PhD project, The expert's perspective No Comment yet

I ended my last blog post on the expertise approach by saying that using the expertise approach to study interpreting will require some further development of the different constructs. I was talking about how to operationalize Ericsson and Smith’s three-step general method for investigating expert performance. The first step says that the researcher should start with “a detailed analysis of the investigated domain and the skills necessary for experts in that domain and a systematic mapping of cognitive processes for the specific skill”.

As far as I know, there is no exhaustive analysis of the skills necessary for experts in the domain of interpreting, but there are several proposed lists or typologies. And all of us involved in interpreting can come up with longer or shorter lists. In fact, just think about the description of what you need to start interpreting school:

  • Perfect command in most domains of your mother tongue and at least perfect understanding of the foreign languages you work from.
  • Ability to adapt quickly from one situation to another.
  • Ability to grasp quickly, draw conclusions and anticipate the next step.
  • Ability to quickly formulate in another language what you have just heard in one language.
  • Ability to listen and speak simultaneously (at least if you work in simultaneous interpreting).

This is by no means an exhaustive list; it just gives an idea of what we have to deal with when analyzing the skills necessary for experts. Another tricky thing is that a person can very well master these skills without having the ability to interpret, let alone become an expert interpreter. I know many people with perfect native command of two languages who are not interpreters, and who neither would nor could interpret. Maybe you do too.

Expertise research is a fairly active field in interpreting and translation, but I think we have only begun to map the cognitive processes for the specific skill. One hypothesis is that experts’ working memory is better developed than that of other performers. This has been investigated by, for instance, Minhua Liu, who found that experienced interpreters allocated working memory more efficiently than less experienced interpreters. By the way, you can find a very interesting talk that Dr. Liu gave at the Monterey Institute of International Studies here.

Anticipation is another field of presumable importance for interpreters. A volume by Chernov (edited by Setton and Hild) is dedicated to anticipation, and Bartłomiejczyk recently returned to the understanding of anticipation.

Another interesting way to investigate expertise is to look at how experienced interpreters (and possibly experts) deal with situations compared to less experienced interpreters or novices. Then we have already jumped to step two of Ericsson’s list, namely, “detailed analysis of the performance within the frames of general cognitive theory; identification of the systematic process and their link to the structure of the task and the behaviour of the performers”.

As I’m impatient by nature, I did what others have done before me – I jumped to step two. I have looked at what experienced interpreters do when they encounter problems, compared to novice interpreters. I am by no means the first one to do this either. I have very much followed the work of Adelina Hild, who developed a classification of the processing problems interpreters experienced and of how they dealt with them (i.e., which strategies they used to deal with the problems that occurred). For example, let’s say that an interpreter did not hear a particular word. What happens? Does s/he omit that word or even the whole sentence? Or does s/he invent something else? Or does s/he infer from the context what was lost?

It may come as no surprise that, both in my data and in Hild’s, experienced interpreters encounter fewer problems and have more strategies at hand to deal with them. In the example above, the experienced interpreter would most likely be able to infer the missing word from the context, whereas the novice would most likely omit at least the word, maybe the whole sentence.

By looking at what really experienced interpreters do and don’t do compared to less experienced interpreters, we may approach those necessary skills for our domain. For instance, the experienced interpreters I looked at encountered processing problems much less often than the novices. Considering how experienced they were, this is not surprising; what is interesting is that when they did encounter a problem, they had more strategies to choose from. Novice interpreters often chose to omit parts of the message, and when they did not omit, they accepted a lower standard of the utterance or quite simply invented something. Experienced interpreters preferred to generalize in difficult situations; they could also choose to summarize or restructure the utterance. They did, of course, omit or accept a lower standard as well, but much less often than the novices. So it looks like one skill interpreting experts have is mastery of a wide range of interpreting strategies for conveying the message.

My space is once again up, but I’ll continue to pursue the expertise approach in my next post. And let’s see then if we can look at the third point in the list: “presentation of the superior performance through the used cognitive processes and how they were acquired and the structure of the relevant domain knowledge”.

Authors referred to:

Chernov, Gelij V. 2004. Inference and anticipation in simultaneous interpreting: a probability-prediction model. Amsterdam and Philadelphia. John Benjamins.

Bartłomiejczyk, Magdalena. 2008. “Anticipation: a controversial interpreting strategy”. In B. Lewandowska-Tomaszczyk and M. Thelen (eds.), Translation and Meaning 8. Maastricht: Zuyd University.

Ericsson, K. A. 2000. “Expertise in Interpreting: An Expert-performance Perspective”. Interpreting: International Journal of Research and Practice in Interpreting 5:2. 187–220.

Ivanova (Hild), A. 1999. Discourse Processing During Simultaneous Interpreting: An Expertise Approach. Unpublished doctoral dissertation, University of Cambridge.

Liu, M. 2001. Expertise in Simultaneous Interpreting: A Working Memory Analysis. Unpublished doctoral dissertation, the University of Texas at Austin.


by Elisabet Tiselius

3rd International Translation Process Research Workshop

March 22nd, 2012 Posted by blog, cognitive translatology, empirical research, TransWorld Airy Lines No Comment yet

The Iberian Society of Translation and Interpreting Studies (AIETI) has released a call for papers for its 6th International General Conference, to be held at the ULPGC School of Translation & Interpreting in Las Palmas de Gran Canaria (Canary Islands, Spain) on January 23-25, 2013. Guest speakers include Franz Pöchhacker (University of Vienna), Fábio Alves (UFMG, Brazil), and Elena Pérez, President of the Spanish Association of Translators, Copy-editors and Interpreters. Papers, posters, panels and round tables are welcome. More information is available on the conference website.

Within the framework of the AIETI6 Conference, a parallel session will be held on January 21-22, 2013, focusing on the Methodology of Translation and Interpreting Process Research, as a continuation of similar workshops organized by Dr Susanne Göpferich at the University of Graz (2009) and the University of Giessen (2011). Expected speakers include Fábio Alves (UFMG, Brazil), Erik Angelone (Kent State U., USA), Giselle de Almeida (DCU, Ireland), Allison Beeby (PACTE, UAB, Spain), Maureen Ehrensberger-Dow (ZHAW, Switzerland), Birgitta Englund Dimitrova (Stockholm University), Susanne Göpferich (U. of Giessen, Germany), Adelina Hild (Switzerland), Amparo Hurtado (PACTE, UAB, Spain), Isabel Lacruz (Kent State U.), Celia Martín (PETRA, ULPGC, Spain), Ricardo Muñoz (PETRA, ULPGC, Spain), Sharon O’Brien (DCU), Marisa Presas (PETRA, UAB, Spain), Marina Ramos (University of Murcia, Spain), Hanna Risku (U. of Graz, Austria), Ana Mª Rojo (University of Murcia, Spain), Elisabet Tiselius (U. of Bergen/Stockholm University), Gregory M. Shreve (Kent State U./New York Univ.), and Šárka Timarová (Lessius Hogeschool, Belgium).

Do you think horizontally or vertically?

March 8th, 2012 Posted by blog, tools, computing, CAT, computer-assisted translation No Comment yet
When presenting the translator with the text to be translated, computer-assisted translation programs use very different interfaces. One possible classification criterion distinguishes between those that present the segments as lines separated by certain characters (figure 1) and those that present the segments as a table (figure 2).

Figure 1. Segments presented as lines (horizontal)


Figure 2. Segments presented as a table (vertical)


The first group includes most versions of the popular Trados, as well as other programs that seem to have taken inspiration from its way of working, such as WordFast Classic, Translation Workspace, Gtranslator or Launchpad Translations. The second group, probably the larger one, includes Idiom Workbench, SDLx, SDL Trados and POedit, among others.
Apart from their different way of presenting the segments, programs in the first group are also characterized by displaying the segments' interstitial text, that is, the text that does not need to be translated (for example, the images in documents produced with a word processor, or the tags in documents written in a markup language). This is probably the reason behind the choice of the table presentation, since a line-based layout would have forced a considerable amount of information to be duplicated on screen, not to mention the technical difficulties it might have entailed.
The point is that, whatever the reasons behind it in each case, this decision is rarely questioned despite its impact on the translator's work, in terms of both quality and speed. Admittedly, the working method is similar in both cases, since most translators usually start translating a segment by copying the source text exactly and then overwriting (or typing over) the source-language text with the corresponding translation. This method has the advantage of minimizing the chance of making errors when reproducing (or cutting and pasting) elements such as figures, tags, proper names, etc.
Even so, when the text is presented in columns, the smaller horizontal space available forces the translator to work with a larger number of lines and, curiously, certain errors seem to be more frequent in texts translated with this type of interface. For example, it appears more common to repeat words, leave double spaces and, perhaps, make agreement errors between words on different lines.
When revising the text, the situation changes: the table layout makes it more comfortable to follow the translated text and refer back to the source text easily. The linear layout, on the other hand, is only advisable when the source text can be hidden and the translation displayed in the format it was designed for.
Despite the great advances computers have made in recent years, it is curious that some applications, such as computer-assisted translation systems, have clung to old schemes and failed to incorporate concepts that are by now very popular in computing. For example, applications aimed at the general public have long since discovered the value users place on being able to customize the interface (and even the behaviour) of an application to their liking. Incorporating such options into computer-assisted translation systems would speed up the translator's work and raise the quality of the texts.
Beyond the paper metaphor, in which translations are displayed on screen much as they would be printed in a book, a computer interface offers endless possibilities for visual presentation, using, for example, colours and signs that would let us look at the text from different points of view. A good start might be to let users choose how the segments to be translated are presented to them, so that their on-screen workspace stops being a tiny box surrounded by information of no interest to them.

Which computer-assisted translation interface do you prefer?

You can quote me on this one (4/4)

febrero 13th, 2012 Posted by blog, citation styles, electronic sources, The sorcerer's apprentice No Comment yet

This one got me electrified
In the first post of this series on referencing in scientific works, we looked at what to cite in general. In the second one, we focused on how to cite title identifiers and how to apply the author-date system. In the third one, we offered some information about major world naming systems. This fourth and last post rounds off the series with a short overview of how to handle references to electronic sources.

Websites, e-mails, DVDs and online journal articles are all electronic sources, to name but a few. As a rule of thumb, when it comes to referencing electronic sources, they should be handled according to the same criteria as printed media.

For in-text citations, Harvard suggests naming the author, the year of publication, and the page number(s) when available; for detailed information on the Harvard style, see the second post of this series.

Things get a little more complicated when listing the references at the end of your work, as it is not always easy to know what to include in your reference and where to find this information. Basically, you should give as much data as possible on the authorship, source location and availability. Remember that, when some data are missing, there are appropriate abbreviations to indicate so, such as [s. n.] for sine nomine (no name of publisher), [s. d.] for sine dato (no date of publication) and [s. l.] for sine loco (no place of publication).
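As a quick illustration of these placeholders (a sketch only, not tied to any particular style guide; the record fields and values are hypothetical), one might fill them in mechanically like this:

```python
# Latin placeholder for each bibliographic field when the data are missing.
PLACEHOLDERS = {'publisher': '[s. n.]', 'date': '[s. d.]', 'place': '[s. l.]'}

def field(record, key):
    """Return the field value, or the conventional abbreviation if absent."""
    return record.get(key) or PLACEHOLDERS[key]

# A hypothetical record with no publication date.
record = {'publisher': 'Routledge', 'date': None, 'place': 'London'}

print(f"{field(record, 'place')}: {field(record, 'publisher')}, {field(record, 'date')}")
# London: Routledge, [s. d.]
```

The point is simply that the placeholder stands in for the missing datum, so the reader can tell the difference between "not given in the source" and "forgotten by the author".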

As we already learned in the second and third posts of the series, citation styles are all quite alike, but they tend to differ from each other to some extent. When no guidelines are provided, being consistent is even more important than the style you choose; that also holds for referencing electronic sources. While Harvard, for example, recommends giving additional information on the electronic location (such as a URL or a database) and on access data (that is, when the source was viewed or downloaded), APA proposes citing the latter extra information only when a web page is likely to move to another (virtual) place. To bring some order into this chaos, a numbering system for identifying electronic documents has been established and is becoming increasingly popular: the Digital Object Identifier (DOI) system.
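A DOI can be turned into a stable link through the doi.org resolver, which is what makes it so useful in a reference list. A minimal sketch (the sample DOI string is just an example; any valid DOI works the same way):

```python
def doi_url(doi):
    """Build the canonical doi.org resolver URL for a DOI string."""
    return f"https://doi.org/{doi.strip()}"

print(doi_url("10.1000/182"))  # https://doi.org/10.1000/182
```

Unlike a plain URL, the DOI stays the same even if the publisher moves the document, since the resolver is updated to point at the new location.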

Check the four main styles addressed in the other posts of the series: APA, Chicago, Harvard and MLA. In any case, the location and availability of the document are additional pieces of information that should be provided, perhaps along with the DOI when available. The date of consultation is also important, for many documents disappear after a while, especially after website updates. If you ever look for a reference in an article and it turns out it has already disappeared, do not forget to check Internet archives such as the Wayback Machine, in case an earlier version has been kept there.

The world of citing

The four posts of this series are meant to be a kind of map of referencing; whatever route you choose on your journey through the world of citing, at the end of the day the most important thing is to have travelled it in full.

Bon voyage!


by P. Klimant
