Posts in empirical research

TREC: The Next Generation

June 10th, 2013. Posted in blog, empirical research, methods, PhD project, TransWorld Airy Lines

The international research network “Translation Research Empiricism Cognition” (TREC) convened in Barcelona for a regular meeting on July 4-5, 2013. This time we also held a preceding seminar on empirical and experimental research in translation, where PhD students presented their ongoing work. These are, in alphabetical order, some of the stars of TREC’s next generation:
 
José Jorge Amigo Extremera (PETRA, ULPGC) talked about “Fitting culture into Translation Process Research”, where he summarized his project to develop operationalizations of culture and knowledge for empirical and experimental research, drawing from social and situated cognition approaches.
 
Mariceli Aquino (LETRA, UFMG) presented “A relevance-theoretic study of processing effort in post-editing tasks: an analysis of German modal particles”. She will be using Translog and a Tobii T60 eye tracker to study post-edited MT output of text excerpts from a corpus of Deutsche Welle articles.
 
Claudine Borg (Aston University) is working on an in-depth case study of post-drafting self-revision in the translation of a novel from French into Maltese, through think-aloud, translator observation, interviews, analysis of drafts, and ST-TT comparison drawing on corpus-based techniques.
 
Luis Miguel Castillo (PACTE, UAB) contributed “Acceptability and the acquisition of translation competence: preliminary results”, where he described his goal of tracing the evolution of translation quality throughout the acquisition of translation competence.
 
Norma Fonseca (LETRA, UFMG) draws from Krings (2001) to distinguish temporal, technical and cognitive aspects of effortful processing during task execution and builds on Alves & Gonçalves (2013) to study cognitive effort during monolingual post-editing processes using key logging, screen recordings, and guided written protocols.
 
Andrea Hunziker Heeb (ZHAW) struck a vital chord by focusing on ethical issues that may arise with professional translators as research participants. She used a general academic self-evaluation checklist and a code of good practice in research to frame her presentation, which fostered a lively discussion.

Andrea Hunziker Heeb and Annina Meyer (ZHAW) presented the design, methods and hypotheses of a research project on ergonomic issues associated with software settings, equipment, and/or physical conditions that might impede the efficiency of translation by slowing down decision-making and other cognitive processes during translation.

Arlene Koglin (LETRA, UFMG) presented her project “Processing effort and cognitive effects trade-off in metaphor post-editing”. Arlene is using eye tracking, key logging and retrospective protocols to gather data, with Relevance Theory as her frame of reference.
 
Minna Kumpulainen (University of Eastern Finland) presented an overview of the use of pauses as potential cognitive indicators in translation process research, centering on pause length and its correlation with process segment boundaries.
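(As an aside for readers new to pause analysis: the sketch below is a purely illustrative Python toy, not taken from Minna's work. It assumes a keystroke log as (timestamp in ms, key) pairs and flags pauses above an assumed threshold as candidate segment boundaries.)

    # Toy sketch: flag long inter-keystroke pauses as candidate segment boundaries.
    # The log format and the 2000 ms threshold are illustrative assumptions.
    def candidate_boundaries(keylog, threshold_ms=2000):
        """Return (index, pause_ms) pairs where the pause before keystroke i
        exceeds threshold_ms."""
        return [(i, keylog[i][0] - keylog[i - 1][0])
                for i in range(1, len(keylog))
                if keylog[i][0] - keylog[i - 1][0] >= threshold_ms]

    log = [(0, "D"), (120, "e"), (260, "r"), (2900, " "), (3050, "M")]
    print(candidate_boundaries(log))  # -> [(3, 2640)]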
 
Gisela Massana Roselló (PACTE, UAB) presented the design and some methodological issues of her research project on the acquisition of translation competence in trainees who have Portuguese as their second foreign language. The typological proximity between Portuguese and Spanish is a major concern in this project.
 
Christopher Mellinger (KETRA, Kent State University) is well advanced in his PhD research project on how cognitive effort is distributed during the translation task, which he analyzes through the pause contour of the cognitive effort applied when translating with a translation memory. He presented some preliminary findings on how TM use and specific fuzzy-match features affect the translation process in Spanish-to-English translation professionals with 4-7 years of experience.
 
Ana Muñoz Miquel (GENTT, UJI) presented her ongoing work on medical translators’ profiles, where she combines the cognitive notion of translator competence with a sociological survey of medical translators’ self-image and a pedagogical perspective on the needs of medical translator trainees.
 
Christian Olalla Soler (PACTE, UAB) will be using screen recording, translations and questionnaires to study the acquisition of translators’ cultural competence by Spanish trainees with German as their second foreign language, from the perspective of PACTE’s (2003) translation competence model.
 
Raphael Sannholm (Stockholm University) presented the results of his MA thesis, which checked whether different text types give rise to different foci in the cognitive processes during translation within a fairly homogeneous group of participants, and also outlined his future PhD project on automaticity in the cognitive processes of translation.
 
Karina Szpak’s (LETRA, UFMG) research project applies the relevance-theoretic concepts of conceptually and procedurally encoded information to study eye fixations, time spent, and attention units, in order to identify instances of processing effort in translation.
 

Anthony Pym on experimenting on/with students at the Monterey Institute of International Studies

May 1st, 2012. Posted in blog, empirical research, TransWorld Airy Lines

3rd International Translation Process Research Workshop

March 22nd, 2012. Posted in blog, cognitive translatology, empirical research, TransWorld Airy Lines

The Iberian Society of Translation and Interpreting Studies (AIETI) has released a call for papers for its 6th International General Conference, to be held at the ULPGC School of Translation & Interpreting in Las Palmas de Gran Canaria (Canary Islands, Spain) on January 23-25, 2013. Guest speakers include Franz Pöchhacker (University of Vienna), Fábio Alves (UFMG, Brazil), and Elena Pérez, President of the Spanish Association of Translators, Copy-editors and Interpreters. Papers, posters, panels and round tables are welcome. More information on the conference website.

Within the framework of the AIETI6 Conference, a parallel session will be held on January 21-22, 2013, focusing on the Methodology of Translation and Interpreting Process Research, as a continuation of similar workshops organized by Dr Susanne Göpferich at the University of Graz (2009) and the University of Giessen (2011). Expected speakers include Fábio Alves (UFMG, Brazil), Erik Angelone (Kent State U., USA), Giselle de Almeida (DCU, Ireland), Allison Beeby (PACTE, UAB, Spain), Maureen Ehrensberger-Dow (ZHAW, Switzerland), Birgitta Englund Dimitrova (Stockholm University), Susanne Göpferich (U. of Giessen, Germany), Adelina Hild (Switzerland), Amparo Hurtado (PACTE, UAB, Spain), Isabel Lacruz (Kent State U.), Celia Martín (PETRA, ULPGC, Spain), Ricardo Muñoz (PETRA, ULPGC, Spain), Sharon O’Brien (DCU), Marisa Presas (PETRA, UAB, Spain), Marina Ramos (University of Murcia, Spain), Hanna Risku (U. of Graz, Austria), Ana Mª Rojo (University of Murcia, Spain), Elisabet Tiselius (U. of Bergen/Stockholm Univ.), Gregory M. Shreve (Kent State U./New York Univ.), and Šárka Timarová (Lessius Hogeschool, Belgium).

The times, they are a-changin’ (3/3)

January 2nd, 2012. Posted in aside to camera, blog, empirical research, methods, reliability

In the first post of this threefold series, I praised e-journals and suggested that specialization might lead to overall improvements in journal quality. Let us welcome TC3, a promising addition to the short list of specialized journals within Translation and Interpreting Studies. In the second one, I sketched some advantages of a few digital resources, and of using on-line tools to build a community of practice. Here I would like to address what I think amounts to a precondition for the success of any attempt at building a community of practice, namely setting high and common standards.

I will go straight to the point: I would like to argue that combining reproducible research with open access may well be the best option to bring about these standards.
 

Now you see it, now you don’t

You try to repeat a fascinating piece of research reported in a journal article. You follow it step by step, veeeery carefully, only to realize soon that a lot of missing bits make it unlikely that you are actually replicating the original test by any rigorous account, and you settle for approximation instead of comparability. Your results are different, but you cannot really hypothesize why.

A colleague asks you how you handled certain details in an old research project. It takes you a long time to find the appropriate materials, and even longer to reconstruct some of the rationales behind decisions that were obviously taken but perhaps never stated as explicitly as now seems appropriate. Yep, sometimes some of us cannot even make sense of our own old materials.

Yet another typical case: you would like to find support for a given hypothesis, and the setting you establish to put it to the test actually fits quite well with that of a previous piece of research, so their data might really be of use to you. But you will never know, because many of the conditions were implicit in the published paper or had not been recorded. You may become unnecessarily stuck with a reduced number of subjects. Testing your new hypothesis on their old materials might also have been a sound preliminary step for your planned research project, but you can't, because it seems very difficult to get hold of those materials. You cannot even challenge the interpretation that was offered then by reanalyzing the same data from a different perspective, because they are not available.

These are only some of the problems we have to deal with in our daily research efforts to study translators’ and interpreters’ cognition. Process research methods have not yet been standardized enough, and the information structure and contents of research reports still lack field-wide guidelines (where field allows for several readings). In PETRA we are trying to contribute to method standardization by looking at ways to profile texts and subjects. But perhaps starting the other way round, i.e. setting standards for research reporting in future efforts, might simply be a more practical way to reach both goals. That is, setting report standards might prompt agreements on some methodological standards.

A simple strategy to draft tentative report guidelines might be to let information standards emerge from the demands of the readers of research reports. A fuzzy network of researchers who mutually recognize each other as belonging to the field of translation and interpreting process research might, after a while, reach agreement on initial, tentative report and methodological standards by exchanging information about their work. They might simply do so by spotting new, additional information demands when trying to re-use that information for their own goals, until these information needs are streamlined and settled. In any case, it should be an additive and realistic process: a study of past projects to be applied only to new ones.

Reproducible research

A publication is not the research project itself, but merely a report on that piece of research. That is why one single research project may lead to many publications. Current information and communication technologies allow for the exchange of all research-related information in digital formats of any kind (first post, again). Today’s digital scholarship comprises the publications plus the complete environment and the full set of instructions applied to carry out the project. Simply because we can? I don’t think so. Rather, we will benefit from accessing the whole lot. For instance, providing additional materials might expand other researchers’ possibilities to make more solid observations by re-using somebody else’s tools (e.g. questionnaires) and even some data, when possible.

Replicating research projects often yields different results, in all fields, not just in ours. Statistics became a discipline partly to come to the rescue in such scenarios. However, many differences in replicated results may often be ascribed to a lack of information in the research report. As I wrote above, digital media let us share a wealth of information which would make our job easier by letting us re-use each other’s data. That information should include as many of the actual materials as possible, in full whenever possible, so that future approaches can analyze the real data and not only filtered reports, which would become just one part of the information provided. In other words, publications would become points of access to further information on a research project, including raw materials, which altogether may be seen as a compendium. A reproducible-research compendium for translation and interpreting process research might include:

1. The research paper

  • Full text (e.g. a PDF)
  • Full bibliographic citation with current publication status (e.g. a BibTeX file)
  • Supporting bibliography (with abstracts, when possible)

2. The experimental setting

  • Explanatory documentation for each factor, if not standardized
  • A list of the parameters, settings, and platforms under which the project was carried out and that led to the published (also unpublished?) results
  • Original texts as presented to subjects
  • Subject profiles [and those of other parties, such as evaluators, when applicable]

3. The Data

  • Raw output data (log files, video recordings, user activity data, questionnaires, tests, etc.)
  • Criteria used for data cleaning
  • Notes on data cleaning and preparation process
  • Prepared data (files after cleaning)
  • Explanatory documentation for each part of the data manipulation process

4. Results

  • All the results, including high-resolution figures, complete tables, full statistical analyses when applicable, etc.
  • Explanatory documentation for each part of the analysis.
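
To make the checklist more tangible, here is a minimal sketch of how such a compendium could be laid out and machine-checked. The folder and file names are my own assumptions, not an agreed standard:

    # Minimal sketch of a compendium layout check (Python).
    # Directory and file names are illustrative assumptions, not a standard.
    from pathlib import Path

    REQUIRED = {
        "paper": ["paper.pdf", "citation.bib", "bibliography.bib"],
        "setting": ["parameters.txt", "source_texts", "subject_profiles.csv"],
        "data": ["raw", "cleaning_criteria.md", "prepared"],
        "results": ["figures", "tables", "analysis_notes.md"],
    }

    def check_compendium(root):
        """List expected items missing from a compendium folder."""
        root = Path(root)
        return [f"{section}/{item}"
                for section, items in REQUIRED.items()
                for item in items
                if not (root / section / item).exists()]

    if __name__ == "__main__":
        for entry in check_compendium("my_study_compendium"):
            print("missing:", entry)

A script like this, shipped with the compendium itself, would let reviewers and re-users verify at a glance that nothing is missing.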
Such a compendium might raise concerns about data privacy and the release of potentially confidential information. This may be sorted out by choosing a combination of the toughest standards. For instance, we might request proof that papers in compendiums have cleared their status regarding their publishers’ self-archiving policies. At the same time, we definitely need to look for ways of recording information that guarantee anonymity while providing a maximum of data, with the consent of the participants. At PETRA we are now exploring the possibilities of on-line data collection through a website (plus a computer application) that lets subjects participate from their usual working environments. Apart from our main goals, we are also interested in finding out whether going distant and on-line has any effect on the behavior of participants in general, and on ecological validity in particular. This includes the availability of participants, their willingness to take part in the project, and other such factors that might be affected by the research situation.
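On the anonymity issue, one common approach, sketched below with invented field names, is to replace direct identifiers with salted pseudonyms before any data are shared, keeping the salt (and any linking table) offline:

    # Sketch of one way to pseudonymize participant records before sharing.
    # Field names and salt handling are illustrative assumptions.
    import hashlib

    SALT = b"keep-this-secret-and-offline"  # stored apart from shared data

    def pseudonym(identifier: str) -> str:
        """Derive a stable, practically non-reversible participant code."""
        return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:10]

    record = {"name": "Jane Doe", "email": "jd@example.org",
              "l2": "German", "years_experience": 6}
    shared = {"participant": pseudonym(record["email"]),  # no direct identifiers
              "l2": record["l2"],
              "years_experience": record["years_experience"]}
    print(shared)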

Establishing standards for a compendium such as the one sketched above is probably still far too ambitious a goal, but that’s what goals are for. Several items in the checklist above need further elaboration. Variability is not easy to reduce when studying tasks as complex as those carried out by translators and interpreters. Clarifying the contents and extent of such items is as good a starting point as any to advance towards standardization. In any case, we need to set the tone and fine-tune our efforts to jointly perform the score. Staying committed to just the information provided by papers, books and book chapters is like trying to stick to your old vinyl records. We all love vinyl records, but just try to carve out a ringtone for your phone from your old 33 rpm’s, or to stream them from your stereo into your computer or your TV. Just try to find that song by what’s-his-face-again (musical categories have really fuzzy edges), or to listen to music from twenty different records in a row without having to attend to the device after each song. In translation and interpreting process research, we seem to be sticking to scholarly vinyls. However, project documentation initiatives such as TransComp are steps in the right direction.

Open access

Once an initial standard has been established, the next step is broader dissemination. Open access offers several advantages. From a technical standpoint, it allows for comparison, fosters rigor in both methods and reports, and may ease junior researchers into replicating projects. If the how-tos are completely available, repeating a research project becomes a suitable and manageable goal for MA candidates. A wider view of the advantages of open access may also take into account the benefits of extending the community of practice to new members while promoting good practices at the same time. Gatekeeping should become less tricky when it is supported more by evidence and general agreement. Also, compendiums might make science more transparent to the public, and accountability is one of the sources of legitimacy for scientific endeavors.

New research projects offering compendiums could be made available in one or several dedicated website repositories, but only when they abide by the standards or properly challenge them. New proposals to enlarge, reduce or modify the standards for research reports would need to prove their relevance for new or established research goals, and also their fit and interaction with the rest of the variables already in the standard. Successive versions of the standard would accommodate new information demands. In other disciplines, standards frequently split to serve competing frameworks, so we may expect standards to vary across different types of projects. That's OK. New communication venues might then specifically target standard-abiding research reports.

There may be other strategies, but reproducible research and open access both seem to be taking root in neighboring disciplines, including linguistics, reading research, writing research, cognitive science, and psychology, so perhaps it is worth giving them a try, ‘cause the times they are a-changin’.
 
 
by R. Muñoz

TPRW2 Giessen 2011

October 2nd, 2011. Posted in blog, cognitive translatology, empirical research, TransWorld Airy Lines

On July 27-29, 2011, the 2nd international research workshop on the Methodology of Translation Process Research was held at the University of Giessen’s Rauischholzhausen Castle in Germany, organized by Susanne Göpferich. The program was very interesting and, indeed, the presentations lived up to their abstracts. Below you may read some highlights from most presentations. Full-screen viewing or PPS-file download recommended (embedded links).

[Embedded slide presentation with workshop highlights; more presentations available from Zogoibi on SlideShare.]

Int’l Conference on Translation Process Research

September 2nd, 2011. Posted in blog, empirical research, TransWorld Airy Lines

Séverine Hubscher-Davidson and her colleagues at Aston University (Birmingham, UK) are organizing, for December 9, 2011, a one-day series of free online live webinars (audio and image streamed to your web browser with “Elluminate”) on several aspects of translation and interpreting process research. Speakers include Sharon O’Brien, Petra Klimant, Riitta Jääskeläinen, Adelina Hild, and Erik Angelone. Exchanges after each intervention will be arranged through a chat. No fee, but registration is necessary. More information here.

When you think you got it

June 24th, 2011. Posted in blog, data analysis, empirical research, empirical-inductive approach, The sorcerer's apprentice, theoretical-deductive approach

So you have been preparing the pilot study for your translation process research project with passion: You have been weighing the pros and cons of every single method and tool of data collection—you just want the best one, of course. You have been choosing the source texts meticulously, screening them from many different angles. You have been defining your experimental subjects and recruiting people, some of whom you finally managed to convince to participate in the test.

Then came D-day: you carried out the experiment or, properly speaking, you let your subjects carry it out. The subjects patiently translated with Translog (http://www.translog.dk); they also filled out some questionnaires. And afterwards, evaluators had a look at their translations.

This is the moment when you start feeling that you made it: the first big step is done, your first tentative data are collected. And it is true, but it is also true that a much steeper step awaits you now:

The analysis of data

(“Night on Bald Mountain” might do as a soundtrack effect here).

You may feel some kind of dizziness or trepidation in the face of the amount of data you have collected. So, now what? What’s next?

First of all: Stay calm and don’t despair!

There are two ways out of this trial:

 

  1. You could look at the materials you collected through the lens of the theory you have been working out beforehand. The theory offers you one or several perspectives on your data (theoretical-deductive approach).

Or else

  2. You could let your data speak first, let the facts emerge and grow, and adapt your interpretation and your theory to the outcome (empirical-inductive approach).

If you were looking for something well defined, if it was an experimental setting, if you just wanted to (dis)prove some theory or theoretical point, if you (or your dissertation director!) are not willing to modify the theory, then option 1 seems to fit your needs better. If you are doing descriptive research, if everything is foggy and you don’t trust the rosy & complex notions you have been using, then option 2 might get you further down the road.

The problem is, your research project may fall somewhere in between and, in any case, the data collected are simply overwhelmingly rich. So, before choosing one way or the other, you may want to ask yourself the following questions (if you haven’t done so already):

What was I looking for?

Which elements in the data are useful for my purpose? Which are not?

How could I check what I wanted to see/know/measure?

When you have found the answers, start selecting your material. Focus on the data you really need for your study aim(s) and set aside what you don’t need for this project; you may be able to use it later in another project, so don’t think it is worthless. Do NOT dispose of anything. Umberto Eco said that one of the main problems in writing a dissertation is chopping off side branches, I mean, mmm, reducing the scope of your goals to a size that can be managed in a few years. The times when a PhD research project was the crowning achievement of a whole career are over; welcome are now dissertations that let you prove you can do high-quality research.

In any case, in translation process research you can hardly avoid triangulation, i.e. cross-referencing qualitative and quantitative data, both to improve intersubjective agreement within your scientific community and to avoid the distortion effects of any single method.
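As a bare-bones illustration of such cross-referencing, the sketch below correlates one quantitative indicator (long pauses per segment, from key logging) with one qualitative indicator (self-reported difficulty from retrospective questionnaires); all numbers and names are invented:

    # Toy triangulation sketch: relate a quantitative indicator to a
    # qualitative one. Data and variable names are invented for the example.
    def pearson_r(xs, ys):
        """Plain Pearson correlation coefficient, no external libraries."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    long_pauses = [2, 5, 1, 7, 4]          # per text segment, from key logging
    reported_difficulty = [1, 4, 2, 5, 3]  # 1-5 scale, from questionnaires
    print(round(pearson_r(long_pauses, reported_difficulty), 2))  # -> 0.93

High agreement between such independent indicators strengthens an interpretation; a mismatch is a warning that one of the methods may be distorting the picture.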

So just take a deep breath now and keep going. Just do not go into the light. Research weather is always foggy.

 

Further reading

Shuttleworth, M. (2009). What is the scientific method? URL: http://www.experiment-resources.com/what-is-the-scientific-method.html
Wang, J. & Khosravi Sereshki, H. (2010). How to implement ITIL successfully? Jönköping. Chapter 2.2. URL: http://www.scribd.com/doc/52864057/13/Approaches-of-Deductive-Inductive-and-Abductive

Training on Research

June 6th, 2011. Posted in blog, empirical research, methods, training, TransWorld Airy Lines

Ph.D. Course in Translation Processes Research, 15-19 August 2011

Theoretical aspects of process research; experimental research design and methodology; data visualization and human translation process modeling; qualitative and quantitative data analysis; user interaction with language technological tools.

CBS Center for Research & Innovation in Translation & Translation Technology

Registration fee for PhD students: 190 €
Registration fee for university researchers: 250 €
Reduced registration fee for the immediately ensuing NLPCS workshop: 110 €.
Registration deadline: 15 July 2011 at noon.
Support requests deadline: 15 June 2011.

Writing Process Research 2011: Keystroke Logging and Eye Tracking

7-9 September 2011

Possibilities and limitations of keystroke logging and eye tracking; good practices for ethnographical and experimental writing process research; complementary nature of observation methods; writing process data exploration; data preparation for further analysis; statistical analysis of writing process data; networking.

University of Antwerp Training School
20 trainees max.
Registration fee: 125 €
Registration deadline: 7 July 2011
Confirmation of participation & grants: 1 August 2011

Three good reasons for carrying out a pilot study

June 3rd, 2011. Posted in blog, empirical research, functionality check, pilot study, preliminary results, reliability, The sorcerer's apprentice

Both in experimental and in descriptive research, a pilot study is a small-scale version or trial run of the main study. A pilot study has to be carried out under the same conditions as the main study; otherwise it would not make much sense, since its main goal is to trace possible sources of error so you can avoid them in the main study. There are at least three good reasons for carrying out a pilot study before you carry out the full-scale experiment:

 

    1. Functionality check of the study design by testing

 

a) the adequacy of research tools and methods

When choosing tools and methods for data collection, you should always consider your study aim(s): What do you really want to measure in your study? By which means can you achieve these aims? With a pilot study, you can check whether your methods and tools are optimal for your purposes.

b) study feasibility

A pilot study will show you if your study design works the way you planned. Maybe you will have to introduce some modifications, for example, regarding the experimental setting.

    2. Collecting preliminary results

A pilot study will give you some first tentative results which may show at least the potential trends in the future outcome of the main study. This will let you read more about the possible ways to interpret these results.

    3. Increasing research reliability

By doing a pilot study, you will increase the reliability of your research project. Other researchers and institutions (maybe even funding bodies) may become more interested in your project.

Take your time with the pilot study. It is one of the best investments of time and effort you can make toward an excellent main study.

Further reading

Altman, D., Burton, N., Cuthill, I., Festing, M., Hutton, J. & Playle, L. (2006). Why do a pilot study? National Centre for the Replacement, Refinement and Reduction of Animals in Research. URL: http://www.nc3rs.org.uk/downloaddoc.asp?id=400

Gilbert, N. (2001). The importance of pilot studies. Social Research Update, 35, URL: http://sru.soc.surrey.ac.uk/SRU35.html

Neunzig, W. (2002). Estudios empíricos en traducción: apuntes metodológicos. In F. Alves (Ed.), O processo de tradução. Cadernos de Tradução, 10, 75-96.

 

by P. Klimant
