To date, the assessment of student translations has been largely based on configurations of error categories that address some facet of the translation product. Focal points of such product-oriented error annotation include language mechanics (punctuation, grammar, lexis and syntax, for example) and various kinds of transfer errors. In recent years, screen recording technology has opened new doors for empirically informing translation assessment from a more process-oriented perspective (Massey and Ehrensberger-Dow, 2014; Angelone, 2019). Screen recording holds particular promise for tracing errors documented in the product back to potential underlying triggers, that is, processes that co-occur on screen as the errors emerge. Assessor observations made during screen recording analysis can give shape to process-oriented error categories that parallel and complement product-oriented categories. This paper proposes a series of empirically informed, process-oriented error categories that can be used for assessing translations in contexts where screen recordings are applied as a diagnostic tool. The categories are based on lexical and semantic patterns derived from a corpus-based analysis of think-aloud protocols in which assessors commented on errors in student translations while watching screen recordings of the students' work. It is hoped that these process-oriented error categories will contribute to a more robust means of assessing and classifying errors in translation.
Angelone, E. (2010). Uncertainty, uncertainty management, and metacognitive problem solving in the translation task. In G.M. Shreve & E. Angelone (Eds.), Translation and cognition (pp. 17–40). Amsterdam/Philadelphia: John Benjamins.
Angelone, E. (2015). The impact of process protocol self-analysis on errors in the translation product. In M. Ehrensberger-Dow, B. Englund Dimitrova, S. Hubscher-Davidson, & U. Norberg (Eds.), Describing cognitive processes in translation (pp. 195–214). Amsterdam/Philadelphia: John Benjamins.
Angelone, E. (2019). Process-oriented assessment of problems and errors in translation: Expanding horizons through screen recording. In S. Vandepitte, E. Huertas Barros, & E. Iglesias Fernández (Eds.), Quality assurance and assessment practices in translation and interpreting (pp. 179–199). Hershey, PA: IGI Global.
Angelone, E. (2020). The impact of screen recording as a diagnostic process protocol on inter-rater consistency in translation assessment. The Journal of Specialised Translation, 34, 32–50.
Angelone, E., & Shreve, G.M. (2011). Uncertainty management, metacognitive bundling in problem-solving, and translation quality. In S. O’Brien (Ed.), Cognitive explorations of translation (pp. 108–130). London/New York: Continuum.
Brunette, L. (2000). Towards a terminology for translation quality assessment: A comparison of TQA practices. The Translator, 6(2), 169–182.
Doherty, S. (2017). Issues in human and automatic translation quality assessment. In D. Kenny (Ed.), Human issues in translation technology (pp. 131–148). London: Routledge.
Galán-Mañas, A., & Hurtado Albir, A. (2015). Competence assessment procedures in translator training. The Interpreter and Translator Trainer, 9(1), 63–82.
Huertas Barros, E., & Vine, J. (Eds.). (2019). New perspectives on assessment in translator education. London: Routledge.
Kornacki, M. (2019). The application of eye-tracking in translator training. In P. Pietrzak (Ed.), inTRAlinea. Special issue: New insights into translator training. Retrieved December 20, 2020 from http://www.intralinea.org/specials/article/2421.
Mariana, V., Cox, T., & Melby, A. (2015). The Multidimensional Quality Metrics (MQM) framework: A new framework for translation quality assessment. The Journal of Specialised Translation, 23, 137–161.
Massey, G., & Ehrensberger-Dow, M. (2011). Investigating information literacy: A growing priority in translation studies. Across Languages and Cultures, 12(2), 193–211.
Massey, G., & Ehrensberger-Dow, M. (2014). Looking beyond the text: The usefulness of translation process data. In J. Engberg, C. Heine, & D. Knorr (Eds.), Methods in writing process research (pp. 81–98). Frankfurt: Peter Lang.
Mellinger, C., & Shreve, G.M. (2016). Match evaluation and over-editing in a translation memory environment. In R. Muñoz Martín (Ed.), Re-embedding translation process research (pp. 131–148). Amsterdam/Philadelphia: John Benjamins.
Moorkens, J., Castilho, S., Doherty, S., & Gaspari, F. (Eds.). (2018). Translation quality assessment: From principles to practice. Berlin: Springer.
Mossop, B. (2001). Revising and editing for translators. Manchester: St. Jerome.
Schaeffer, M., Nitzke, J., Tardel, A., Oster, K., Gutermuth, S., & Hansen-Schirra, S. (2019). Eye-tracking revision processes of translation students and professional translators. Perspectives: Studies in Translation Theory and Practice, 27(4), 589–603.
Shreve, G.M., Angelone, E., & Lacruz, I. (2014). Efficacy of screen recording in the other-revision of translations: Episodic memory and event models. MonTI: Monografías de Traducción e Interpretación, 1, 225–245.
TAUS (2016). TAUS quality dashboard. Retrieved November 18, 2020 from https://www.taus.net/data/dqf.