Abstract
Translation assessment represents a productive line of research in Translation Studies. An array of methods has been trialled to assess translation quality, ranging from intuitive assessment to error analysis and from rubric scoring to item-based assessment. In this article, we introduce a lesser-known approach to translation assessment called comparative judgement. Rooted in psychophysics, comparative judgement rests on the assumption that humans tend to be more accurate when making relative judgements than when making absolute ones. We conducted an experiment, as both a methodological exploration and a feasibility study, in which novice and experienced judges were recruited to assess English-Chinese translation on a computerised comparative judgement platform. The collected data were analysed to shed light on the validity and reliability of the assessment results and on the judges’ perceptions. Our analysis shows that (1) overall, comparative judgement produced valid measures and facilitated judgement reliability, although these results appeared to be affected by translation directionality and the judges’ experience, and (2) the judges were generally confident about their decisions, despite some emergent factors that undermined the validity of their decision-making. Finally, we discuss the use of comparative judgement as a possible method of translation assessment and its implications for future practice and research.
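The abstract does not specify which scaling model the computerised platform uses, but comparative-judgement tools commonly fit a Bradley–Terry (or Thurstone) model to the pairwise outcomes to convert relative judgements into a measurement scale. The sketch below is a minimal Python illustration of that general idea, using Hunter's MM iteration and a hypothetical wins matrix; it is not the authors' implementation.

```python
import numpy as np

def bradley_terry(wins, n_iters=200, tol=1e-8):
    """Estimate Bradley-Terry strengths from pairwise judgements.

    wins[i, j] = number of times translation i was judged better than j
    (diagonal assumed zero). Returns positive strengths normalised to
    sum to 1; their logs give an interval-like quality scale.
    Convergence assumes the comparison graph is strongly connected.
    """
    p = np.ones(wins.shape[0])
    comparisons = wins + wins.T      # n_ij: total i-vs-j comparisons
    total_wins = wins.sum(axis=1)    # W_i: total wins for each item
    for _ in range(n_iters):
        # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
        denom = (comparisons / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = total_wins / denom
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Hypothetical data: 4 translations, entry (i, j) counts
# "translation i preferred over translation j".
wins = np.array([
    [0, 8, 9, 7],
    [2, 0, 6, 5],
    [1, 4, 0, 6],
    [3, 5, 4, 0],
])
strengths = bradley_terry(wins)
quality_scale = np.log(strengths)  # logit-like scale, as in CJ reporting
```

On this reading, the reliability the abstract reports would correspond to how consistently such a fitted scale separates the translations, and the validity question to whether that separation tracks genuine translation quality.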