Search Results

Multilingual term extraction from parallel corpora – A methodology for the automatic extraction of verbal structures and their translation equivalents

Többnyelvű terminuskivonatolás párhuzamos korpuszból – igei szerkezetek és fordításaik automatikus kivonatolásának módszere

Magyar Terminológia
Authors:
Tamás Váradi
and
Enikő Héja

Summary

The aim of this paper is to demonstrate that the methodology used to extract one-token translation candidates from parallel corpora can be extended to retrieve multi-word verbal structures. From a terminological point of view, the relevance of this technique is that it provides terminologists with empirical data and ample term candidates, thus facilitating their work.

Verbal structures were retrieved from the parallel corpus in a semi-automatic way: a broader range of automatically recognized verbal structures was manually narrowed down to a smaller set of structures that are relevant from a translation point of view. In the next step, every occurrence of the selected verbal structures was merged into a one-token unit in the parallel corpus, so that they could serve as input to the alignment algorithm. Finally, a core dictionary was obtained comprising multi-word verbal structures, their possible translations and the contexts in which they appear.
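The merging step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name, the token-list representation and the underscore-joining convention are all assumptions, and the real system would feed the merged corpus to a statistical word aligner.

```python
def merge_structures(tokens, structures):
    """Merge each occurrence of a multi-word verbal structure into a single
    token, so that an alignment algorithm treats it as one translation unit."""
    merged = []
    i = 0
    while i < len(tokens):
        match = None
        for struct in structures:  # each structure is a tuple of tokens
            if tuple(tokens[i:i + len(struct)]) == struct:
                match = struct
                break
        if match:
            merged.append("_".join(match))  # e.g. "take_into_account"
            i += len(match)
        else:
            merged.append(tokens[i])
            i += 1
    return merged

print(merge_structures(
    ["they", "take", "into", "account", "the", "data"],
    [("take", "into", "account")]
))
# → ['they', 'take_into_account', 'the', 'data']
```

After this preprocessing, both sides of the parallel corpus consist of one-token units only, so a standard one-token alignment method applies unchanged.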

Restricted access

Abstract

The Word-in-Context (WiC) corpus, which forms part of the SuperGLUE benchmark dataset, focuses on a specific sense disambiguation task: the task is to decide whether two occurrences of a given target word in two different contexts convey the same meaning or not. Unfortunately, the WiC database exhibits relatively low inter-annotator agreement, which implies that the meaning discrimination task is not well defined even for humans. The present paper aims to tackle this problem by anchoring semantic information to observable surface data. To do so, we experimented with a graph-based distributional approach, where both sparse and dense adjectival vector representations served as input. As expected, the algorithm is able to anchor the semantic information to contextual data, and therefore it can provide clear and explicit criteria for when the same meaning should be assigned to two occurrences. Moreover, since this method does not rely on any external knowledge base, it should be suitable for any low- or medium-resourced language.
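To make the WiC task format concrete, here is a deliberately simple distributional baseline: it decides "same sense" when the two contexts of the target word are similar enough under cosine similarity of bag-of-words vectors. This is only an illustration of anchoring the decision to observable surface data; the function names and the threshold are assumptions, and the paper's actual method is a graph-based algorithm over adjectival vector representations, not this baseline.

```python
import math
from collections import Counter

def context_vector(sentence, target):
    """Bag-of-words vector of the words surrounding the target word."""
    return Counter(w for w in sentence.lower().split() if w != target)

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def same_sense(sent1, sent2, target, threshold=0.5):
    """WiC-style decision: True if the two contexts of `target`
    are similar enough to assign the same meaning."""
    return cosine(context_vector(sent1, target),
                  context_vector(sent2, target)) >= threshold

same_sense("the bank raised interest rates on the loan",
           "the bank lowered interest rates on the mortgage",
           "bank")   # similar financial contexts → True
```

The explicit threshold is the point of the illustration: unlike human annotation, the criterion for "same meaning" is stated directly in terms of observable contextual data.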

Open access