Using Comparable Corpora for Under-Resourced Areas of Machine Translation
Hardcover


Product Description
This book provides an overview of how comparable corpora can be used to overcome the lack of parallel resources when building machine translation systems for under-resourced languages and domains. It presents a wealth of methods and open tools for building comparable corpora from the Web, evaluating comparability and extracting parallel data that can be used for the machine translation task. It is divided into several sections, each covering a specific task such as building, processing, and using comparable corpora, focusing particularly on under-resourced language pairs and domains.

The book is intended for anyone interested in data-driven machine translation for under-resourced languages and domains, especially for developers of machine translation systems, computational linguists and language workers. It offers a valuable resource for specialists and students in natural language processing, machine translation, corpus linguistics and computer-assisted translation, and promotes the broader use of comparable corpora in natural language processing and computational linguistics.
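To make "evaluating comparability" concrete: one simple family of approaches (the table of contents names a "lexical mapping based metric") maps source-document words into the target language through a seed bilingual lexicon and scores lexical overlap with the target document. The sketch below is an illustrative simplification under that assumption, not the book's actual method; the toy German-English lexicon and token lists are hypothetical data.

```python
from collections import Counter
from math import sqrt

def lexical_overlap_score(src_tokens, tgt_tokens, seed_lexicon):
    """Map source-language tokens into the target language via a seed
    bilingual lexicon, then score overlap with cosine similarity.
    Tokens with no lexicon entry are simply dropped."""
    mapped = Counter(seed_lexicon[t] for t in src_tokens if t in seed_lexicon)
    target = Counter(tgt_tokens)
    dot = sum(mapped[w] * target[w] for w in set(mapped) & set(target))
    norm = (sqrt(sum(v * v for v in mapped.values()))
            * sqrt(sum(v * v for v in target.values())))
    return dot / norm if norm else 0.0

# Toy German-English seed lexicon (hypothetical data).
lexicon = {"haus": "house", "katze": "cat", "hund": "dog"}
score = lexical_overlap_score(
    ["die", "katze", "und", "der", "hund"],
    ["the", "cat", "chased", "the", "dog"],
    lexicon,
)
print(round(score, 2))  # prints 0.53
```

A score near 1 suggests strongly comparable (near-parallel) documents; a score near 0 suggests unrelated texts. Real metrics of this kind must additionally handle morphology, polysemous lexicon entries, and sparse seed lexica.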

  • Product details
  • Series: Theory and Applications of Natural Language Processing
  • Publisher: Springer / Springer, Berlin
  • Publisher's item no.: 978-3-319-99003-3
  • 1st ed. 2019
  • Publication date: March 2019
  • Language: English
  • Dimensions: 241 mm x 160 mm x 23 mm
  • Weight: 663 g
  • ISBN-13: 9783319990033
  • ISBN-10: 3319990039
About the Authors
Prof. Inguna Skadiņa has been working on language technologies for over 25 years. Her research interests are in machine translation, human-computer interaction, and language resources and tools for under-resourced languages. She has coordinated and participated in many national and international projects related to human language technologies, and has authored or co-authored more than 60 peer-reviewed research papers.

Bogdan Babych is an Associate Professor of Translation Studies at the University of Leeds, UK. He holds a PhD in machine translation and in Ukrainian linguistics. Dr. Babych was a coordinator of the EU FP7 Marie Curie project HyghTra, and received a Leverhulme Early Career Fellowship for his project Translation Strategies in Comparable Corpora. He previously worked as a computational linguist at L&H Speech Products, Belgium.

Robert Gaizauskas is a Professor of Computer Science and head of the Natural Language Processing group, Department of Computer Science, University of Sheffield, UK. His research interests are in computational semantics, information extraction, text summarization and machine translation. He holds a DPhil from the University of Sussex, UK (1992), and has published more than 150 papers in peer-reviewed journals and conference proceedings.

Nikola Ljubešić is an Assistant Professor at the Department of Information Science, University of Zagreb, Croatia, and a researcher at the Jožef Stefan Institute in Ljubljana, Slovenia. His main research interests are in language technologies for South Slavic languages, linguistic processing of non-standard texts, author profiling and social media analytics.

Prof. Dan Tufiş, director of RACAI and full member of the Romanian Academy, has been active in computational and corpus linguistics for more than 30 years. His expertise is in tagging, word alignment, multilingual WSD, SMT, QA in open domains, lexical ontologies, language resource annotation and encoding. He has authored or co-authored more than 250 peer-reviewed papers, book chapters and books.

Andrejs Vasiļjevs is a co-founder and chairman of the board of Tilde, a leading European language technology and localization company. His expertise is in terminology management, machine translation and human-computer interaction. He initiated and coordinated the ACCURAT project as well as several other international research and innovation projects. He holds a PhD in computer sciences from the University of Latvia and a Dr.h. from the Latvian Academy of Sciences.

Table of Contents
1 Introduction
2 Cross-language comparability and its applications for MT
2.1 Introduction: Definition and use of the concept of comparability
2.2 Development and calibration of comparability metrics on parallel corpora
2.2.1 Application of corpus comparability: Selecting coherent parallel corpora for domain-specific MT training
2.2.2 Methodology
2.2.2.1 Description of calculation method
2.2.2.2 Symmetric vs. asymmetric calculation of distance
2.2.2.3 Calibrating the distance metric
2.2.3 Validation of the scores: cross-language agreement for source vs. target sides of TMX files
2.2.4 Discussion
2.3 Exploration of comparability features in document-aligned comparable corpora: Wikipedia
2.3.1 Overview: Wikipedia as a source of comparable corpora
2.3.2 Previous work on using Wikipedia as a linguistic resource
2.3.3 Methodology
2.3.3.1 Document pre-processing
2.3.3.2 Similarity measures
2.3.3.3 Eliciting human judgments
2.3.4 Results and analysis
2.3.4.1 Responses to the questionnaire
2.3.4.2 Inter-assessor agreement
2.3.4.3 Correlation of similarity measures to human judgments
2.3.4.4 Classification task
2.3.5 Discussion
2.3.5.1 Features of 'Similar' articles
2.3.5.2 Measuring cross-language similarity
2.3.6 Section conclusions
2.4 Metrics for identifying comparability levels in non-aligned documents
2.4.1 Using parallel and comparable corpora for MT
2.4.2 Related work
2.4.3 Comparability metrics
2.4.3.1 Lexical mapping based metric
2.4.3.2 Keyword based metric
2.4.3.3 Machine translation based metrics
2.4.4 Experiments and evaluation
2.4.4.1 Data sources
2.4.4.2 Experimental results
2.4.5 Metric application to equivalent extraction
2.4.6 Discussion
2.4.6.1 Advantages and disadvantages of the metrics
2.4.6.2 Using semi-parallel equivalents in MT systems
2.4.7 Conclusion
3 Collecting comparable corpora
3.1 Introduction
3.2 Previous work in collecting comparable corpora
3.2.1 Web crawling
3.2.2 Identifying comparable text
3.3 ACCURAT techniques to collect comparable documents
3.3.1 Comparable corpora collection from Wikipedia
3.3.1.1 Extracting comparable articles
3.3.1.2 Measuring similarity in inter-language linked documents
3.3.2 Comparable corpora collection from news articles
3.3.3 Comparable corpora collection from narrow domains
3.3.3.1 Acquiring comparable documents
3.3.3.2 Aligning comparable document pairs
4 Extracting data from comparable corpora
4.1 Introduction
4.2 Term extraction, tagging, and mapping for under-resourced languages
4.2.1 Related work
4.2.2 Term extraction, tagging, and mapping with the ACCURAT toolkit
4.2.2.1 Term candidate extraction with CollTerm
4.2.2.1.1 Linguistic filtering
4.2.2.1.2 Minimum frequency filter
4.2.2.1.3 Statistical ranking
4.2.2.1.4 Cut-off method
4.2.2.2 Term tagging in documents
4.2.2.3.1 Term tagging evaluation for Latvian and Lithuanian
4.2.2.3.2 Term tagging evaluation for Croatian
4.2.2.3 Term mapping
4.2.2.4 Comparable corpus term mapping task
4.2.2.5 Discussion
4.2.3 Experiments with English and Romanian term extraction
4.2.3.1 Single-word term extraction
4.2.3.2 Multi-word term extraction
4.2.3.3 Experiments and results
4.2.4 Multi-word term extraction and context-based mapping for English-Slovene
4.2.4.1 Resources and tools used
4.2.4.1.1 Comparable corpus
4.2.4.1.2 Seed lexicon
4.2.4.1.3 LUIZ
4.2.4.1.4 ccExtractor
4.2.4.2 Experimental setup
4.2.4.2.1 Term extraction
4.2.4.2.2 Term mapping
4.2.4.2.3 Extension of the seed lexicon
4.2.4.3 Evaluation of the results
4.2.4.3.1 Evaluation of term extraction
4.2.4.3.2 Evaluation of term mapping
4.2.4.4 Discussion
4.3 Named entity recognition using TildeNER
4.3.2 Annotated corpora
4.3.3 System design
4.3.3.1 Feature function selection
4.3.3.2 Data pre-processing
4.3.3.3 NER model bootstrapping
4.3.3.4 Refinement methods
4.3.4 Evaluation
4.3.4.1 Non-comparative evaluation
4.3.4.2 Experimental comparative evaluation
4.3.5 Discussion
4.4 Lexica extraction
4.4.1 Related work
4.4.2 Experiments on bilingual lexicon extraction
4.4.2.1