Product Description
Seminar paper from the year 2008 in the subject Speech Science / Linguistics, University of Münster (Englisches Seminar), course: Computational Text Analysis, language: English, abstract: In this paper I will show how the type/token ratio of a text can be computed using the programming language Perl. First, an overview of the topic will be given, including definitions of the terms type and token and an explanation of how they are used in the context of this program. I will then explain how the program works and discuss the reasons for its shortcomings. Although the program is rather simple, some knowledge of Perl will be needed for the respective parts of this paper. I will then present a short analysis of several texts and their respective type/token ratios. These texts were taken from the British National Corpus and Project Gutenberg. The results will show the need for a different measure of lexical density; one example of such a measure is the mean type/token ratio, which I will discuss briefly. The conclusion offers a short critique of the expressiveness of type/token ratios as well as a brief overview of current research on this topic.
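To illustrate the computation the abstract describes: the type/token ratio divides the number of distinct word forms (types) by the total number of running words (tokens), and the mean type/token ratio averages this over fixed-size segments to reduce sensitivity to text length. The paper's own program is written in Perl; the sketch below is a minimal Python rendering of the same idea, with an assumed tokenization (lowercased alphabetic word forms), not the author's implementation.

```python
import re

def type_token_ratio(text):
    # Tokens: lowercased word forms; types: distinct tokens.
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def mean_type_token_ratio(text, chunk_size=1000):
    # Mean TTR: split the token stream into fixed-size chunks,
    # compute the TTR of each chunk, and average the results.
    tokens = re.findall(r"[a-z']+", text.lower())
    chunks = [tokens[i:i + chunk_size]
              for i in range(0, len(tokens), chunk_size)]
    ratios = [len(set(c)) / len(c) for c in chunks if c]
    return sum(ratios) / len(ratios) if ratios else 0.0

sample = "the cat sat on the mat"
print(type_token_ratio(sample))  # 5 types / 6 tokens = 0.8333...
```

Because longer texts necessarily repeat common words, the plain TTR falls as text length grows, which is why the paper turns to the mean type/token ratio as a length-robust alternative.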