15,99 € incl. VAT
Available immediately via download
  • Format: PDF

Product Description
Seminar paper from the year 2021 in the subject Computer Sciences - Computational Linguistics, grade: 1,3, University of Trier (Computerlinguistik und Digital Humanities), course: Mathematische Modellierung, language: English, abstract: In the field of multi-modal machine learning, where the fusion of various sensory inputs shapes learning paradigms, this paper provides an introduction to BERT-based pre-trained visio-linguistic models by summarizing and analyzing two approaches, ViLBERT and VL-BERT, with the aim of highlighting and discussing their distinctive characteristics.

The paper is structured into five chapters as follows. Chapter 2 lays out the fundamental principles by introducing the characteristics of the Transformer encoder and BERT. Chapter 3 presents the selected visual-linguistic models, ViLBERT and VL-BERT. Chapter 4 summarizes and discusses both models. The paper concludes with an outlook in chapter 5.

Transfer learning is a powerful technique in deep learning. First, a model is pre-trained on a specific task. Then fine-tuning is performed by taking the trained network as the basis of a new purpose-specific model and applying it to a separate task. In this way, transfer learning reduces the need to develop new models from scratch for each new task and thus saves time for training and verification. Today, a variety of such pre-trained models exist in computer vision, natural language processing (NLP) and, more recently, for visio-linguistic tasks. The pre-trained models presented later in this paper are both based on and use BERT. BERT, which stands for Bidirectional Encoder Representations from Transformers, is a popular pre-training technique for NLP built on the Transformer encoder architecture.
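The pre-train/fine-tune workflow described above can be illustrated with a minimal sketch. This example is not taken from the paper; it assumes the Hugging Face transformers library and PyTorch, and uses the publicly available bert-base-uncased checkpoint with a toy two-class task purely for illustration.

```python
# Minimal sketch of transfer learning with a pre-trained BERT model.
# Assumption: Hugging Face `transformers` and PyTorch are installed;
# the task (binary classification) and example input are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load weights that were pre-trained on large unlabeled corpora.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # adds a new task-specific head
)

# A toy labeled example for the downstream (fine-tuning) task.
inputs = tokenizer("A dog is playing in the park.", return_tensors="pt")
labels = torch.tensor([1])

# One fine-tuning step: reuse the pre-trained weights and update them
# on the new task instead of training a model from scratch.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```

The same pattern generalizes to the visio-linguistic models discussed in the paper: ViLBERT and VL-BERT are likewise pre-trained first and then fine-tuned on downstream tasks.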

For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.