  • Hardcover


Product description
Advances in Domain Adaptation Theory gives current, state-of-the-art results on transfer learning, with a particular focus on domain adaptation from a theoretical point of view. The book begins with a brief overview of the most popular concepts used to provide generalization guarantees, including sections on Vapnik-Chervonenkis (VC), Rademacher, PAC-Bayesian, robustness-based, and stability-based bounds. It then explains the domain adaptation problem and describes the four major families of theoretical results that exist in the literature, including divergence-based bounds. Next, PAC-Bayesian bounds are discussed, including the original PAC-Bayesian bounds for domain adaptation and their updated version.
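As an illustration of the divergence-based family mentioned in the description, the classical bound of Ben-David et al. is the best-known result of this kind (stated here in a commonly cited form; the exact statement covered in the book may differ):

```latex
% Classical divergence-based domain adaptation bound: for any
% hypothesis h in a class H, the target error is controlled by the
% source error plus a divergence term and an adaptability term.
\varepsilon_T(h) \;\le\; \varepsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{H \Delta H}\!\left(\mathcal{D}_S, \mathcal{D}_T\right)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in H}\bigl[\varepsilon_S(h') + \varepsilon_T(h')\bigr]
% d_{HΔH} measures how well hypotheses in H can distinguish the source
% and target marginal distributions; \lambda is the error of the best
% joint hypothesis on both domains.
```

Adaptation is possible when both the divergence between the two marginal distributions and the joint error \(\lambda\) are small; when either is large, no hypothesis can perform well on both domains simultaneously.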

Additional sections present generalization guarantees based on the robustness and stability properties of the learning algorithm.
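A representative result from the stability-based family is the uniform-stability bound of Bousquet and Elisseeff, sketched below; the domain adaptation guarantees discussed in the book build on bounds of this shape:

```latex
% Uniform-stability generalization bound: if a learning algorithm A has
% uniform stability \beta on training samples S of size m, and the loss
% is bounded by M, then with probability at least 1 - \delta over the
% draw of S,
R(A_S) \;\le\; \widehat{R}(A_S) \;+\; 2\beta
  \;+\; \left(4 m \beta + M\right)\sqrt{\frac{\ln(1/\delta)}{2m}}
% R is the true risk, \widehat{R} the empirical risk; the bound is
% non-trivial when \beta = O(1/m).
```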
About the authors
Ievgen Redko has been an associate professor at INSA Lyon since 2016. He obtained his PhD in Computer Science, specializing in Data Science, in 2015.

Emilie Morvant is an assistant professor at Jean Monnet University in Saint-Etienne. She obtained her PhD in Computer Science in 2013.
Reviews
"This book goes beyond the common assumption of supervised and semi-supervised learning that training and test data obey the same distribution. When the distribution changes, most statistical models must be reconstructed from newly collected data that may be costly or even impossible to get for some applications. Therefore, it becomes necessary to develop approaches that reduce the need and the effort demanded for obtaining new labeled samples, by exploiting data available in related areas and using it further in similar fields. This has created a new family of machine learning algorithms, called transfer learning: a learning setting inspired by the capability of a human being to extrapolate knowledge across tasks to learn more efficiently. This book provides an overview of the state-of-the-art theoretical results in a specific - and arguably the most popular - subfield of transfer learning, called domain adaptation." --Mathematical Reviews Clippings