Multi-Armed Bandits (eBook, PDF)
Qing Zhao

Theory and Applications to Online Learning in Networks

Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem was posed by Thompson in 1933 for the application of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the Bayesian and frequentist schools of ...
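
For readers new to the topic, a minimal sketch of Thompson sampling for Bernoulli bandits, the Bayesian strategy originating from the 1933 paper mentioned above, may help illustrate the problem setup. This is an illustrative sketch, not material from the book; the arm success probabilities and the Beta(1, 1) priors are assumptions chosen for the example.

```python
import random

def thompson_sampling(true_probs, horizon=1000, seed=0):
    """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors."""
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # posterior Beta alpha (successes + 1) per arm
    beta = [1] * k   # posterior Beta beta (failures + 1) per arm
    total_reward = 0
    for _ in range(horizon):
        # Sample a mean-reward estimate for each arm from its posterior
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        # Play the arm whose sampled estimate is largest
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        # Update the chosen arm's posterior with the observed reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

# Hypothetical arm success probabilities for illustration only
print(thompson_sampling([0.3, 0.5, 0.7]))
```

Over time the posterior of the best arm concentrates, so the sampler plays it more often while still occasionally exploring the others, which is the exploration-exploitation trade-off at the heart of the bandit problem.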
