An Introduction to Machine Learning (eBook, PDF) - Kubat, Miroslav
-9%
€48.95
Instead of €53.99**
incl. VAT
**Price of the printed edition (hardcover)
Available immediately via download
  • Format: PDF


Product Description
This textbook offers a comprehensive introduction to Machine Learning techniques and algorithms. The Third Edition covers newer approaches that have become highly topical, including deep learning and auto-encoding, introduces temporal learning and hidden Markov models, and treats reinforcement learning in much greater detail. The book is written in an accessible style, with many examples and illustrations, plenty of practical advice, and discussions of simple applications.

The main topics include Bayesian classifiers, nearest-neighbor classifiers, linear and polynomial classifiers, decision trees, rule-induction programs, artificial neural networks, support vector machines, boosting algorithms, unsupervised learning (including Kohonen networks and auto-encoding), deep learning, reinforcement learning, temporal learning (including long short-term memory), hidden Markov models, and the genetic algorithm. Special attention is devoted to performance evaluation, statistical assessment, and many practical issues, ranging from feature selection and feature construction to bias, context, multi-label domains, and the problem of imbalanced classes.
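
As a small taste of the material, the sketch below illustrates one of these techniques, the k-nearest-neighbor rule of Chapter 3, in Python. It is a minimal illustration only: the data, function names, and the choice of k are invented for this example and are not taken from the book, which presents its algorithms language-independently.

    # Illustrative sketch of the k-nearest-neighbor rule (cf. Chapter 3).
    # All names and data here are hypothetical, not taken from the book.
    from collections import Counter
    import math

    def euclidean(a, b):
        """Euclidean distance between two equal-length feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def knn_classify(query, examples, k=3):
        """Label `query` by majority vote among its k nearest training examples.

        `examples` is a list of (feature_vector, label) pairs.
        """
        nearest = sorted(examples, key=lambda ex: euclidean(query, ex[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    # A toy two-class training set: (x, y) points labeled "pos" or "neg".
    training = [((1.0, 1.0), "pos"), ((1.2, 0.8), "pos"),
                ((4.0, 4.2), "neg"), ((3.8, 4.0), "neg")]
    print(knn_classify((1.1, 0.9), training))  # prints: pos

The same majority-vote idea extends directly to the weighted and edited variants listed in Sections 3.5-3.7 of the contents below.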


For legal reasons, this download can only be delivered to customers with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the Author
Miroslav Kubat, Associate Professor at the University of Miami, has been teaching and studying machine learning for over 25 years. He has published more than 100 peer-reviewed papers, co-edited two books, served on the program committees of over 60 conferences and workshops, and is an editorial board member of three scientific journals. He is widely credited with co-pioneering research in two major branches of the discipline: induction of time-varying concepts and learning from imbalanced training sets. He also contributed to research in induction from multi-label examples, induction of hierarchically organized classes, genetic algorithms, and initialization of neural networks. Professor Kubat is also known for his many practical applications of machine learning, ranging from oil-spill detection in radar images to text categorization and tumor segmentation in MR images.

Table of Contents
1 A Simple Machine-Learning Task 1

1.1 Training Sets and Classifiers.......................................................................... 1

1.2 Minor Digression: Hill-Climbing Search....................................................... 5

1.3 Hill Climbing in Machine Learning................................................................ 9

1.4 The Induced Classifier's Performance........................................................ 12

1.5 Some Difficulties with Available Data......................................................... 14

1.6 Summary and Historical Remarks............................................................... 18

1.7 Solidify Your Knowledge.............................................................................. 19

2 Probabilities: Bayesian Classifiers 22

2.1 The Single-Attribute Case............................................................................. 22

2.2 Vectors of Discrete Attributes..................................................................... 27

2.3 Probabilities of Rare Events: Exploiting the Expert's Intuition............. 29

2.4 How to Handle Continuous Attributes....................................................... 35

2.5 Gaussian "Bell" Function: A Standard pdf................................................. 38

2.6 Approximating PDFs with Sets of Gaussians............................................ 40

2.7 Summary and Historical Remarks............................................................... 43

2.8 Solidify Your Knowledge.............................................................................. 46

3 Similarities: Nearest-Neighbor Classifiers 49

3.1 The k-Nearest-Neighbor Rule...................................................................... 49

3.2 Measuring Similarity...................................................................................... 52

3.3 Irrelevant Attributes and Scaling Problems............................................... 56

3.4 Performance Considerations........................................................................ 60

3.5 Weighted Nearest Neighbors....................................................................... 63

3.6 Removing Dangerous Examples.................................................................. 65

3.7 Removing Redundant Examples.................................................................. 68

3.8 Summary and Historical Remarks............................................................... 71

3.9 Solidify Your Knowledge.............................................................................. 72

4 Inter-Class Boundaries: Linear and Polynomial Classifiers 75

4.1 The Essence..................................................................................................... 75

4.2 The Additive Rule: Perceptron Learning.................................................... 79

4.3 The Multiplicative Rule: WINNOW............................................................ 85

4.4 Domains with More than Two Classes........................................................ 88

4.5 Polynomial Classifiers..................................................................................... 91

4.6 Specific Aspects of Polynomial Classifiers................................................... 93

4.7 Numerical Domains and Support Vector Machines................................... 97

4.8 Summary and Historical Remarks.............................................................. 100

4.9 Solidify Your Knowledge............................................................................. 101

5 Artificial Neural Networks 105

5.1 Multilayer Perceptrons as Classifiers.......................................................... 105

5.2 Neural Network's Error............................................................................... 110

5.3 Backpropagation of Error........................................................................... 111

5.4 Special Aspects of Multilayer Perceptrons................................................ 117

5.5 Architectural Issues...................................................................................... 121

5.6 Radial Basis Function Networks................................................................. 123

5.7 Summary and Historical Remarks.............................................................. 126

5.8 Solidify Your Knowledge............................................................................. 128

6 Decision Trees 130

6.1 Decision Trees

6.2 Induction of Decision Trees........................................................................ 134

6.3 How Much Information Does an Attribute Convey?............................... 137

6.4 Binary Split of a Numeric Attribute.......................................................... 142

6.5 Pruning.......................................................................................................... 144

6.6 Converting the Decision Tree into Rules.................................................. 149

6.7 Summary and Historical Remarks.............................................................. 151

6.8 Solidify Your Knowledge............................................................................. 153

7 Computational Learning Theory 157

7.1 PAC Learning................................................................................................. 157

7.2 Examples of PAC Learnability.................................................................... 161

7.3 Some Practical and Theoretical Consequences......................................... 164

7.4 VC-Dimension and Learnability................................................................. 166

7.5 Summary and Historical Remarks.............................................................. 169

7.6 Exercises and Thought Experiments......................................................... 170

8 A Few Instructive Applications 173

8.1 Character Recognition................................................................................ 173

8.2 Oil-Spill Recognition.................................................................................... 177

8.3 Sleep Classification...................................................................................... 181

8.4 Brain-Computer Interface.......................................................................... 185

8.5 Medical Diagnosis........................................................................................ 189

8.6 Text Classification........................................................................................ 192

8.7 Summary and Historical Remarks............................................................ 194

8.8 Exercises and Thought Experiments........................................................ 195

9 Induction of Voting Assemblies 198

9.1 Bagging.......................................................................................................... 198

9.2 Schapire's Boosting..................................................................................... 201

9.3 Adaboost: Practical Version of Boosting................................. 205

9.4 Variations on the Boosting Theme........................................................... 210

9.5 Cost-Saving Benefits of the Approach...................................................... 213

9.6 Summary and Historical Remarks............................................................ 215

9.7 Solidify Your Knowledge............................................................................ 216

10 Some Practical Aspects to Know About 219

10.1 A Learner's Bias.......................................................................................... 219

10.2 Imbalanced Training Sets........................................................................... 223

10.3 Context-Dependent Domains..................................................................... 228

10.4 Unknown Attribute Values......................................................................... 231

10.5 Attribute Selection....................................................................................... 234

10.6 Miscellaneous............................................................................................... 237

10.7 Summary and Historical Remarks............................................................ 238

10.8 Solidify Your Knowledge............................................................................ 240

11 Performance Evaluation 243

11.1 Basic Performance Criteria........................................................................ 243

11.2 Precision and Recall.................................................................................... 247

11.3 Other Ways to Measure Performance..................................................... 252

11.4 Learning Curves and Computational Costs............................................. 255

11.5 Methodologies of Experimental Evaluation............................................. 258

11.6 Summary and Historical Remarks............................................................ 261

11.7 Solidify Your Knowledge............................................................................ 263

12 Statistical Significance 266

12.1 Sampling a Population................................................................................ 266

12.2 Benefiting from the Normal Distribution................................................ 271

12.3 Confidence Intervals................................................................................... 275

12.4 Statistical Evaluation of a Classifier.......................................................... 277

12.5 Another Kind of Statistical Evaluation..................................................... 280

12.6 Comparing Machine-Learning Techniques.............................................. 281

12.7 Summary and Historical Remarks............................................................ 284

12.8 Solidify Your Knowledge............................................................................ 285

13 Induction in Multi-Label Domains 287

13.1 Classical Machine Learning in Multi-Label Domains................................... 287

13.2 Treating Each Class Separately: Binary Relevance......................................... 290

13.3 Classifier Chains........................................................................................... 293

13.4 Another Possibility: Stacking..................................................................... 296

13.5 A Note on Hierarchically Ordered Classes............................................... 298

13.6 Aggregating the Classes.............................................................................. 301

13.7 Criteria for Performance Evaluation........................................................ 304

13.8 Summary and Historical Remarks............................................................ 307

13.9 Solidify Your Knowledge............................................................................ 308

14 Unsupervised Learning 311

14.1 Cluster Analysis........................................................................................... 311

14.2 A Simple Algorithm: k-Means.................................................................... 315

14.3 More Advanced Versions of k-Means...................................................... 321

14.4 Hierarchical Aggregation............................................................................ 323

14.5 Self-Organizing Feature Maps: Introduction........................................... 326

14.6 Some Important Details.............................................................................. 329

14.7 Why Feature Maps?.................................................................................... 332

14.8 Summary and Historical Remarks............................................................ 334

14.9 Solidify Your Knowledge............................................................................ 335

15 Classifiers in the Form of Rulesets 338

15.1 A Class Described By Rules....................................................................... 338

15.2 Inducing Rulesets by Sequential Covering............................................... 341

15.3 Predicates and Recursion.......................................................................... 344

15.4 More Advanced Search Operators............................................................ 347

15.5 Summary and Historical Remarks.............................................................. 349

15.6 Solidify Your Knowledge............................................................................ 350

16 The Genetic Algorithm 352

16.1 The Baseline Genetic Algorithm................................................................ 352

16.2 Implementing the Individual Modules...................................................... 355

16.3 Why it Works............................................................................................... 359

16.4 The Danger of Premature Degeneration................................................. 362

16.5 Other Genetic Operators............................................................................ 364

16.6 Some Advanced Versions........................................................................... 367

16.7 Selections in k-NN Classifiers..................................................................... 370

16.8 Summary and Historical Remarks............................................................ 373

16.9 Solidify Your Knowledge............................................................................ 374

17 Reinforcement Learning 376

17.1 How to Choose the Most Rewarding Action........................................... 376

17.2 States and Actions in a Game.................................................................... 379

17.3 The SARSA Approach................................................................................. 383

17.4 Summary and Historical Remarks............................................................ 384

17.5 Solidify Your Knowledge............................................................................ 384

Index 395


Reviews
"The presentation is mainly empirical, but precise and pedagogical, as each concept introduced is followed by a set of questions which allows the reader to check immediately whether they understand the topic. Each chapter ends with a historical summary and a series of computer assignments. ... this book could serve as textbook for an undergraduate introductory course on machine learning ... ." (Gilles Teyssière, Mathematical Reviews, April, 2017)

"This book describes ongoing human-computer interaction (HCI) research and practical applications. ... These techniques can be very useful in AR/VR development projects, and some of these chapters can be used as examples and guides for future research." (Miguel A. Garcia-Ruiz, Computing Reviews, January, 2019)
"Miroslav Kubat's Introduction to Machine Learning is an excellent overview of a broad range of Machine Learning (ML) techniques. It fills a longstanding need for texts that cover the middle ground of neither oversimplifying nor too technical explanations of key concepts of key Machine Learning algorithms. ... All in all it is a very informative and instructive read which is well suited for undergraduate students and aspiring data scientists." (Holger K. von Joua, Google+, plus.google.com, December, 2016)

"It is superbly organized: each section includes a 'what have you learned' summary, and every chapter has a short summary, accompanying (brief) historical remarks, and a slew of exercises. ... In most of the chapters, there are very clear examples, well chosen and illustrated, that really help the reader understand each concept. ... I did learn quite a bit about very basic machine learning by reading this book." (Jacques Carette, Computing Reviews, January, 2016)