-15%
€93.99 instead of €110.99**
incl. VAT
**Price of the printed edition (paperback)
Available immediately as a download
  • Format: ePub
  • Devices: eReader
  • With copy protection
  • Size: 10.3 MB
Product description
Modern analysis of high energy physics (HEP) data needs advanced statistical tools to separate signal from background. This is the first book to focus on the machine learning techniques used for this task. It will be of interest to almost every high-energy physicist and, thanks to its broad coverage, is also suitable for students.

For legal reasons, this download can only be supplied to customers with a billing address in Austria or Germany.
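To make the signal/background separation problem the book addresses concrete, here is a minimal, hypothetical sketch using boosted decision trees (one of the ensemble methods covered in chapters 14 and 15) on synthetic two-feature data. The choice of scikit-learn and the toy Gaussian model are assumptions made for brevity, not taken from the book, which itself surveys software tools for HEP, R, MATLAB, Java, and Python in chapter 20.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Toy "events": signal and background differ only in the means of two features.
n = 5000
signal = rng.normal(loc=[1.0, 1.0], size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosted decision trees as the classifier (cf. chapters 14-15).
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)

# Rank test events by signal probability and summarize the separation with
# the area under the ROC curve (cf. chapter 10).
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```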

  • Product details
  • Publisher: Wiley-VCH
  • Number of pages: 459
  • Publication date: October 24, 2013
  • Language: English
  • ISBN-13: 9783527677290
  • Item no.: 39968114
About the authors
The authors are experts in the use of statistics in particle physics data analysis. Frank C. Porter is Professor of Physics at the California Institute of Technology and has lectured extensively at Caltech, the SLAC laboratory at Stanford, and elsewhere. Ilya Narsky is Senior MATLAB Developer at The MathWorks, a leading developer of technical computing software for engineers and scientists, and the initiator of StatPatternRecognition, a C++ package for statistical analysis of HEP data. Together, they have taught courses for graduate students and postdocs.
Table of contents
1 Why We Wrote This Book and How You Should Read It
2 Parametric Likelihood Fits
2.1 Preliminaries
2.2 Parametric Likelihood Fits
2.3 Fits for Small Statistics
2.4 Results Near the Boundary of a Physical Region
2.5 Likelihood Ratio Test for Presence of Signal
2.6 sPlots
2.7 Exercises
3 Goodness of Fit
3.1 Binned Goodness of Fit Tests
3.2 Statistics Converging to Chi-Square
3.3 Univariate Unbinned Goodness of Fit Tests
3.4 Multivariate Tests
3.5 Exercises
4 Resampling Techniques
4.1 Permutation Sampling
4.2 Bootstrap
4.3 Jackknife
4.4 BCa Confidence Intervals
4.5 Cross-Validation
4.6 Resampling Weighted Observations
4.7 Exercises
5 Density Estimation
5.1 Empirical Density Estimate
5.2 Histograms
5.3 Kernel Estimation
5.4 Ideogram
5.5 Parametric vs. Nonparametric Density Estimation
5.6 Optimization
5.7 Estimating Errors
5.8 The Curse of Dimensionality
5.9 Adaptive Kernel Estimation
5.10 Naive Bayes Classification
5.11 Multivariate Kernel Estimation
5.12 Estimation Using Orthogonal Series
5.13 Using Monte Carlo Models
5.14 Unfolding
5.14.1 Unfolding: Regularization
6 Basic Concepts and Definitions of Machine Learning
6.1 Supervised, Unsupervised, and Semi-Supervised
6.2 Tall and Wide Data
6.3 Batch and Online Learning
6.4 Parallel Learning
6.5 Classification and Regression
7 Data Preprocessing
7.1 Categorical Variables
7.2 Missing Values
7.3 Outliers
7.4 Exercises
8 Linear Transformations and Dimensionality Reduction
8.1 Centering, Scaling, Reflection and Rotation
8.2 Rotation and Dimensionality Reduction
8.3 Principal Component Analysis (PCA)
8.4 Independent Component Analysis (ICA)
8.4.1 Theory
8.5 Exercises
9 Introduction to Classification
9.1 Loss Functions: Hard Labels and Soft Scores
9.2 Bias, Variance, and Noise
9.3 Training, Validating and Testing: The Optimal Splitting Rule
9.4 Resampling Techniques: Cross-Validation and Bootstrap
9.5 Data with Unbalanced Classes
9.6 Learning with Cost
9.7 Exercises
10 Assessing Classifier Performance
10.1 Classification Error and Other Measures of Predictive Power
10.2 Receiver Operating Characteristic (ROC) and Other Curves
10.3 Testing Equivalence of Two Classification Models
10.4 Comparing Several Classifiers
10.5 Exercises
11 Linear and Quadratic Discriminant Analysis, Logistic Regression, and Partial Least Squares Regression
11.1 Discriminant Analysis
11.2 Logistic Regression
11.3 Classification by Linear Regression
11.4 Partial Least Squares Regression
11.5 Example: Linear Models for MAGIC Telescope Data
11.6 Choosing a Linear Classifier for Your Analysis
11.7 Exercises
12 Neural Networks
12.1 Perceptrons
12.2 The Feed-Forward Neural Network
12.3 Backpropagation
12.4 Bayes Neural Networks
12.5 Genetic Algorithms
12.6 Exercises
13 Local Learning and Kernel Expansion
13.1 From Input Variables to the Feature Space
13.2 Regularization
13.3 Making and Choosing Kernels
13.4 Radial Basis Functions
13.5 Support Vector Machines (SVM)
13.6 Empirical Local Methods
13.7 Kernel Methods: The Good, the Bad and the Curse of Dimensionality
13.8 Exercises
14 Decision Trees
14.1 Growing Trees
14.2 Predicting by Decision Trees
14.3 Stopping Rules
14.4 Pruning Trees
14.5 Trees for Multiple Classes
14.6 Splits on Categorical Variables
14.7 Surrogate Splits
14.8 Missing Values
14.9 Variable Importance
14.10 Why Are Decision Trees Good (or Bad)?
14.11 Exercises
15 Ensemble Learning
15.1 Boosting
15.2 Diversifying the Weak Learner: Bagging, Random Subspace and Random Forest
15.3 Choosing an Ensemble for Your Analysis
15.4 Exercises
16 Reducing Multiclass to Binary
16.1 Encoding
16.2 Decoding
16.3 Summary: Choosing the Right Design
17 How to Choose the Right Classifier for Your Analysis and Apply It Correctly
17.1 Predictive Performance and Interpretability
17.2 Matching Classifiers and Variables
17.3 Using Classifier Predictions
17.4 Optimizing Accuracy
17.5 CPU and Memory Requirements
18 Methods for Variable Ranking and Selection
18.1 Definitions
18.2 Variable Ranking: Sequential Backward Elimination (SBE) and Feature-Based Sensitivity of Posterior Probabilities (FSPP)
18.3 Variable Selection (BECM)
18.4 Exercises
19 Bump Hunting in Multivariate Data
19.1 Voronoi Tessellation and SLEUTH Algorithm
19.2 Identifying Box Regions by PRIM and Other Algorithms
19.3 Bump Hunting Through Supervised Learning
20 Software Packages for Machine Learning
20.1 Tools Developed in HEP
20.2 R
20.3 MATLAB
20.4 Tools for Java and Python
20.5 What Software Tool Is Right for You?
Appendix A: Optimization Algorithms
A.1 Line Search
A.2 Linear Programming (LP)