A companion to the author's Probability and Statistics for Computer Science, this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use). Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:
- classification using standard machinery (naive Bayes; nearest neighbor; SVM)
- clustering and vector quantization (largely as in PSCS)
- PCA (largely as in PSCS)
- variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
- linear regression (largely as in PSCS)
- generalized linear models, including logistic regression
- model selection with the Lasso and elastic net
- robustness and M-estimators
- Markov chains and HMMs (largely as in PSCS)
- EM in fairly gory detail; long experience teaching this suggests one detailed example is required, which students hate; but once they've been through that, the next one is easy
- simple graphical models (in the variational inference section)
- classification with neural networks, with a particular emphasis on image classification
- autoencoding with neural networks
- structure learning
Hardcover: $119.99