
E-book: Multivariate Statistics: Classical Foundations and Modern Machine Learning

(University of Miami, U.S.A.)
  • Length: 486 pages
  • Publication date: 31-Mar-2025
  • Publisher: Chapman & Hall/CRC
  • Language: English
  • ISBN-13: 9781040312056
  • Format: PDF + DRM
  • Price: €62.60*
  • * This is the final price, i.e., no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and no refunds are given for purchased e-books.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software to unlock and read it. To read this e-book, you must create an Adobe ID. More information here. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.


This book explores multivariate statistics from both traditional and modern perspectives. The first section covers core topics like multivariate normality, MANOVA, discrimination, PCA, and canonical correlation analysis. The second section includes modern concepts such as gradient boosting, random forests, variable importance, and causal inference.

A key theme is leveraging classical multivariate statistics to explain advanced topics and prepare for contemporary methods. For example, linear models provide a foundation for understanding regularization with AIC and BIC, leading to a deeper analysis of regularization through generalization error and the VC theorem. Discriminant analysis introduces the weighted Bayes rule, which leads into modern classification techniques for class-imbalanced machine learning problems. Steepest descent serves as a precursor to matching pursuit and gradient boosting. Axis-aligned trees like CART, a classical tool, set the stage for more recent methods like super greedy trees.
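To make the steepest-descent idea concrete, here is a minimal sketch (not taken from the book) of gradient descent minimizing a least-squares objective with NumPy; the data, fixed step size, and iteration count are arbitrary choices for demonstration.

```python
import numpy as np

# Toy noiseless least-squares problem: minimize ||Xb - y||^2 by steepest descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_b = np.array([1.0, -2.0, 0.5])
y = X @ true_b

b = np.zeros(3)    # start at the origin
step = 0.002       # fixed step size (a simplification of line search)
for _ in range(1500):
    grad = 2 * X.T @ (X @ b - y)   # gradient of the residual sum of squares
    b -= step * grad

# With a small enough step, b converges to the least-squares solution.
print(np.round(b, 3))
```

In this noiseless, full-rank setting the iterates converge to the exact least-squares solution; matching pursuit and gradient boosting replace the full gradient step with greedy, coordinate-wise or learner-wise updates.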

Another central theme is training error. Introductory courses often caution that reducing training error too aggressively can lead to overfitting. At the same time, training error, also referred to as empirical risk, is a foundational concept in statistical learning theory. In regression, training error corresponds to the residual sum of squares, and minimizing it yields the least squares solution, which can overfit. Despite this concern, empirical risk plays a pivotal role in evaluating the potential for effective learning. The principle of empirical risk minimization demonstrates that minimizing training error can be advantageous when paired with regularization. This idea is further examined through techniques such as penalization, matching pursuit, gradient boosting, and super greedy tree constructions.
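The training-error theme can be illustrated with a small sketch (illustrative only, not from the book): fitting polynomials of increasing degree to noisy data drives the training residual sum of squares down monotonically, while the error on noiseless held-out data behaves quite differently. The signal, noise level, and degrees below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 15)
x_test = np.linspace(0, 1, 200)

def f(x):
    return np.sin(2 * np.pi * x)   # true signal (an arbitrary choice)

y_train = f(x_train) + rng.normal(0, 0.3, x_train.size)
y_test = f(x_test)                 # noiseless targets for evaluating fit

results = {}
for degree in (1, 3, 9):
    coefs = np.polyfit(x_train, y_train, degree)   # least-squares polynomial fit
    rss_train = float(np.sum((np.polyval(coefs, x_train) - y_train) ** 2))
    mse_test = float(np.mean((np.polyval(coefs, x_test) - y_test) ** 2))
    results[degree] = (rss_train, mse_test)
    print(f"degree {degree}: training RSS {rss_train:.3f}, test MSE {mse_test:.3f}")
```

Because the polynomial models are nested, the training RSS can only decrease with degree; the held-out error exposes when further reduction stops helping, which is the tension that empirical risk minimization with regularization resolves.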

Key Features:

• Covers both classical and contemporary multivariate statistics.
• Each chapter includes a carefully selected set of exercises that vary in degree of difficulty and are both applied and theoretical.
• The book can also serve as a reference for researchers due to the diverse topics covered, including new material on super greedy trees, rule-based variable selection, and machine learning for causal inference.
• Extensive treatment of trees, providing a comprehensive and unified approach to understanding trees in terms of partitions and empirical risk minimization.
• New content on random forests, including random forest quantile classifiers for class-imbalanced problems, multivariate random forests, subsampling for confidence regions, and super greedy forests. An entire chapter is dedicated to random survival forests, featuring new material on random hazard forests, which extend survival forests to time-varying covariates.

Preface
1. Introduction
2. Properties of Random Vectors and Background Material
3. Multivariate Normal Distribution
4. Linear Regression
5. Multivariate Regression
6. Discriminant Analysis and Classification
7. Generalization Error
8. Principal Component Analysis
9. Canonical Correlation Analysis
10. Newton's Method
11. Steepest Descent
12. Gradient Boosting
13. Detailed Analysis of L2Boost
14. Coordinate Descent
15. Trees
16. Random Forests
17. Random Forests Variable Selection
18. Splitting Effect on Random Forests
19. Random Survival Forests
20. Causal Estimates using Machine Learning
Dr. Hemant Ishwaran's work focuses on advancing machine learning techniques for applications in public health, medicine, and informatics. His contributions include the development of open-source tools, such as R packages for his pioneering methods, including the widely used random survival forests, a significant extension of the random forest algorithm in machine learning. His collaborations with healthcare experts have resulted in precision models for cardiovascular disease (CVD), heart transplantation, cancer staging, and resistance to gene cancer therapy.