
E-book: Inference and Learning from Data: Volume 2: Inference

(École Polytechnique Fédérale de Lausanne)
  • Format: PDF+DRM
  • Publication date: 22-Dec-2022
  • Publisher: Cambridge University Press
  • Language: English
  • ISBN-13: 9781009218252
  • Price: 89.21 €*
  • * This is the final price, i.e., no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and money paid for purchased e-books is not refunded.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book, you must create an Adobe ID. More information here. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Written in an engaging and rigorous style by a world authority in the field, this is an accessible and comprehensive introduction to techniques for inferring unknown variables and quantities. With downloadable Matlab code and solutions for instructors, this is the ideal introduction for students of data science, machine learning and engineering.

This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This second volume, Inference, builds on the foundational topics established in volume I to introduce students to techniques for inferring unknown variables and quantities, including Bayesian inference, Markov chain Monte Carlo methods, maximum-likelihood estimation, hidden Markov models, Bayesian networks, and reinforcement learning. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including solutions for instructors), 180 solved examples, almost 200 figures, datasets, and downloadable Matlab code. Supported by sister volumes Foundations and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.

Reviews

'Inference and Learning from Data is a uniquely comprehensive introduction to the signal processing foundations of modern data science. Lucidly written, with a carefully balanced choice of topics, this textbook is an indispensable resource for both graduate students and data science practitioners, a piece of lasting value.' Helmut Bölcskei, ETH Zurich

'This textbook provides a lucid and magisterial treatment of methods for inference and learning from data, aided by hundreds of solved examples, computer simulations, and over 1000 problems. The material ranges from fundamentals to recent advances in statistical learning theory; variational inference; neural, convolutional, and Bayesian networks; and several other topics. It is aimed at students and practitioners, and can be used for several different introductory and advanced courses.' Thomas Kailath, Stanford University

'A tour de force comprehensive three-volume set for the fast-developing areas of data science, machine learning, and statistical signal processing. With masterful clarity and depth, Sayed covers, connects, and integrates background fundamentals and classical and emerging methods in inference and learning. The books are rich in worked-out examples, exercises, and links to data sets. Commentaries with historical background and contexts for the topics covered in each chapter are a special feature.' Mostafa Kaveh, University of Minnesota

'This is the first of a three-volume series covering from fundamentals to the many various methods in inference and learning from data. Professor Sayed is a prolific author of award-winning books and research papers who has himself contributed significantly to many of the topics included in the series. With his encyclopedic knowledge, his careful attention to detail, and in a very approachable style, this first volume covers the basics of matrix theory, probability and stochastic processes, convex and non-convex optimization, gradient-descent, convergence analysis, and several other advanced topics that will be needed for volume II (Inference) and volume III (Learning). This series, and in particular this volume, will be a must-have for educators, students, researchers, and technologists alike who are pursuing a systematic study, want a quick refresh, or may use it as a helpful reference to learn about these fundamentals.' Jose Moura, Carnegie Mellon University

'Volume I of Inference and Learning from Data provides a foundational treatment of one of the most topical aspects of contemporary signal and information processing, written by one of the most talented expositors in the field. It is a valuable resource both as a textbook for students wishing to enter the field and as a reference work for practicing engineers.' Vincent Poor, Princeton University

'Inference and Learning from Data, Vol. I: Foundations offers an insightful and well-integrated primer with just the right balance of everything that new graduate students need to put their research on a solid footing. It covers foundations in a modern way - emphasizing the most useful concepts, including proofs, and timely topics which are often missing from introductory graduate texts. All in one beautifully written textbook. An impressive feat! I highly recommend it.' Nikolaos Sidiropoulos, University of Virginia

'This exceptional encyclopedic work on learning from data will be the bible of the field for many years to come. Totaling more than 3000 pages, this three-volume book covers in an exhaustive and timely manner the topic of data science, which has become critically important to many areas and lies at the basis of modern signal processing, machine learning, artificial intelligence, and their numerous applications. Written by an authority in the field, the book is really unique in scale and breadth, and it will be an invaluable source of information for students, researchers, and practitioners alike.' Peter Stoica, Uppsala University

'Very meticulous, thorough, and timely. This volume is largely focused on optimization, which is so important in the modern-day world of data science, signal processing, and machine learning. The book is classical and modern at the same time - many classical topics are nicely linked to modern topics of current interest. All the necessary mathematical background is covered. Professor Sayed is one of the foremost researchers and educators in the field and the writing style is unhurried and clear with many examples, truly reflecting the towering scholar that he is. This volume is so complete that it can be used for self-study, as a classroom text, and as a timeless research reference.' P. P. Vaidyanathan, Caltech

'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt

Additional information

Discover techniques for inferring unknown variables and quantities with the second volume of this extraordinary three-volume set.
VOLUME II INFERENCE
Preface xxvii
P.1 Emphasis on Foundations xxvii
P.2 Glimpse of History xxix
P.3 Organization of the Text xxxi
P.4 How to Use the Text xxxiv
P.5 Simulation Datasets xxxvii
P.6 Acknowledgments xl
Notation xlv
27 Mean-Square-Error Inference 1053(39)
27.1 Inference without Observations 1054(3)
27.2 Inference with Observations 1057(3)
27.3 Gaussian Random Variables 1060(12)
27.4 Bias-Variance Relation 1072(10)
27.5 Commentaries and Discussion 1082(6)
Problems 1085(3)
27.A Circular Gaussian Distribution 1088(4)
References 1090(2)
28 Bayesian Inference 1092(29)
28.1 Bayesian Formulation 1092(2)
28.2 Maximum A-Posteriori Inference 1094(3)
28.3 Bayes Classifier 1097(9)
28.4 Logistic Regression Inference 1106(4)
28.5 Discriminative and Generative Models 1110(3)
28.6 Commentaries and Discussion 1113(8)
Problems 1116(3)
References 1119(2)
29 Linear Regression 1121(33)
29.1 Regression Model 1121(7)
29.2 Centering and Augmentation 1128(3)
29.3 Vector Estimation 1131(3)
29.4 Linear Models 1134(2)
29.5 Data Fusion 1136(3)
29.6 Minimum-Variance Unbiased Estimation 1139(4)
29.7 Commentaries and Discussion 1143(8)
Problems 1145(6)
29.A Consistency of Normal Equations 1151(3)
References 1153(1)
30 Kalman Filter 1154(57)
30.1 Uncorrelated Observations 1154(3)
30.2 Innovations Process 1157(2)
30.3 State-Space Model 1159(12)
30.4 Measurement- and Time-Update Forms 1171(6)
30.5 Steady-State Filter 1177(4)
30.6 Smoothing Filters 1181(4)
30.7 Ensemble Kalman Filter 1185(6)
30.8 Nonlinear Filtering 1191(10)
30.9 Commentaries and Discussion 1201(10)
Problems 1204(4)
References 1208(3)
31 Maximum Likelihood 1211(65)
31.1 Problem Formulation 1211(3)
31.2 Gaussian Distribution 1214(9)
31.3 Multinomial Distribution 1223(3)
31.4 Exponential Family of Distributions 1226(3)
31.5 Cramér-Rao Lower Bound 1229(8)
31.6 Model Selection 1237(14)
31.7 Commentaries and Discussion 1251(14)
Problems 1259(6)
31.A Derivation of the Cramér-Rao Bound 1265(1)
31.B Derivation of the AIC Formulation 1266(5)
31.C Derivation of the BIC Formulation 1271(5)
References 1273(3)
32 Expectation Maximization 1276(43)
32.1 Motivation 1276(6)
32.2 Derivation of the EM Algorithm 1282(5)
32.3 Gaussian Mixture Models 1287(15)
32.4 Bernoulli Mixture Models 1302(6)
32.5 Commentaries and Discussion 1308(4)
Problems 1310(2)
32.A Exponential Mixture Models 1312(7)
References 1316(3)
33 Predictive Modeling 1319(33)
33.1 Posterior Distributions 1320(8)
33.2 Laplace Method 1328(5)
33.3 Markov Chain Monte Carlo Method 1333(13)
33.4 Commentaries and Discussion 1346(6)
Problems 1348(1)
References 1349(3)
34 Expectation Propagation 1352(28)
34.1 Factored Representation 1352(5)
34.2 Gaussian Sites 1357(14)
34.3 Exponential Sites 1371(4)
34.4 Assumed Density Filtering 1375(3)
34.5 Commentaries and Discussion 1378(2)
Problems 1378(1)
References 1379(1)
35 Particle Filters 1380(25)
35.1 Data Model 1380(5)
35.2 Importance Sampling 1385(8)
35.3 Particle Filter Implementations 1393(7)
35.4 Commentaries and Discussion 1400(5)
Problems 1401(2)
References 1403(2)
36 Variational Inference 1405(67)
36.1 Evaluating Evidences 1405(6)
36.2 Evaluating Posterior Distributions 1411(2)
36.3 Mean-Field Approximation 1413(27)
36.4 Exponential Conjugate Models 1440(14)
36.5 Maximizing the ELBO 1454(4)
36.6 Stochastic Gradient Solution 1458(3)
36.7 Black Box Inference 1461(6)
36.8 Commentaries and Discussion 1467(5)
Problems 1467(3)
References 1470(2)
37 Latent Dirichlet Allocation 1472(45)
37.1 Generative Model 1473(9)
37.2 Coordinate-Ascent Solution 1482(11)
37.3 Maximizing the ELBO 1493(7)
37.4 Estimating Model Parameters 1500(14)
37.5 Commentaries and Discussion 1514(3)
Problems 1515(1)
References 1515(2)
38 Hidden Markov Models 1517(46)
38.1 Gaussian Mixture Models 1517(5)
38.2 Markov Chains 1522(16)
38.3 Forward-Backward Recursions 1538(9)
38.4 Validation and Prediction Tasks 1547(4)
38.5 Commentaries and Discussion 1551(12)
Problems 1557(3)
References 1560(3)
39 Decoding Hidden Markov Models 1563(46)
39.1 Decoding States 1563(2)
39.2 Decoding Transition Probabilities 1565(4)
39.3 Normalization and Scaling 1569(5)
39.4 Viterbi Algorithm 1574(12)
39.5 EM Algorithm for Dependent Observations 1586(18)
39.6 Commentaries and Discussion 1604(5)
Problems 1605(2)
References 1607(2)
40 Independent Component Analysis 1609(34)
40.1 Problem Formulation 1610(7)
40.2 Maximum-Likelihood Formulation 1617(5)
40.3 Mutual Information Formulation 1622(5)
40.4 Maximum Kurtosis Formulation 1627(7)
40.5 Projection Pursuit 1634(3)
40.6 Commentaries and Discussion 1637(6)
Problems 1638(2)
References 1640(3)
41 Bayesian Networks 1643(39)
41.1 Curse of Dimensionality 1644(3)
41.2 Probabilistic Graphical Models 1647(23)
41.3 Active and Blocked Pathways 1661
41.4 Conditional Independence Relations 1670(7)
41.5 Commentaries and Discussion 1677(5)
Problems 1679(1)
References 1680(2)
42 Inference over Graphs 1682(58)
42.1 Probabilistic Inference 1682(3)
42.2 Inference by Enumeration 1685(6)
42.3 Inference by Variable Elimination 1691(7)
42.4 Chow-Liu Algorithm 1698(7)
42.5 Graphical LASSO 1705(6)
42.6 Learning Graph Parameters 1711(22)
42.7 Commentaries and Discussion 1733(7)
Problems 1735(2)
References 1737(3)
43 Undirected Graphs 1740(67)
43.1 Cliques and Potentials 1740(12)
43.2 Representation Theorem 1752(4)
43.3 Factor Graphs 1756(5)
43.4 Message-Passing Algorithms 1761(32)
43.5 Commentaries and Discussion 1793(6)
Problems 1796(3)
43.A Proof of the Hammersley-Clifford Theorem 1799(4)
43.B Equivalence of Markovian Properties 1803(4)
References 1804(3)
44 Markov Decision Processes 1807(46)
44.1 MDP Model 1807(14)
44.2 Discounted Rewards 1821(4)
44.3 Policy Evaluation 1825(15)
44.4 Linear Function Approximation 1840(8)
44.5 Commentaries and Discussion 1848(5)
Problems 1850(1)
References 1851(2)
45 Value and Policy Iterations 1853(64)
45.1 Value Iteration 1853(13)
45.2 Policy Iteration 1866(13)
45.3 Partially Observable MDP 1879(14)
45.4 Commentaries and Discussion 1893(10)
Problems 1900(3)
45.A Optimal Policy and State-Action Values 1903(2)
45.B Convergence of Value Iteration 1905(1)
45.C Proof of ε-Optimality 1906(1)
45.D Convergence of Policy Iteration 1907(2)
45.E Piecewise Linear Property 1909(1)
45.F Bellman Principle of Optimality 1910(7)
References 1914(3)
46 Temporal Difference Learning 1917(54)
46.1 Model-Based Learning 1918(2)
46.2 Monte Carlo Policy Evaluation 1920(8)
46.3 TD(0) Algorithm 1928(8)
46.4 Look-Ahead TD Algorithm 1936(4)
46.5 TD(λ) Algorithm 1940(9)
46.6 True Online TD(λ) Algorithm 1949(3)
46.7 Off-Policy Learning 1952(5)
46.8 Commentaries and Discussion 1957(2)
Problems 1958(1)
46.A Useful Convergence Result 1959(1)
46.B Convergence of TD(0) Algorithm 1960(3)
46.C Convergence of TD(λ) Algorithm 1963(4)
46.D Equivalence of Offline Implementations 1967(4)
References 1969(2)
47 Q-Learning 1971(37)
47.1 SARSA(0) Algorithm 1971(4)
47.2 Look-Ahead SARSA Algorithm 1975(2)
47.3 SARSA(λ) Algorithm 1977(2)
47.4 Off-Policy Learning 1979(1)
47.5 Optimal Policy Extraction 1980(2)
47.6 Q-Learning Algorithm 1982(3)
47.7 Exploration versus Exploitation 1985(8)
47.8 Q-Learning with Replay Buffer 1993(1)
47.9 Double Q-Learning 1994(5)
47.10 Commentaries and Discussion 1996
Problems 1999(2)
47.A Convergence of SARSA(0) Algorithm 2001(2)
47.B Convergence of Q-Learning Algorithm 2003(5)
References 2005(3)
48 Value Function Approximation 2008(39)
48.1 Stochastic Gradient TD-Learning 2008(10)
48.2 Least-Squares TD-Learning 2018(1)
48.3 Projected Bellman Learning 2019(7)
48.4 SARSA Methods 2026(6)
48.5 Deep Q-Learning 2032(9)
48.6 Commentaries and Discussion 2041(6)
Problems 2043(2)
References 2045(2)
49 Policy Gradient Methods 2047(74)
49.1 Policy Model 2047(1)
49.2 Finite-Difference Method 2048(2)
49.3 Score Function 2050(2)
49.4 Objective Functions 2052(5)
49.5 Policy Gradient Theorem 2057(2)
49.6 Actor-Critic Algorithms 2059(12)
49.7 Natural Gradient Policy 2071(3)
49.8 Trust Region Policy Optimization 2074(19)
49.9 Deep Reinforcement Learning 2093(5)
49.10 Soft Learning 2098(8)
49.11 Commentaries and Discussion 2106(7)
Problems 2109(4)
49.A Proof of Policy Gradient Theorem 2113(4)
49.B Proof of Consistency Theorem 2117(4)
References 2118(3)
Author Index 2121(24)
Subject Index 2145
Ali H. Sayed is Professor and Dean of Engineering at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He has also served as Distinguished Professor and Chairman of Electrical Engineering at the University of California, Los Angeles, USA, and as President of the IEEE Signal Processing Society. He is a member of the US National Academy of Engineering (NAE) and The World Academy of Sciences (TWAS), and a recipient of the 2022 IEEE Fourier Award and the 2020 IEEE Norbert Wiener Society Award. He is a Fellow of the IEEE.