
Inference and Learning from Data: Volume 3: Learning [Hardback]

Ali H. Sayed (École Polytechnique Fédérale de Lausanne)
  • Format: Hardback, 990 pages, height x width x depth: 255x180x40 mm, weight: 1760 g, worked examples or exercises
  • Publication date: 22-Dec-2022
  • Publisher: Cambridge University Press
  • ISBN-10: 100921828X
  • ISBN-13: 9781009218283
  • Price: 104,13 €
  • Delivery time is 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may take longer.
Written in an engaging and rigorous style by a world authority in the field, this is an accessible and comprehensive introduction to learning methods. With downloadable Matlab code and solutions for instructors, this is the ideal introduction for students of data science, machine learning and engineering.

This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This final volume, Learning, builds on the foundational topics established in Volume I to provide a thorough introduction to learning methods, addressing techniques such as least-squares methods, regularization, online learning, kernel methods, feedforward and recurrent neural networks, meta-learning, and adversarial attacks. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including complete solutions for instructors), 280 figures, 100 solved examples, datasets, and downloadable Matlab code. Supported by its sister volumes Foundations and Inference, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, data, and inference.

Reviews

'Inference and Learning from Data is a uniquely comprehensive introduction to the signal processing foundations of modern data science. Lucidly written, with a carefully balanced choice of topics, this textbook is an indispensable resource for both graduate students and data science practitioners, a piece of lasting value.' Helmut Bölcskei, ETH Zurich

'This textbook provides a lucid and magisterial treatment of methods for inference and learning from data, aided by hundreds of solved examples, computer simulations, and over 1000 problems. The material ranges from fundamentals to recent advances in statistical learning theory; variational inference; neural, convolutional, and Bayesian networks; and several other topics. It is aimed at students and practitioners, and can be used for several different introductory and advanced courses.' Thomas Kailath, Stanford University

'A tour de force comprehensive three-volume set for the fast-developing areas of data science, machine learning, and statistical signal processing. With masterful clarity and depth, Sayed covers, connects, and integrates background fundamentals and classical and emerging methods in inference and learning. The books are rich in worked-out examples, exercises, and links to data sets. Commentaries with historical background and contexts for the topics covered in each chapter are a special feature.' Mostafa Kaveh, University of Minnesota

'This is the first of a three-volume series covering from fundamentals to the many various methods in inference and learning from data. Professor Sayed is a prolific author of award-winning books and research papers who has himself contributed significantly to many of the topics included in the series. With his encyclopedic knowledge, his careful attention to detail, and in a very approachable style, this first volume covers the basics of matrix theory, probability and stochastic processes, convex and non-convex optimization, gradient-descent, convergence analysis, and several other advanced topics that will be needed for volume II (Inference) and volume III (Learning). This series, and in particular this volume, will be a must-have for educators, students, researchers, and technologists alike who are pursuing a systematic study, want a quick refresh, or may use it as a helpful reference to learn about these fundamentals.' Jose Moura, Carnegie Mellon University

'Volume I of Inference and Learning from Data provides a foundational treatment of one of the most topical aspects of contemporary signal and information processing, written by one of the most talented expositors in the field. It is a valuable resource both as a textbook for students wishing to enter the field and as a reference work for practicing engineers.' Vincent Poor, Princeton University

'Inference and Learning from Data, Vol. I: Foundations offers an insightful and well-integrated primer with just the right balance of everything that new graduate students need to put their research on a solid footing. It covers foundations in a modern way - emphasizing the most useful concepts, including proofs, and timely topics which are often missing from introductory graduate texts. All in one beautifully written textbook. An impressive feat! I highly recommend it.' Nikolaos Sidiropoulos, University of Virginia

'This exceptional encyclopedic work on learning from data will be the bible of the field for many years to come. Totaling more than 3000 pages, this three-volume book covers in an exhaustive and timely manner the topic of data science, which has become critically important to many areas and lies at the basis of modern signal processing, machine learning, artificial intelligence, and their numerous applications. Written by an authority in the field, the book is really unique in scale and breadth, and it will be an invaluable source of information for students, researchers, and practitioners alike.' Peter Stoica, Uppsala University

'Very meticulous, thorough, and timely. This volume is largely focused on optimization, which is so important in the modern-day world of data science, signal processing, and machine learning. The book is classical and modern at the same time - many classical topics are nicely linked to modern topics of current interest. All the necessary mathematical background is covered. Professor Sayed is one of the foremost researchers and educators in the field and the writing style is unhurried and clear with many examples, truly reflecting the towering scholar that he is. This volume is so complete that it can be used for self-study, as a classroom text, and as a timeless research reference.' P. P. Vaidyanathan, Caltech

'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt

Additional information

Discover data-driven learning methods with the third volume of this extraordinary three-volume set.
VOLUME III LEARNING
Preface xxvii
P.1 Emphasis on Foundations xxvii
P.2 Glimpse of History xxix
P.3 Organization of the Text xxxi
P.4 How to Use the Text xxxiv
P.5 Simulation Datasets xxxvii
P.6 Acknowledgments xl
Notation xlv
50 Least-Squares Problems 2165(56)
50.1 Motivation 2165(5)
50.2 Normal Equations 2170(17)
50.3 Recursive Least-Squares 2187(8)
50.4 Implicit Bias 2195(2)
50.5 Commentaries and Discussion 2197(13)
Problems 2202(8)
50.A Minimum-Norm Solution 2210(1)
50.B Equivalence in Linear Estimation 2211(1)
50.C Extended Least-Squares 2212(9)
References 2217(4)
51 Regularization 2221(39)
51.1 Three Challenges 2222(3)
51.2 ℓ2-Regularization 2225(5)
51.3 ℓ1-Regularization 2230(4)
51.4 Soft Thresholding 2234(8)
51.5 Commentaries and Discussion 2242(8)
Problems 2245(5)
51.A Constrained Formulations for Regularization 2250(3)
51.B Expression for LASSO Solution 2253(7)
References 2257(3)
52 Nearest-Neighbor Rule 2260(30)
52.1 Bayes Classifier 2262(3)
52.2 k-NN Classifier 2265(3)
52.3 Performance Guarantee 2268(2)
52.4 k-Means Algorithm 2270(9)
52.5 Commentaries and Discussion 2279(5)
Problems 2282(2)
52.A Performance of the NN Classifier 2284(6)
References 2287(3)
53 Self-Organizing Maps 2290(23)
53.1 Grid Arrangements 2290(3)
53.2 Training Algorithm 2293(9)
53.3 Visualization 2302(8)
53.4 Commentaries and Discussion 2310(3)
Problems 2310(1)
References 2311(2)
54 Decision Trees 2313(28)
54.1 Trees and Attributes 2313(4)
54.2 Selecting Attributes 2317(10)
54.3 Constructing a Tree 2327(8)
54.4 Commentaries and Discussion 2335(6)
Problems 2337(1)
References 2338(3)
55 Naive Bayes Classifier 2341(16)
55.1 Independence Condition 2341(2)
55.2 Modeling the Conditional Distribution 2343(1)
55.3 Estimating the Priors 2344(7)
55.4 Gaussian Naive Classifier 2351(1)
55.5 Commentaries and Discussion 2352(5)
Problems 2354(2)
References 2356(1)
56 Linear Discriminant Analysis 2357(26)
56.1 Discriminant Functions 2357(3)
56.2 Linear Discriminant Algorithm 2360(2)
56.3 Minimum Distance Classifier 2362(3)
56.4 Fisher Discriminant Analysis 2365(13)
56.5 Commentaries and Discussion 2378(5)
Problems 2379(2)
References 2381(2)
57 Principal Component Analysis 2383(41)
57.1 Data Preprocessing 2383(2)
57.2 Dimensionality Reduction 2385(11)
57.3 Subspace Interpretations 2396(3)
57.4 Sparse PCA 2399(5)
57.5 Probabilistic PCA 2404(7)
57.6 Commentaries and Discussion 2411(6)
Problems 2414(3)
57.A Maximum Likelihood Solution 2417(4)
57.B Alternative Optimization Problem 2421(3)
References 2422(2)
58 Dictionary Learning 2424(33)
58.1 Learning Under Regularization 2425(5)
58.2 Learning Under Constraints 2430(2)
58.3 K-SVD Approach 2432(3)
58.4 Nonnegative Matrix Factorization 2435(8)
58.5 Commentaries and Discussion 2443(5)
Problems 2446(2)
58.A Orthogonal Matching Pursuit 2448(9)
References 2454(3)
59 Logistic Regression 2457(42)
59.1 Logistic Model 2457(2)
59.2 Logistic Empirical Risk 2459(5)
59.3 Multiclass Classification 2464(7)
59.4 Active Learning 2471(5)
59.5 Domain Adaptation 2476(8)
59.6 Commentaries and Discussion 2484(8)
Problems 2488(4)
59.A Generalized Linear Models 2492(7)
References 2496(3)
60 Perceptron 2499(31)
60.1 Linear Separability 2499(2)
60.2 Perceptron Empirical Risk 2501(6)
60.3 Termination in Finite Steps 2507(2)
60.4 Pocket Perceptron 2509(4)
60.5 Commentaries and Discussion 2513(7)
Problems 2517(3)
60.A Counting Theorem 2520(6)
60.B Boolean Functions 2526(4)
References 2528(2)
61 Support Vector Machines 2530(27)
61.1 SVM Empirical Risk 2530(11)
61.2 Convex Quadratic Program 2541(5)
61.3 Cross Validation 2546(5)
61.4 Commentaries and Discussion 2551(6)
Problems 2553(1)
References 2554(3)
62 Bagging and Boosting 2557(30)
62.1 Bagging Classifiers 2557(4)
62.2 AdaBoost Classifier 2561(11)
62.3 Gradient Boosting 2572(8)
62.4 Commentaries and Discussion 2580(7)
Problems 2581(3)
References 2584(3)
63 Kernel Methods 2587(63)
63.1 Motivation 2587(3)
63.2 Nonlinear Mappings 2590(2)
63.3 Polynomial and Gaussian Kernels 2592(3)
63.4 Kernel-Based Perceptron 2595(9)
63.5 Kernel-Based SVM 2604(6)
63.6 Kernel-Based Ridge Regression 2610(3)
63.7 Kernel-Based Learning 2613(5)
63.8 Kernel PCA 2618(5)
63.9 Inference under Gaussian Processes 2623(11)
63.10 Commentaries and Discussion 2634(16)
Problems 2640(6)
References 2646(4)
64 Generalization Theory 2650(65)
64.1 Curse of Dimensionality 2650(4)
64.2 Empirical Risk Minimization 2654(3)
64.3 Generalization Ability 2657(5)
64.4 VC Dimension 2662(1)
64.5 Bias-Variance Trade-off 2663(4)
64.6 Surrogate Risk Functions 2667(5)
64.7 Commentaries and Discussion 2672(14)
Problems 2679(7)
64.A VC Dimension for Linear Classifiers 2686(2)
64.B Sauer Lemma 2688(6)
64.C Vapnik-Chervonenkis Bound 2694(7)
64.D Rademacher Complexity 2701(14)
References 2711(4)
65 Feedforward Neural Networks 2715(82)
65.1 Activation Functions 2716(5)
65.2 Feedforward Networks 2721(7)
65.3 Regression and Classification 2728(3)
65.4 Calculation of Gradient Vectors 2731(8)
65.5 Backpropagation Algorithm 2739(11)
65.6 Dropout Strategy 2750(4)
65.7 Regularized Cross-Entropy Risk 2754(14)
65.8 Slowdown in Learning 2768(1)
65.9 Batch Normalization 2769(7)
65.10 Commentaries and Discussion 2776(11)
Problems 2781(6)
65.A Derivation of Batch Normalization Algorithm 2787(10)
References 2792(5)
66 Deep Belief Networks 2797(41)
66.1 Pre-Training Using Stacked Autoencoders 2797(5)
66.2 Restricted Boltzmann Machines 2802(7)
66.3 Contrastive Divergence 2809(11)
66.4 Pre-Training Using Stacked RBMs 2820(3)
66.5 Deep Generative Model 2823(7)
66.6 Commentaries and Discussion 2830(8)
Problems 2834(2)
References 2836(2)
67 Convolutional Networks 2838(67)
67.1 Correlation Layers 2839(21)
67.2 Pooling 2860(9)
67.3 Full Network 2869(7)
67.4 Training Algorithm 2876(9)
67.5 Commentaries and Discussion 2885(3)
Problems 2887(1)
67.A Derivation of Training Algorithm 2888(17)
References 2903(2)
68 Generative Networks 2905(62)
68.1 Variational Autoencoders 2905(8)
68.2 Training Variational Autoencoders 2913(17)
68.3 Conditional Variational Autoencoders 2930(5)
68.4 Generative Adversarial Networks 2935(8)
68.5 Training of GANs 2943(13)
68.6 Conditional GANs 2956(4)
68.7 Commentaries and Discussion 2960(7)
Problems 2963(1)
References 2964(3)
69 Recurrent Networks 2967(75)
69.1 Recurrent Neural Networks 2967(6)
69.2 Backpropagation Through Time 2973(22)
69.3 Bidirectional Recurrent Networks 2995(7)
69.4 Vanishing and Exploding Gradients 3002(2)
69.5 Long Short-Term Memory Networks 3004(22)
69.6 Bidirectional LSTMs 3026(8)
69.7 Gated Recurrent Units 3034(2)
69.8 Commentaries and Discussion 3036(6)
Problems 3037(3)
References 3040(2)
70 Explainable Learning 3042(23)
70.1 Classifier Model 3042(4)
70.2 Sensitivity Analysis 3046(3)
70.3 Gradient × Input Analysis 3049(1)
70.4 Relevance Analysis 3050(10)
70.5 Commentaries and Discussion 3060(5)
Problems 3061(1)
References 3062(3)
71 Adversarial Attacks 3065(34)
71.1 Types of Attacks 3066(4)
71.2 Fast Gradient Sign Method 3070(5)
71.3 Jacobian Saliency Map Approach 3075(3)
71.4 DeepFool Technique 3078(10)
71.5 Black-Box Attacks 3088(3)
71.6 Defense Mechanisms 3091(2)
71.7 Commentaries and Discussion 3093(6)
Problems 3095(1)
References 3096(3)
72 Meta Learning 3099(50)
72.1 Network Model 3099(2)
72.2 Siamese Networks 3101(11)
72.3 Relation Networks 3112(6)
72.4 Exploration Models 3118(18)
72.5 Commentaries and Discussion 3136(2)
Problems 3136(2)
72.A Matching Networks 3138(6)
72.B Prototypical Networks 3144(5)
References 3146(3)
Author Index 3149(24)
Subject Index 3173
Ali H. Sayed is Professor and Dean of Engineering at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He has also served as Distinguished Professor and Chairman of Electrical Engineering at the University of California, Los Angeles, USA, and as President of the IEEE Signal Processing Society. He is a member of the US National Academy of Engineering (NAE) and The World Academy of Sciences (TWAS), and a recipient of the 2022 IEEE Fourier Award and the 2020 Norbert Wiener Society Award of the IEEE Signal Processing Society. He is a Fellow of the IEEE.