
E-book: Perturbations, Optimization, and Statistics

Edited by Tamir Hazan (Technion -- Israel Institute of Technology), Daniel Tarlow (Microsoft Research), George Papandreou (Google, Inc.)
  • Format: PDF+DRM
  • Price: 140,26 €*
  • * This is the final price, i.e., no additional discounts are applied.
  • This e-book is intended for personal use only. E-books cannot be returned, and money paid for purchased e-books is not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you must install free software to unlock and read it. To read this e-book, you need to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

In nearly all machine learning, decisions must be made given current knowledge. Surprisingly, making what is believed to be the best decision is not always the best strategy, even in a supervised learning setting. An emerging body of work on learning under different rules applies perturbations to decision and learning procedures. These methods provide simple and highly efficient learning rules with improved theoretical guarantees. This book describes perturbation-based methods developed in machine learning to augment novel optimization methods with strong statistical guarantees, offering readers a state-of-the-art overview. Chapters address recent modeling ideas that have arisen within the perturbations framework, including Perturb & MAP, herding, and the use of neural networks to map generic noise to a distribution over highly structured data. They describe new learning procedures for perturbation models, including an improved EM algorithm and a learning algorithm that matches moments of model samples to moments of data. They examine the relation of perturbation models to their traditional counterparts, with one chapter showing that the perturbations viewpoint can lead to new algorithms in the traditional setting. And they consider perturbation-based regularization in neural networks, offering a more complete understanding of dropout and studying perturbations in the context of deep neural networks.
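
The common thread is easiest to see in its simplest form, the Gumbel-max trick on which Perturb & MAP builds: add independent Gumbel noise to the log-potentials of a discrete distribution and return the argmax, and the result is an exact sample from the corresponding Gibbs distribution. The sketch below is a minimal illustration of this trick in Python with NumPy, written for this description rather than taken from the book; the toy potentials are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unnormalized log-potentials of a toy 4-state model (arbitrary values).
    phi = np.array([1.0, 0.5, -0.3, 2.0])
    gibbs = np.exp(phi) / np.exp(phi).sum()  # the target Gibbs distribution

    def perturb_and_map(phi, rng):
        """Perturb each potential with i.i.d. Gumbel noise, then take the MAP state."""
        gumbel = rng.gumbel(loc=0.0, scale=1.0, size=phi.shape)
        return np.argmax(phi + gumbel)

    # Frequencies of the perturbed MAP states converge to the Gibbs probabilities.
    n = 100_000
    counts = np.bincount([perturb_and_map(phi, rng) for _ in range(n)],
                         minlength=phi.size)
    print("Gibbs probabilities:", np.round(gibbs, 3))
    print("Empirical frequency:", np.round(counts / n, 3))

In the structured models studied in the chapters below, the noise is instead applied to low-dimensional potentials and the argmax becomes a combinatorial MAP solver, which is where the efficiency gains described above come from.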
Preface ix
1 Introduction  1(16)
Tamir Hazan, George Papandreou, Daniel Tarlow
1.1 Scope  1(3)
1.2 Regularization  4(5)
1.3 Modeling  9(3)
1.4 Roadmap  12(2)
1.5 References  14(3)
2 Perturb-and-MAP Random Fields  17(28)
George Papandreou, Alan L. Yuille
2.1 Energy-Based Models: Deterministic vs. Probabilistic Approaches  19(4)
2.2 Perturb-and-MAP for Gaussian and Sparse Continuous MRFs  23(5)
2.3 Perturb-and-MAP for MRFs with Discrete Labels  28(7)
2.4 On the Representation Power of the Perturb-and-MAP Model  35(3)
2.5 Related Work and Recent Developments  38(2)
2.6 Discussion  40(1)
2.7 References  41(4)
3 Factorizing Shortest Paths with Randomized Optimum Models  45(28)
Daniel Tarlow, Alexander Gaunt, Ryan Adams, Richard S. Zemel
3.1 Introduction  45(2)
3.2 Building Structured Models: Design Considerations  47(1)
3.3 Randomized Optimum Models (RandOMs)  48(6)
3.4 Learning RandOMs  54(2)
3.5 RandOMs for Image Registration  56(1)
3.6 Shortest Path Factorization  56(2)
3.7 Shortest Path Factorization with RandOMs  58(5)
3.8 Experiments  63(5)
3.9 Related Work  68(2)
3.10 Discussion  70(1)
3.11 References  70(3)
4 Herding as a Learning System with Edge-of-Chaos Dynamics  73(54)
Yutian Chen, Max Welling
4.1 Introduction  74(3)
4.2 Herding Model Parameters  77(22)
4.3 Generalized Herding  99(10)
4.4 Experiments  109(9)
4.5 Summary  118(2)
4.6 Conclusion  120(3)
4.7 References  123(4)
5 Learning Maximum A-Posteriori Perturbation Models  127(34)
Andreea Gane, Tamir Hazan, Tommi Jaakkola
5.1 Introduction  128(2)
5.2 Background and Notation  130(1)
5.3 Expressive Power of Perturbation Models  131(1)
5.4 Higher Order Dependencies  132(2)
5.5 Markov Properties and Perturbation Models  134(2)
5.6 Conditional Distributions  136(5)
5.7 Learning Perturbation Models  141(8)
5.8 Empirical Results  149(3)
5.9 Perturbation Models and Stability  152(3)
5.10 Related Work  155(1)
5.11 References  156(5)
6 On the Expected Value of Random Maximum A-Posteriori Perturbations  161(32)
Tamir Hazan, Tommi Jaakkola
6.1 Introduction  161(3)
6.2 Inference and Random Perturbations  164(5)
6.3 Low-Dimensional Perturbations  169(13)
6.4 Empirical Evaluation  182(6)
6.5 References  188(5)
7 A Poisson Process Model for Monte Carlo  193(40)
Chris J. Maddison
7.1 Introduction  193(3)
7.2 Poisson Processes  196(7)
7.3 Exponential Races  203(7)
7.4 Gumbel Processes  210(6)
7.5 Monte Carlo Methods That Use Bounds  216(10)
7.6 Conclusion  226(4)
7.7 References  230(3)
8 Perturbation Techniques in Online Learning and Optimization  233(32)
Jacob Abernethy, Chansoo Lee, Ambuj Tewari
8.1 Introduction  233(2)
8.2 Preliminaries  235(2)
8.3 Gradient-Based Prediction Algorithm  237(8)
8.4 Generic Bounds  245(2)
8.5 Experts Setting  247(5)
8.6 Euclidean Balls Setting  252(2)
8.7 The Multi-Armed Bandit Setting  254(8)
8.8 References  262(3)
9 Probabilistic Inference by Hashing and Optimization  265(24)
Stefano Ermon
9.1 Introduction  265(3)
9.2 Problem Statement and Assumptions  268(2)
9.3 Approximate Model Counting via Randomized Hashing  270(4)
9.4 Probabilistic Models and Approximate Inference: The WISH Algorithm  274(5)
9.5 Optimization Subject to Parity Constraints  279(2)
9.6 Applications  281(1)
9.7 Open Problems and Research Challenges  282(2)
9.8 Conclusion  284(1)
9.9 References  285(4)
10 Perturbation Models and PAC-Bayesian Generalization Bounds  289(22)
Joseph Keshet, Subhransu Maji, Tamir Hazan, Tommi Jaakkola
10.1 Introduction  290(2)
10.2 Background  292(2)
10.3 PAC-Bayesian Generalization Bounds  294(2)
10.4 Algorithms  296(2)
10.5 The Bayesian Perspective  298(3)
10.6 Approximate Inference  301(1)
10.7 Empirical Evaluation  302(4)
10.8 Discussion  306(1)
10.9 References  307(4)
11 Adversarial Perturbations of Deep Neural Networks  311(32)
David Warde-Farley, Ian Goodfellow
11.1 Introduction  312(1)
11.2 Adversarial Examples  312(17)
11.3 Adversarial Training  329(1)
11.4 Generative Adversarial Networks  330(8)
11.5 Discussion  338(1)
11.6 References  339(4)
12 Data Augmentation via Lévy Processes  343(32)
Stefan Wager, William Fithian, Percy Liang
12.1 Introduction  343(6)
12.2 Lévy Thinning  349(12)
12.3 Examples  361(4)
12.4 Simulation Experiments  365(3)
12.5 Discussion  368(1)
12.6 Appendix: Proof of Theorem 12.4  369(2)
12.7 References  371(4)
13 Bilu-Linial Stability  375
Konstantin Makarychev, Yury Makarychev
13.1 Introduction  375(5)
13.2 Stable Instances of Graph Partitioning Problems  380(11)
13.3 Stable Instances of Clustering Problems  391(9)
13.4 References  400