
Bayesian Data Analysis for Animal Scientists: The Basics 2017 ed. [Hardback]

  • Format: Hardback, XVIII + 275 pages, height x width: 235x155 mm, weight: 6215 g, 160 illustrations (151 in colour, 9 black and white)
  • Publication date: 14-Sep-2017
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319542737
  • ISBN-13: 9783319542737
  • Price: 109.38 €*
  • * this is the final price, i.e., no additional discounts apply
  • List price: 128.69 €
  • Save 15%
In this book, we provide an accessible introduction to Bayesian inference using MCMC techniques, making most topics intuitively reasonable and deferring the more complicated matters to the appendices. Biologists and agricultural researchers do not normally have a background in Bayesian statistics, and they often have difficulties following the technical books that introduce Bayesian techniques. The difficulties arise from the way of making inferences, which is completely different in the Bayesian school, and from complicated matters such as the MCMC numerical methods. We compare both schools, classical and Bayesian, underlining the advantages of Bayesian solutions, and proposing inferences based on relevant differences, guaranteed values, probabilities of similarity or the use of ratios. We also give a scope of complex problems that can be solved using Bayesian statistics, and we end the book explaining the difficulties associated with model choice and the use of small samples. The book has a practical orientation and uses simple models to introduce the reader to this increasingly popular school of inference.
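To give a flavour of the MCMC approach the blurb describes, here is a minimal sketch of a Gibbs sampler for a single normal sample (the kind of simple model the book builds on, e.g. Chapter 5's "Baby Model"). The data are simulated and purely illustrative, and the sketch assumes a flat prior on the mean and a p(σ²) ∝ 1/σ² prior on the variance, which give the standard normal and inverse-gamma conditional distributions; it is not code from the book itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: any numeric sample would do (e.g. weights of 50 animals).
y = rng.normal(loc=10.0, scale=2.0, size=50)
n = len(y)

n_iter, burn_in = 5000, 1000
sigma2 = y.var()                      # starting value for the variance
mu_samples, s2_samples = [], []

for it in range(n_iter):
    # mu | sigma2, y  ~  Normal(ybar, sigma2 / n)   (flat prior on mu)
    mu = rng.normal(y.mean(), np.sqrt(sigma2 / n))
    # sigma2 | mu, y  ~  Inverse-Gamma(n/2, SS/2)   (prior p(sigma2) ∝ 1/sigma2)
    ss = np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(shape=n / 2, scale=2.0 / ss)
    if it >= burn_in:                 # discard the burn-in period
        mu_samples.append(mu)
        s2_samples.append(sigma2)

# Inferences come straight from the samples of the marginal posteriors:
# a point estimate (posterior mean) and a 95% credibility interval.
print("mean:", np.mean(mu_samples), np.percentile(mu_samples, [2.5, 97.5]))
print("variance:", np.mean(s2_samples), np.percentile(s2_samples, [2.5, 97.5]))
```

The point of the sketch is the one the book stresses: after sampling, every inference (point estimates, credibility intervals, probabilities of a difference being relevant) is just a summary of the chain, with no asymptotic approximations.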
1 Do We Understand Classic Statistics? 1(32)
1.1 Historical Introduction 1(3)
1.2 Test of Hypothesis 4(9)
1.2.1 The Procedure 4(3)
1.2.2 Common Misinterpretations 7(6)
1.3 Standard Errors and Confidence Intervals 13(3)
1.3.1 Definition of Standard Error and Confidence Interval 13(1)
1.3.2 Common Misinterpretations 14(2)
1.4 Bias and Risk of an Estimator 16(2)
1.4.1 Unbiased Estimators 16(1)
1.4.2 Common Misinterpretations 16(2)
1.5 Fixed and Random Effects 18(4)
1.5.1 Definition of `Fixed' and `Random' Effects 18(1)
1.5.2 Shrinkage of Random Effects Estimates 19(1)
1.5.3 Bias, Variance and Risk of an Estimator when the Effect is Fixed or Random 20(1)
1.5.4 Common Misinterpretations 21(1)
1.6 Likelihood 22(11)
1.6.1 Definition 22(2)
1.6.2 The Method of Maximum Likelihood 24(1)
1.6.3 Common Misinterpretations 25(1)
Appendix 1.1 26(1)
Appendix 1.2 27(1)
Appendix 1.3 28(1)
Appendix 1.4 29(1)
References 30(3)
2 The Bayesian Choice 33(34)
2.1 Bayesian Inference 33(9)
2.1.1 The Foundations of Bayesian Inference 33(1)
2.1.2 Bayes Theorem 34(2)
2.1.3 Prior Information 36(4)
2.1.4 Probability Density 40(2)
2.2 Features of Bayesian Inference 42(9)
2.2.1 Point Estimates: Mean, Median and Mode 42(2)
2.2.2 Credibility Intervals 44(5)
2.2.3 Marginalisation 49(2)
2.3 Test of Hypothesis 51(3)
2.3.1 Model Choice 51(1)
2.3.2 Bayes Factors 52(1)
2.3.3 Model Averaging 53(1)
2.4 Common Misinterpretations 54(3)
2.5 Bayesian Inference in Practice 57(4)
2.6 Advantages of Bayesian Inference 61(6)
Appendix 2.1 62(1)
Appendix 2.2 63(1)
Appendix 2.3 63(1)
References 64(3)
3 Posterior Distributions 67(18)
3.1 Notation 67(1)
3.2 Probability Density Function 68(3)
3.2.1 Definition 68(1)
3.2.2 Transformation of Random Variables 69(2)
3.3 Features of a Distribution 71(1)
3.3.1 Mean 71(1)
3.3.2 Median 71(1)
3.3.3 Mode 72(1)
3.3.4 Credibility Intervals 72(1)
3.4 Conditional Distributions 72(4)
3.4.1 Bayes Theorem 72(1)
3.4.2 Conditional Distribution of the Sample of a Normal Distribution 73(1)
3.4.3 Conditional Posterior Distribution of the Variance of a Normal Distribution 73(2)
3.4.4 Conditional Posterior Distribution of the Mean of a Normal Distribution 75(1)
3.5 Marginal Distributions 76(9)
3.5.1 Definition 76(1)
3.5.2 Marginal Posterior Distribution of the Variance of a Normal Distribution 77(1)
3.5.3 Marginal Posterior Distribution of the Mean of a Normal Distribution 78(2)
Appendix 3.1 80(1)
Appendix 3.2 81(1)
Appendix 3.3 82(1)
Appendix 3.4 83(1)
Reference 84(1)
4 MCMC 85(18)
4.1 Samples of Marginal Posterior Distributions 86(5)
4.1.1 Taking Samples of Marginal Posterior Distributions 86(1)
4.1.2 Making Inferences from Samples of Marginal Posterior Distributions 87(4)
4.2 Gibbs Sampling 91(7)
4.2.1 How It Works 91(1)
4.2.2 Why It Works 92(2)
4.2.3 When It Works 94(1)
4.2.4 Gibbs Sampling Features 95(3)
4.3 Other MCMC Methods 98(5)
4.3.1 Acceptance-Rejection 98(2)
4.3.2 Metropolis-Hastings 100(1)
Appendix: Software for MCMC 101(1)
References 102(1)
5 The Baby Model 103(16)
5.1 The Model 103(1)
5.2 Analytical Solutions 104(5)
5.2.1 Marginal Posterior Density Function of the Mean and Variance 104(1)
5.2.2 Joint Posterior Density Function of the Mean and Variance 105(1)
5.2.3 Inferences 105(4)
5.3 Working with MCMC 109(10)
5.3.1 The Process 109(1)
5.3.2 Using Flat Priors 109(3)
5.3.3 Using Vague Informative Priors 112(2)
5.3.4 Common Misinterpretations 114(1)
Appendix 5.1 115(1)
Appendix 5.2 116(1)
Appendix 5.3 117(1)
References 118(1)
6 The Linear Model: I. The `Fixed Effects' Model 119(18)
6.1 The `Fixed Effects' Model 119(8)
6.1.1 The Model 119(5)
6.1.2 Example 124(1)
6.1.3 Common Misinterpretations 125(2)
6.2 Marginal Posterior Distributions via MCMC Using Flat Priors 127(3)
6.2.1 Joint Posterior Distribution 127(1)
6.2.2 Conditional Distributions 128(1)
6.2.3 Gibbs Sampling 129(1)
6.3 Marginal Posterior Distributions via MCMC Using Vague Informative Priors 130(2)
6.3.1 Vague Informative Priors 130(1)
6.3.2 Conditional Distributions 131(1)
6.4 Least Squares as a Bayesian Estimator 132(5)
Appendix 6.1 133(1)
Appendix 6.2 134(1)
References 135(2)
7 The Linear Model: II. The `Mixed' Model 137(30)
7.1 The Mixed Model with Repeated Records 137(8)
7.1.1 The Model 137(4)
7.1.2 Common Misinterpretations 141(1)
7.1.3 Marginal Posterior Distributions via MCMC 142(2)
7.1.4 Gibbs Sampling 144(1)
7.2 The Genetic Animal Model 145(9)
7.2.1 The Model 145(5)
7.2.2 Marginal Posterior Distributions via MCMC 150(4)
7.3 Bayesian Interpretation of BLUP and REML 154(4)
7.3.1 BLUP in a Frequentist Context 154(2)
7.3.2 BLUP in a Bayesian Context 156(2)
7.3.3 REML as a Bayesian Estimator 158(1)
7.4 The Multitrait Model 158(9)
7.4.1 The Model 158(2)
7.4.2 Data Augmentation 160(3)
7.4.3 More Complex Models 163(1)
Appendix 7.1 164(1)
References 165(2)
8 A Scope of the Possibilities of Bayesian Inference + MCMC 167(26)
8.1 Nested Models: Examples in Growth Curves 168(6)
8.1.1 The Model 168(3)
8.1.2 Marginal Posterior Distributions 171(2)
8.1.3 More Complex Models 173(1)
8.2 Modelling Residuals: Examples in Canalising Selection 174(4)
8.2.1 The Model 175(1)
8.2.2 Marginal Posterior Distributions 176(1)
8.2.3 More Complex Models 177(1)
8.3 Modelling Priors: Examples in Genomic Selection 178(15)
8.3.1 The Model 179(4)
8.3.2 RR-BLUP 183(2)
8.3.3 Bayes A 185(2)
8.3.4 Bayes B 187(1)
8.3.5 Bayes C and Bayes Cπ 188(1)
8.3.6 Bayes L (Bayesian Lasso) 188(1)
8.3.7 Bayesian Alphabet in Practice 189(1)
Appendix 8.1 190(1)
References 191(2)
9 Prior Information 193(20)
9.1 Exact Prior Information 193(5)
9.1.1 Prior Information 193(2)
9.1.2 Posterior Probabilities with Exact Prior Information 195(2)
9.1.3 Influence of Prior Information in Posterior Probabilities 197(1)
9.2 Vague Prior Information 198(5)
9.2.1 A Vague Definition of Vague Prior Information 198(2)
9.2.2 Examples of the Use of Vague Prior Information 200(3)
9.3 No Prior Information 203(4)
9.3.1 Flat Priors 204(1)
9.3.2 Jeffreys Prior 205(1)
9.3.3 Bernardo's `Reference' Priors 206(1)
9.4 Improper Priors 207(1)
9.5 The Achilles Heel of Bayesian Inference 208(5)
Appendix 9.1 209(1)
Appendix 9.2 210(1)
References 210(3)
10 Model Selection 213(58)
10.1 Model Selection 213(8)
10.1.1 The Purpose of Model Selection 213(4)
10.1.2 Fitting Data vs Predicting New Records 217(1)
10.1.3 Common Misinterpretations 218(3)
10.2 Hypothesis Tests 221(5)
10.2.1 Likelihood Ratio Test and Other Frequentist Tests 221(2)
10.2.2 Bayesian Model Choice 223(3)
10.3 The Concept of Information 226(7)
10.3.1 Fisher's Information 227(4)
10.3.2 Shannon Information and Entropy 231(1)
10.3.3 Kullback-Leibler Information 232(1)
10.4 Model Selection Criteria 233(38)
10.4.1 Akaike Information Criterion (AIC) 233(4)
10.4.2 Deviance Information Criterion (DIC) 237(2)
10.4.3 Bayesian Information Criterion (BIC) 239(2)
10.4.4 Model Choice in Practice 241(1)
Appendix 10.1 242(1)
Appendix 10.2 243(1)
Appendix 10.3 244(1)
Appendix 10.4 245(1)
References 246(1)
Appendix: The Bayesian Perspective---Three New Dialogues Between Hylas and Philonous 247(18)
References 265(6)
Index 271
Agustin Blasco

Professor of Animal Breeding and Genetics

Visiting scientist at ABRO (Edinburgh), INRA (Jouy en Josas) and FAO (Rome). He was President of the World Rabbit Science Association and editor-in-chief of the journal World Rabbit Science. His career has focused on the genetics of litter size components and the genetics of meat quality in rabbits and pigs. He has published more than one hundred papers in international journals and has been an invited speaker several times at the European Association for Animal Production and at the World Congress on Genetics Applied to Livestock Production, among others. He was Chapman Lecturer at the University of Wisconsin and has taught courses on Bayesian inference at the universities of Valencia (Spain), Edinburgh (UK), Wisconsin (USA), Padua (Italy), Sao Paulo, Lavras (Brazil), Nacional (Uruguay), Lomas (Argentina) and at INRA in Toulouse (France).