
E-book: Evidence-Based Software Engineering and Systematic Reviews

Barbara Ann Kitchenham (Keele University, Staffordshire, UK), Pearl Brereton (Keele University, Staffordshire, UK), David Budgen (School of Engineering & Computing Sciences, Durham University, UK)
  • Format: PDF+DRM
  • Price: €53.84*
  • * This is the final price, i.e. no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned and no refunds are given for purchased e-books.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this e-book in encrypted form, which means you must install free software to unlock and read it. To read this e-book you will need to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free application designed specifically for e-books; it is not the same as Adobe Reader, which may already be on your computer).

    This e-book cannot be read on an Amazon Kindle.

In the decade since the idea of adapting the evidence-based paradigm for software engineering was first proposed, it has become a major tool of empirical software engineering. Evidence-Based Software Engineering and Systematic Reviews provides a clear introduction to the use of an evidence-based model for software engineering research and practice.

The book explains the roles of primary studies (experiments, surveys, case studies) as elements of an over-arching evidence model, rather than as disjointed elements in the empirical spectrum. Supplying readers with a clear understanding of empirical software engineering best practices, it provides up-to-date guidance on how to conduct secondary studies in software engineering, replacing the existing 2004 and 2007 technical reports.

The book is divided into three parts. The first part discusses the nature of evidence and the evidence-based practices centered on a systematic review, both in general and as applying to software engineering. The second part examines the different elements that provide inputs to a systematic review (usually considered as forming a secondary study), especially the main forms of primary empirical study currently used in software engineering.

The final part provides practical guidance on how to conduct systematic reviews (the guidelines), drawing together accumulated experiences to guide researchers and students in planning and conducting their own studies. The book includes an extensive glossary and an appendix that provides a catalogue of reviews that may be useful for practice and teaching.
List of Figures xv
List of Tables xvii
Preface xix
Glossary xxiii
I Evidence-Based Practices in Software Engineering 1(194)
1 The Evidence-Based Paradigm 3(14)
1.1 What do we mean by evidence? 4(3)
1.2 Emergence of the evidence-based movement 7(3)
1.3 The systematic review 10(4)
1.4 Some limitations of an evidence-based view of the world 14(3)
2 Evidence-Based Software Engineering (EBSE) 17(14)
2.1 Empirical knowledge before EBSE 17(2)
2.2 From opinion to evidence 19(4)
2.3 Organising evidence-based software engineering practices 23(2)
2.4 Software engineering characteristics 25(2)
2.5 Limitations of evidence-based practices in software engineering 27(4)
2.5.1 Constraints from software engineering 27(1)
2.5.2 Threats to validity 28(3)
3 Using Systematic Reviews in Software Engineering 31(8)
3.1 Systematic reviews 32(2)
3.2 Mapping studies 34(3)
3.3 Meta-analysis 37(2)
4 Planning a Systematic Review 39(16)
4.1 Establishing the need for a review 40(3)
4.2 Managing the review project 43(1)
4.3 Specifying the research questions 43(5)
4.4 Developing the protocol 48(4)
4.4.1 Background 49(1)
4.4.2 Research question(s) 49(1)
4.4.3 Search strategy 49(1)
4.4.4 Study selection 50(1)
4.4.5 Assessing the quality of the primary studies 50(1)
4.4.6 Data extraction 51(1)
4.4.7 Data synthesis and aggregation strategy 51(1)
4.4.8 Limitations 52(1)
4.4.9 Reporting 52(1)
4.4.10 Review management 52(1)
4.5 Validating the protocol 52(3)
5 Searching for Primary Studies 55(12)
5.1 Completeness 56(3)
5.2 Validating the search strategy 59(3)
5.3 Methods of searching 62(2)
5.4 Examples of search strategies 64(3)
6 Study Selection 67(12)
6.1 Selection criteria 67(2)
6.2 Selection process 69(2)
6.3 The relationship between papers and studies 71(1)
6.4 Examples of selection criteria and process 72(7)
7 Assessing Study Quality 79(14)
7.1 Why assess quality? 79(3)
7.2 Quality assessment criteria 82(4)
7.2.1 Study quality checklists 83(3)
7.2.2 Dealing with multiple study types 86(1)
7.3 Procedures for assessing quality 86(2)
7.4 Examples of quality assessment criteria and procedures 88(5)
8 Extracting Study Data 93(8)
8.1 Overview of data extraction 93(2)
8.2 Examples of extracted data and extraction procedures 95(6)
9 Mapping Study Analysis 101(10)
9.1 Analysis of publication details 102(1)
9.2 Classification analysis 103(3)
9.3 Automated content analysis 106(4)
9.4 Clusters, gaps, and models 110(1)
10 Qualitative Synthesis 111(22)
10.1 Qualitative synthesis in software engineering research 112(1)
10.2 Qualitative analysis terminology and concepts 113(3)
10.3 Using qualitative synthesis methods in software engineering systematic reviews 116(1)
10.4 Description of qualitative synthesis methods 117(12)
10.4.1 Meta-ethnography 118(2)
10.4.2 Narrative synthesis 120(1)
10.4.3 Qualitative cross-case analysis 121(2)
10.4.4 Thematic analysis 123(1)
10.4.5 Meta-summary 124(3)
10.4.6 Vote counting 127(2)
10.5 General problems with qualitative meta-synthesis 129(4)
10.5.1 Primary study quality assessment 129(1)
10.5.2 Validation of meta-syntheses 130(3)
11 Meta-Analysis with Lech Madeyski 133(22)
11.1 Meta-analysis example 134(1)
11.2 Effect sizes 135(9)
11.2.1 Mean difference 136(2)
11.2.2 Standardised mean difference 138(1)
11.2.2.1 Standardised mean difference effect size 138(2)
11.2.2.2 Standardised difference effect size variance 140(1)
11.2.2.3 Adjustment for small sample sizes 141(1)
11.2.3 The correlation coefficient effect size 141(1)
11.2.4 Proportions and counts 142(2)
11.3 Conversion between different effect sizes 144(1)
11.3.1 Conversions between d and r 144(1)
11.3.2 Conversion between log odds and d 144(1)
11.4 Meta-analysis methods 145(3)
11.4.1 Meta-analysis models 145(1)
11.4.2 Meta-analysis calculations 146(2)
11.5 Heterogeneity 148(3)
11.6 Moderator analysis 151(1)
11.7 Additional analyses 152(3)
11.7.1 Publication bias 152(1)
11.7.2 Sensitivity analysis 153(2)
12 Reporting a Systematic Review 155(10)
12.1 Planning reports 157(1)
12.2 Writing reports 158(4)
12.3 Validating reports 162(3)
13 Tool Support for Systematic Reviews with Christopher Marshall 165(8)
13.1 Review tools in other disciplines 166(3)
13.2 Tools for software engineering reviews 169(4)
14 Evidence to Practice: Knowledge Translation and Diffusion 173(22)
14.1 What is knowledge translation? 175(2)
14.2 Knowledge translation in the context of software engineering 177(3)
14.3 Examples of knowledge translation in software engineering 180(3)
14.3.1 Assessing software cost uncertainty 180(1)
14.3.2 Effectiveness of pair programming 181(1)
14.3.3 Requirements elicitation techniques 181(1)
14.3.4 Presenting recommendations 182(1)
14.4 Diffusion of software engineering knowledge 183(1)
14.5 Systematic reviews for software engineering education 184(11)
14.5.1 Selecting the studies 185(1)
14.5.2 Topic coverage 186(1)
Further Reading for Part I 187(8)
II The Systematic Reviewer's Perspective of Primary Studies 195(96)
15 Primary Studies and Their Role in EBSE 197(14)
15.1 Some characteristics of primary studies 199(2)
15.2 Forms of primary study used in software engineering 201(2)
15.3 Ethical issues 203(2)
15.4 Reporting primary studies 205(3)
15.4.1 Meeting the needs of a secondary study 205(3)
15.4.2 What needs to be reported? 208(1)
15.5 Replicated studies 208(3)
Further reading 209(2)
16 Controlled Experiments and Quasi-Experiments 211(22)
16.1 Characteristics of controlled experiments and quasi-experiments 212(5)
16.1.1 Controlled experiments 212(2)
16.1.2 Quasi-experiments 214(1)
16.1.3 Problems with experiments in software engineering 215(2)
16.2 Conducting experiments and quasi-experiments 217(8)
16.2.1 Dependent variables, independent variables and confounding factors 218(1)
16.2.2 Hypothesis testing 219(2)
16.2.3 The design of formal experiments 221(1)
16.2.4 The design of quasi-experiments 222(1)
16.2.5 Threats to validity 223(2)
16.3 Research questions that can be answered by using experiments and quasi-experiments 225(2)
16.3.1 Pair designing 226(1)
16.3.2 Comparison of diagrammatical forms 227(1)
16.3.3 Effort estimation 227(1)
16.4 Examples from the software engineering literature 227(2)
16.4.1 Randomised experiment: Between subjects 228(1)
16.4.2 Quasi-experiment: Within-subjects before-after study 228(1)
16.4.3 Quasi-experiment: Within-subjects cross-over study 228(1)
16.4.4 Quasi-experiment: Interrupted time series 229(1)
16.5 Reporting experiments and quasi-experiments 229(4)
Further reading 230(3)
17 Surveys 233(12)
17.1 Characteristics of surveys 234(2)
17.2 Conducting surveys 236(2)
17.3 Research questions that can be answered by using surveys 238(1)
17.4 Examples of surveys from the software engineering literature 239(3)
17.4.1 Software development risk 240(1)
17.4.2 Software design patterns 240(2)
17.4.3 Use of the UML 242(1)
17.5 Reporting surveys 242(3)
Further reading 242(3)
18 Case Studies 245(14)
18.1 Characteristics of case studies 247(1)
18.2 Conducting case study research 248(5)
18.2.1 Single-case versus multiple-case 249(1)
18.2.2 Choice of the units of analysis 250(1)
18.2.3 Organising a case study 251(2)
18.3 Research questions that can be answered by using case studies 253(2)
18.4 Example of a case study from the software engineering literature 255(1)
18.4.1 Why use a case study? 255(1)
18.4.2 Case study parameters 256(1)
18.5 Reporting case studies 256(3)
Further reading 258(1)
19 Qualitative Studies 259(12)
19.1 Characteristics of a qualitative study 259(1)
19.2 Conducting qualitative research 260(2)
19.3 Research questions that can be answered using qualitative studies 262(1)
19.4 Examples of qualitative studies in software engineering 262(5)
19.4.1 Mixed qualitative and quantitative studies 263(2)
19.4.2 Fully qualitative studies 265(2)
19.5 Reporting qualitative studies 267(4)
Further reading 268(3)
20 Data Mining Studies 271(8)
20.1 Characteristics of data mining studies 272(1)
20.2 Conducting data mining research in software engineering 272(2)
20.3 Research questions that can be answered by data mining 274(1)
20.4 Examples of data mining studies 275(1)
20.5 Problems with data mining studies in software engineering 276(1)
20.6 Reporting data mining studies 277(2)
Further reading 278(1)
21 Replicated and Distributed Studies 279(12)
21.1 What is a replication study? 279(3)
21.2 Replications in software engineering 282(4)
21.2.1 Categorising replication forms 282(2)
21.2.2 How widely are replications performed? 284(2)
21.2.3 Reporting replicated studies 286(1)
21.3 Including replications in systematic reviews 286(1)
21.4 Distributed studies 287(4)
Further reading 289(2)
III Guidelines for Systematic Reviews 291(66)
22 Systematic Review and Mapping Study Procedures 293(64)
22.1 Introduction 295(2)
22.2 Preliminaries 297(1)
22.3 Review management 298(1)
22.4 Planning a systematic review 299(7)
22.4.1 The need for a systematic review or mapping study 299(3)
22.4.2 Specifying research questions 302(1)
22.4.2.1 Research questions for systematic reviews 302(1)
22.4.2.2 Research questions for mapping studies 302(2)
22.4.3 Developing the protocol 304(1)
22.4.4 Validating the protocol 304(2)
22.5 The search process 306(9)
22.5.1 The search strategy 306(1)
22.5.1.1 Is completeness critical? 306(1)
22.5.1.2 Validating the search strategy 307(2)
22.5.1.3 Deciding which search methods to use 309(1)
22.5.2 Automated searches 310(1)
22.5.2.1 Sources to search for an automated search 310(1)
22.5.2.2 Constructing search strings 311(2)
22.5.3 Selecting sources for a manual search 313(1)
22.5.4 Problems with the search process 314(1)
22.6 Primary study selection process 315(6)
22.6.1 A team-based selection process 315(3)
22.6.2 Selection processes for lone researchers 318(1)
22.6.3 Selection process problems 318(1)
22.6.4 Papers versus studies 319(2)
22.6.5 The interaction between the search and selection processes 321(1)
22.7 Validating the search and selection process 321(1)
22.8 Quality assessment 322(9)
22.8.1 Is quality assessment necessary? 323(1)
22.8.2 Quality assessment criteria 323(1)
22.8.2.1 Primary study quality 323(1)
22.8.2.2 Strength of evidence supporting review findings 324(4)
22.8.3 Using quality assessment results 328(1)
22.8.4 Managing the quality assessment process 328(1)
22.8.4.1 A team-based quality assessment process 329(1)
22.8.4.2 Quality assessment for lone researchers 330(1)
22.9 Data extraction 331(12)
22.9.1 Data extraction for quantitative systematic reviews 331(1)
22.9.1.1 Data extraction planning for quantitative systematic reviews 331(3)
22.9.1.2 Data extraction team process for quantitative systematic reviews 334(1)
22.9.1.3 Quantitative systematic reviews data extraction process for lone researchers 335(1)
22.9.2 Data extraction for qualitative systematic reviews 336(1)
22.9.2.1 Planning data extraction for qualitative systematic reviews 337(1)
22.9.2.2 Data extraction process for qualitative systematic reviews 337(1)
22.9.3 Data extraction for mapping studies 338(1)
22.9.3.1 Planning data extraction for mapping studies 338(2)
22.9.3.2 Data extraction process for mapping studies 340(2)
22.9.4 Validating the data extraction process 342(1)
22.9.5 General data extraction issues 342(1)
22.10 Data aggregation and synthesis 343(10)
22.10.1 Data synthesis for quantitative systematic reviews 343(1)
22.10.1.1 Data synthesis using meta-analysis 344(2)
22.10.1.2 Reporting meta-analysis results 346(1)
22.10.1.3 Vote counting for quantitative systematic reviews 347(1)
22.10.2 Data synthesis for qualitative systematic reviews 348(2)
22.10.3 Data aggregation for mapping studies 350(1)
22.10.3.1 Tables versus graphics 351(1)
22.10.4 Data synthesis validation 351(2)
22.11 Reporting the systematic review 353(4)
22.11.1 Systematic review readership 353(1)
22.11.2 Report structure 353(2)
22.11.3 Validating the report 355(2)
Appendix: Catalogue of Systematic Reviews Relevant to Education and Practice with Sarah Drummond and Nikki Williams 357(10)
A.1 Professional Practice (PRF) 358(1)
A.2 Modelling and Analysis (MAA) 359(2)
A.3 Software Design (DES) 361(1)
A.4 Validation and Verification (VAV) 361(1)
A.5 Software Evolution (EVO) 362(1)
A.6 Software Process (PRO) 363(1)
A.7 Software Quality (QUA) 364(1)
A.8 Software Management (MGT) 365(2)
Bibliography 367(24)
Index 391
Barbara Ann Kitchenham, David Budgen, Pearl Brereton