
Garbage Collection Handbook: The Art of Automatic Memory Management [Paperback]

4.42/5 (53 ratings by Goodreads)
Richard Jones (University of Kent, Canterbury, Kent, United Kingdom), Antony Hosking (Purdue University, USA), Eliot Moss (University of Massachusetts, Amherst, USA)
  • Paperback
  • Price: 68,14 €*
  • * This book is no longer in print. You will be notified of the price of a used copy.

Published in 1996, Richard Jones’s Garbage Collection was a milestone in the area of automatic memory management. The field has grown considerably since then, sparking a need for an updated look at the latest state-of-the-art developments. The Garbage Collection Handbook: The Art of Automatic Memory Management brings together a wealth of knowledge gathered by automatic memory management researchers and developers over the past fifty years. The authors compare the most important approaches and state-of-the-art techniques in a single, accessible framework.





The book addresses new challenges to garbage collection made by recent advances in hardware and software. It explores the consequences of these changes for designers and implementers of high performance garbage collectors. Along with simple and traditional algorithms, the book covers parallel, incremental, concurrent, and real-time garbage collection. Algorithms and concepts are often described with pseudocode and illustrations.
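To give a flavour of the simplest of these algorithms, here is a minimal, illustrative sketch of classic mark-sweep collection (the subject of the book's Chapter 2). The `Obj` class, the object graph, and the names are invented for this example; a real collector works on raw memory, not Python objects.

```python
class Obj:
    """A toy heap object: a name plus outgoing references."""
    def __init__(self, name):
        self.name = name
        self.refs = []       # references to other objects
        self.marked = False  # collector's mark bit

def mark(roots):
    # Trace from the roots, marking every reachable object (depth-first).
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)

def sweep(heap):
    # Reclaim unmarked objects; clear marks on survivors for the next cycle.
    live = []
    for obj in heap:
        if obj.marked:
            obj.marked = False
            live.append(obj)
    return live

# Tiny heap: the root reaches a, a references b; c is unreachable garbage.
a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)
heap = [a, b, c]
mark([a])
heap = sweep(heap)
print([o.name for o in heap])  # ['a', 'b']
```

The two phases shown here — trace-and-mark, then sweep — are exactly the structure the book's pseudocode elaborates with bitmaps, lazy sweeping, and cache-conscious marking loops.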





The nearly universal adoption of garbage collection by modern programming languages makes a thorough understanding of this topic essential for any programmer. This authoritative handbook gives expert insight on how different collectors work as well as the various issues currently facing garbage collectors. Armed with this knowledge, programmers can confidently select and configure the many choices of garbage collectors.
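In practice, that selection is often a matter of runtime configuration. As one concrete (and hedged) illustration, HotSpot JVMs expose collector choice and tuning goals through command-line flags; the application jar below is hypothetical, and flag availability varies by JDK version:

```shell
# Choose and tune a collector on a HotSpot JVM (app.jar is a placeholder):
java -XX:+UseG1GC -Xmx4g -XX:MaxGCPauseMillis=100 -jar app.jar  # region-based collector with a pause-time goal
java -XX:+UseZGC -Xmx4g -jar app.jar                            # low-pause concurrent collector (JDK 11+)
java -XX:+UseSerialGC -Xmx256m -jar app.jar                     # simple stop-the-world collector for small heaps
```

Understanding the trade-offs catalogued in this handbook — throughput, pause time, and space overhead — is what makes choices like these informed rather than guesswork.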



Web Resource
The book’s online bibliographic database at www.gchandbook.org includes over 2,500 garbage collection-related publications. Continually updated, it contains abstracts for some entries and URLs or DOIs for most of the electronically available ones. The database can be searched online or downloaded as BibTeX, PostScript, or PDF.



E-book
This edition enhances the print version with copious clickable links to algorithms, figures, original papers and definitions of technical terms. In addition, each index entry links back to where it was mentioned in the text, and each entry in the bibliography includes links back to where it was cited.

Reviews

The Garbage Collection Handbook is the most up-to-date, detailed, and exhaustive collation and description of the current state of the art of garbage collection and automatic memory management available today. It is an imperative reference book for anyone working in the field, and I would consider it the textbook of reference covering GC 101 through GC 530 course levels, if such courses were given at universities worldwide. As CTO of Azul Systems and co-creator of multiple modern concurrent collectors, I found Richard Jones's previous Garbage Collection book indispensable to my work over the years. The Garbage Collection Handbook has immediately taken its place. Each of our GC engineers has a copy on their desk. — Gil Tene, Chief Technical Officer and Co-founder, Azul Systems

In a field replete with ephemera, this book, just like its predecessor, stands as a monumental work that will last for decades. — Dr. Mario Wolczko, Research Director, Oracle Labs

List of Algorithms  xv
List of Figures  xix
List of Tables  xxi
Preface  xxiii
Acknowledgements  xxvii
Authors  xxix
1 Introduction  1
  1.1 Explicit deallocation  2
  1.2 Automatic dynamic memory management  3
  1.3 Comparing garbage collection algorithms  5
    Safety  6
    Throughput  6
    Completeness and promptness  6
    Pause time  7
    Space overhead  8
    Optimisations for specific languages  8
    Scalability and portability  9
  1.4 A performance disadvantage?  9
  1.5 Experimental methodology  10
  1.6 Terminology and notation  11
    The heap  11
    The mutator and the collector  12
    The mutator roots  12
    References, fields and addresses  13
    Liveness, correctness and reachability  13
    Pseudo-code  14
    The allocator  14
    Mutator read and write operations  14
    Atomic operations  15
    Sets, multisets, sequences and tuples  15
2 Mark-sweep garbage collection  17
  2.1 The mark-sweep algorithm  18
  2.2 The tricolour abstraction  20
  2.3 Improving mark-sweep  21
  2.4 Bitmap marking  22
  2.5 Lazy sweeping  24
  2.6 Cache misses in the marking loop  27
  2.7 Issues to consider  29
    Mutator overhead  29
    Throughput  29
    Space usage  29
    To move or not to move?  30
3 Mark-compact garbage collection  31
  3.1 Two-finger compaction  32
  3.2 The Lisp 2 algorithm  34
  3.3 Threaded compaction  36
  3.4 One-pass algorithms  38
  3.5 Issues to consider  40
    Is compaction necessary?  40
    Throughput costs of compaction  41
    Long-lived data  41
    Locality  41
    Limitations of mark-compact algorithms  42
4 Copying garbage collection  43
  4.1 Semispace copying collection  43
    Work list implementations  44
    An example  46
  4.2 Traversal order and locality  46
  4.3 Issues to consider  53
    Allocation  53
    Space and locality  54
    Moving objects  55
5 Reference counting  57
  5.1 Advantages and disadvantages of reference counting  58
  5.2 Improving efficiency  60
  5.3 Deferred reference counting  61
  5.4 Coalesced reference counting  63
  5.5 Cyclic reference counting  66
  5.6 Limited-field reference counting  72
  5.7 Issues to consider  73
    The environment  73
    Advanced solutions  74
6 Comparing garbage collectors  77
  6.1 Throughput  77
  6.2 Pause time  78
  6.3 Space  78
  6.4 Implementation  79
  6.5 Adaptive systems  80
  6.6 A unified theory of garbage collection  80
    Abstract garbage collection  81
    Tracing garbage collection  81
    Reference counting garbage collection  82
7 Allocation  87
  7.1 Sequential allocation  87
  7.2 Free-list allocation  88
    First-fit allocation  89
    Next-fit allocation  90
    Best-fit allocation  90
    Speeding free-list allocation  92
  7.3 Fragmentation  93
  7.4 Segregated-fits allocation  93
    Fragmentation  95
    Populating size classes  95
  7.5 Combining segregated-fits with first-, best- and next-fit  96
  7.6 Additional considerations  97
    Alignment  97
    Size constraints  98
    Boundary tags  98
    Heap parsability  98
    Locality  100
    Wilderness preservation  100
    Crossing maps  101
  7.7 Allocation in concurrent systems  101
  7.8 Issues to consider  102
8 Partitioning the heap  103
  8.1 Terminology  103
  8.2 Why to partition  103
    Partitioning by mobility  104
    Partitioning by size  104
    Partitioning for space  104
    Partitioning by kind  105
    Partitioning for yield  105
    Partitioning for responsiveness  106
    Partitioning for locality  106
    Partitioning by thread  107
    Partitioning by availability  107
    Partitioning by mutability  108
  8.3 How to partition  108
  8.4 When to partition  109
9 Generational garbage collection  111
  9.1 Example  112
  9.2 Measuring time  113
  9.3 Generational hypotheses  113
  9.4 Generations and heap layout  114
  9.5 Multiple generations  115
  9.6 Age recording  116
    En masse promotion  116
    Aging semispaces  116
    Survivor spaces and flexibility  119
  9.7 Adapting to program behaviour  121
    Appel-style garbage collection  121
    Feedback controlled promotion  123
  9.8 Inter-generational pointers  123
    Remembered sets  124
    Pointer direction  125
  9.9 Space management  126
  9.10 Older-first garbage collection  127
  9.11 Beltway  130
  9.12 Analytic support for generational collection  132
  9.13 Issues to consider  133
  9.14 Abstract generational garbage collection  134
10 Other partitioned schemes  137
  10.1 Large object spaces  137
    The Treadmill garbage collector  138
    Moving objects with operating system support  139
    Pointer-free objects  140
  10.2 Topological collectors  140
    Mature object space garbage collection  140
    Connectivity-based garbage collection  143
    Thread-local garbage collection  144
    Stack allocation  147
    Region inferencing  148
  10.3 Hybrid mark-sweep, copying collectors  149
    Garbage-First  150
    Immix and others  151
    Copying collection in a constrained memory space  154
  10.4 Bookmarking garbage collection  156
  10.5 Ulterior reference counting  157
  10.6 Issues to consider  158
11 Run-time interface  161
  11.1 Interface to allocation  161
    Speeding allocation  164
    Zeroing  165
  11.2 Finding pointers  166
    Conservative pointer finding  166
    Accurate pointer finding using tagged values  168
    Accurate pointer finding in objects  169
    Accurate pointer finding in global roots  171
    Accurate pointer finding in stacks and registers  171
    Accurate pointer finding in code  181
    Handling interior pointers  182
    Handling derived pointers  183
  11.3 Object tables  184
  11.4 References from external code  185
  11.5 Stack barriers  186
  11.6 GC-safe points and mutator suspension  187
  11.7 Garbage collecting code  190
  11.8 Read and write barriers  191
    Engineering  191
    Precision of write barriers  192
    Hash tables  194
    Sequential store buffers  195
    Overflow action  196
    Card tables  197
    Crossing maps  199
    Summarising cards  201
    Hardware and virtual memory techniques  202
    Write barrier mechanisms: in summary  202
    Chunked lists  203
  11.9 Managing address space  203
  11.10 Applications of virtual memory page protection  205
    Double mapping  206
    Applications of no-access pages  206
  11.11 Choosing heap size  208
  11.12 Issues to consider  210
12 Language-specific concerns  213
  12.1 Finalisation  213
    When do finalisers run?  214
    Which thread runs a finaliser?  215
    Can finalisers run concurrently with each other?  216
    Can finalisers access the object that became unreachable?  216
    When are finalised objects reclaimed?  216
    What happens if there is an error in a finaliser?  217
    Is there any guaranteed order to finalisation?  217
    The finalisation race problem  218
    Finalisers and locks  219
    Finalisation in particular languages  219
    For further study  221
  12.2 Weak references  221
    Additional motivations  222
    Supporting multiple pointer strengths  223
    Using Phantom objects to control finalisation order  225
    Race in weak pointer clearing  226
    Notification of weak pointer clearing  226
    Weak pointers in other languages  226
  12.3 Issues to consider  228
13 Concurrency preliminaries  229
  13.1 Hardware  229
    Processors and threads  229
    Interconnect  230
    Memory  231
    Caches  231
    Coherence  232
    Cache coherence performance example: spin locks  232
  13.2 Hardware memory consistency  234
    Fences and happens-before  236
    Consistency models  236
  13.3 Hardware primitives  237
    Compare-and-swap  237
    Load-linked/store-conditionally  238
    Atomic arithmetic primitives  240
    Test then test-and-set  240
    More powerful primitives  240
    Overheads of atomic primitives  242
  13.4 Progress guarantees  243
    Progress guarantees and concurrent collection  244
  13.5 Notation used for concurrent algorithms  245
  13.6 Mutual exclusion  246
  13.7 Work sharing and termination detection  248
    Rendezvous barriers  251
  13.8 Concurrent data structures  253
    Concurrent stacks  256
    Concurrent queue implemented with singly linked list  256
    Concurrent queue implemented with array  261
    A concurrent deque for work stealing  267
  13.9 Transactional memory  267
    What is transactional memory?  267
    Using transactional memory to help implement collection  270
    Supporting transactional memory in the presence of garbage collection  272
  13.10 Issues to consider  273
14 Parallel garbage collection  275
  14.1 Is there sufficient work to parallelise?  276
  14.2 Load balancing  277
  14.3 Synchronisation  278
  14.4 Taxonomy  279
  14.5 Parallel marking  279
    Processor-centric techniques  280
  14.6 Parallel copying  289
    Processor-centric techniques  289
    Memory-centric techniques  294
  14.7 Parallel sweeping  299
  14.8 Parallel compaction  299
  14.9 Issues to consider  302
    Terminology  302
    Is parallel collection worthwhile?  303
    Strategies for balancing loads  303
    Managing tracing  303
    Low-level synchronisation  305
    Sweeping and compaction  305
    Termination  306
15 Concurrent garbage collection  307
  15.1 Correctness of concurrent collection  309
    The tricolour abstraction, revisited  309
    The lost object problem  310
    The strong and weak tricolour invariants  312
    Precision  313
    Mutator colour  313
    Allocation colour  314
    Incremental update solutions  314
    Snapshot-at-the-beginning solutions  314
  15.2 Barrier techniques for concurrent collection  315
    Grey mutator techniques  315
    Black mutator techniques  317
    Completeness of barrier techniques  317
    Concurrent write barrier mechanisms  318
    One-level card tables  319
    Two-level card tables  319
    Reducing work  320
  15.3 Issues to consider  321
16 Concurrent mark-sweep  323
  16.1 Initialisation  323
  16.2 Termination  324
  16.3 Allocation  325
  16.4 Concurrent marking and sweeping  326
  16.5 On-the-fly marking  328
    Write barriers for on-the-fly collection  328
    Doligez-Leroy-Gonthier  329
    Doligez-Leroy-Gonthier for Java  330
    Sliding views  331
  16.6 Abstract concurrent collection  331
    The collector wavefront  334
    Adding origins  334
    Mutator barriers  334
    Precision  334
    Instantiating collectors  335
  16.7 Issues to consider  335
17 Concurrent copying & compaction  337
  17.1 Mostly-concurrent copying: Baker's algorithm  337
    Mostly-concurrent, mostly-copying collection  338
  17.2 Brooks's indirection barrier  340
  17.3 Self-erasing read barriers  340
  17.4 Replication copying  341
  17.5 Multi-version copying  342
    Extensions to avoid copy-on-write  344
  17.6 Sapphire  345
    Collector phases  346
    Merging phases  351
    Volatile fields  351
  17.7 Concurrent compaction  351
    Compressor  352
    Pauseless  355
  17.8 Issues to consider  361
18 Concurrent reference counting  363
  18.1 Simple reference counting revisited  363
  18.2 Buffered reference counting  366
  18.3 Concurrent, cyclic reference counting  366
  18.4 Taking a snapshot of the heap  368
  18.5 Sliding views reference counting  369
    Age-oriented collection  370
    The algorithm  370
    Sliding views cycle reclamation  372
    Memory consistency  373
  18.6 Issues to consider  374
19 Real-time garbage collection  375
  19.1 Real-time systems  375
  19.2 Scheduling real-time collection  376
  19.3 Work-based real-time collection  377
    Parallel, concurrent replication  377
    Uneven work and its impact on work-based scheduling  384
  19.4 Slack-based real-time collection  386
    Scheduling the collector work  389
    Execution overheads  390
    Programmer input  391
  19.5 Time-based real-time collection: Metronome  391
    Mutator utilisation  391
    Supporting predictability  393
    Analysis  395
    Robustness  399
  19.6 Combining scheduling approaches: Tax-and-Spend  399
    Tax-and-Spend scheduling  400
    Tax-and-Spend prerequisites  401
  19.7 Controlling fragmentation  403
    Incremental compaction in Metronome  404
    Incremental replication on uniprocessors  405
    Stopless: lock-free garbage collection  406
    Staccato: best-effort compaction with mutator wait-freedom  407
    Chicken: best-effort compaction with mutator wait-freedom for x86  410
    Clover: guaranteed compaction with probabilistic mutator lock-freedom  410
    Stopless versus Chicken versus Clover  412
    Fragmented allocation  412
  19.8 Issues to consider  415
Glossary  419
Bibliography  433
Index  469
Richard Jones is a professor of computer systems in the School of Computing at the University of Kent, Canterbury. He earned a B.A. in mathematics from Oxford University and an M.Sc. in computer science from the University of Kent. He spent a few years teaching at school and college before returning to higher education at the University of Kent.





Antony Hosking is an associate professor in the Department of Computer Science at Purdue University. He earned a B.Sc. in mathematical sciences from the University of Adelaide, Australia, an M.Sc. in computer science from the University of Waikato, New Zealand, and a Ph.D. in computer science from the University of Massachusetts. His work is in the area of programming language design and implementation, with specific interests in database and persistent programming languages, object-oriented database systems, dynamic memory management, compiler optimizations, and architectural support for programming languages and applications.





Eliot Moss is a professor in the Department of Computer Science at the University of Massachusetts Amherst. He earned a B.S.E.E., an M.S.E.E., and a Ph.D. in computer science from the Massachusetts Institute of Technology. After four years of military service, Dr. Moss joined the faculty at the University of Massachusetts Amherst. He works in the area of programming languages and their implementation and has built garbage collectors since 1978. In addition to his research on automatic memory management, he is known for his work on persistent programming languages, virtual machine implementation, and transactional memory. He worked with IBM researchers to license the Jikes RVM Java virtual machine for academic research, which eventually led to its release as an open source project.