
E-book: Dynamic and Stochastic Multi-Project Planning

  • Format: PDF + DRM
  • Price: 53.52 €*
  • * This is the final price, i.e., no additional discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you must install free software to unlock and read it. To read this e-book, you need to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

This book deals with dynamic and stochastic methods for multi-project planning. Based on the idea of using queueing networks to analyze dynamic-stochastic multi-project environments, it addresses two problems: the detailed scheduling of project activities, and integrated order acceptance and capacity planning. In an extensive simulation study, the book thoroughly investigates existing scheduling policies. To obtain optimal and near-optimal scheduling policies, it proposes new models and algorithms based on the theory of Markov decision processes and approximate dynamic programming. The book then presents a new model for the effective computation of optimal policies based on a Markov decision process. Finally, it provides insights into the structure of optimal policies.
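The machinery this description refers to (continuous-time Markov decision processes, uniformization, and value iteration; see Chapter 4 in the table of contents below) reduces, after uniformization, to a discrete-time fixed-point computation. The following Python fragment is a purely illustrative sketch, not taken from the book: it runs value iteration on a small random MDP, where the states, transition matrix P, cost matrix c, and discount factor gamma are placeholders rather than the book's scheduling model.

    import numpy as np

    # Toy problem size; the book's scheduling state spaces are far larger.
    n_states, n_actions = 4, 2
    rng = np.random.default_rng(0)

    # P[a, s, t]: probability of moving from state s to t under action a
    # (after uniformization, a continuous-time MDP yields such a matrix).
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)

    # c[s, a]: one-stage cost of taking action a in state s (placeholder).
    c = rng.random((n_states, n_actions))

    gamma = 0.95               # discount factor
    V = np.zeros(n_states)     # value-function estimate

    for _ in range(10_000):
        # Bellman update: Q[s, a] = c[s, a] + gamma * E[V(next state)]
        Q = c + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.min(axis=1)  # act greedily to minimize expected cost
        if np.max(np.abs(V_new - V)) < 1e-9:  # sup-norm stopping test
            V = V_new
            break
        V = V_new

    policy = Q.argmin(axis=1)  # optimal action in each state
    print("Optimal actions:", policy)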
Table of Contents (page references given as "first page(number of pages)")

1 Introduction  1(6)
1.1 Background  1(2)
1.2 Research Focus  3(2)
1.2.1 Order Acceptance and Capacity Planning  3(1)
1.2.2 Resource-Constrained Multi-project Scheduling  4(1)
1.3 Outline  5(2)
2 Problem Statements  7(12)
2.1 General Assumptions and Notation  7(5)
2.1.1 Projects  7(1)
2.1.2 Resources  8(1)
2.1.3 Project Types  9(2)
2.1.4 Objective Functions  11(1)
2.2 Dynamic-Stochastic Multi-project Scheduling Problem  12(3)
2.2.1 Non-preemptive Scheduling Problem  12(2)
2.2.2 Preemptive Scheduling Problem  14(1)
2.3 Order Acceptance and Capacity Planning Problem  15(4)
2.3.1 Multi-project Environment  15(1)
2.3.2 Order Acceptance Decisions  16(1)
2.3.3 Resource Allocation Decisions  17(2)
3 Literature Review  19(10)
3.1 Dynamic Programming and Approximate Dynamic Programming  19(2)
3.2 Project Scheduling  21(5)
3.2.1 Static-Deterministic Project Scheduling  21(1)
3.2.2 Dynamic-Deterministic Project Scheduling  22(1)
3.2.3 Static-Stochastic Project Scheduling  22(2)
3.2.4 Dynamic-Stochastic Project Scheduling  24(2)
3.3 Capacity Planning  26(1)
3.4 Order Acceptance  27(2)
4 Continuous-Time Markov Decision Processes  29(14)
4.1 General Structure  29(1)
4.2 Basic Definitions and Relevant Properties  30(2)
4.3 Objective Function  32(1)
4.4 Evaluation and Optimality Equations  33(1)
4.5 Uniformization  34(2)
4.6 General Solution Methodologies  36(3)
4.6.1 Value Iteration  36(1)
4.6.2 Policy Iteration  36(3)
4.7 Implementation  39(4)
4.7.1 Generation of the State Space  39(2)
4.7.2 Solution Methodologies  41(2)
5 Generation of Problem Instances  43(8)
5.1 Generation of Project Networks  44(1)
5.2 Generation Procedure  44(7)
5.2.1 Step 1: Assignment of Activity Types to Resource Types  44(1)
5.2.2 Step 2: Determination of Expected Durations of the Activity Types  45(2)
5.2.3 Step 3: Variation Check of the Expected Activity Durations  47(2)
5.2.4 Step 4: Adjustments to Resource Type Specific Utilizations  49(1)
5.2.5 Step 5: Check of Project Type Workloads  49(1)
5.2.6 Step 6: Storage of Additional Parameters  50(1)
6 Scheduling Using Priority Policies  51(22)
6.1 Priority Policies  51(6)
6.1.1 Computation of Rule Parameters  52(1)
6.1.2 Priority Rules  53(4)
6.2 Experimental Design  57(4)
6.2.1 Preliminaries  57(1)
6.2.2 Generation of Problem Instances  58(2)
6.2.3 Simulation Setup  60(1)
6.3 Main Effects of Problem Parameters  61(7)
6.3.1 Due Date Tightness  61(1)
6.3.2 Number of Resources  62(2)
6.3.3 Order Strength  64(2)
6.3.4 Variation of Expected Activity Durations  66(1)
6.3.5 Utilization per Resource  66(2)
6.3.6 Observations for Problem Instances with a Single Project Type  68(1)
6.4 Detailed Analysis  68(5)
6.4.1 Performance for Special Cases  68(1)
6.4.2 Performance for the Remaining Problem Instances  69(4)
7 Optimal and Near Optimal Scheduling Policies  73(84)
7.1 Models as a Markov Decision Process  74(27)
7.1.1 Non-preemptive Scheduling Problem  74(8)
7.1.2 Preemptive Scheduling Problem  82(10)
7.1.3 Numerical Example  92(9)
7.2 Optimal Policy for the Single Resource Case Without Preemptions  101(3)
7.3 Project State Ordering Policies  104(14)
7.3.1 Preemptive Project State Ordering Policies  104(10)
7.3.2 Non-preemptive Project State Ordering Policies  114(3)
7.3.3 Project State Ordering Priority Policies  117(1)
7.3.4 Numerical Example  117(1)
7.4 Scheduling Using Approximate Dynamic Programming  118(15)
7.4.1 Basic Idea  118(1)
7.4.2 Approximation Based on the Preemptive Problem  119(4)
7.4.3 Approximation Using Linear Function Approximation  123(10)
7.4.4 Approximation for the Non-preemptive Problem Based on Linear Function Approximation for the Preemptive Problem  133(1)
7.5 Computational Study  133(24)
7.5.1 Experimental Design  134(2)
7.5.2 Priority Policies  136(1)
7.5.3 Simulation Setup  136(1)
7.5.4 Results for the Preemptive Problem  136(4)
7.5.5 Results for the Non-preemptive Problem  140(5)
7.5.6 Performance of Linear Function Approximation  145(12)
8 Integrated Dynamic Order Acceptance and Capacity Planning  157(26)
8.1 Stochastic Dynamic Programming  157(4)
8.1.1 State Variables  157(1)
8.1.2 Decision Variables  158(1)
8.1.3 Exogenous Information Process  159(1)
8.1.4 Transition Function  159(1)
8.1.5 Objective Function  160(1)
8.2 Solution Methodology  161(7)
8.3 Computational Investigation  168(15)
8.3.1 Structure of Optimal Policies  168(6)
8.3.2 Benefit of Crashing and Flexible MPP  174(9)
9 Conclusions and Future Work  183(4)
A Abbreviations  187(2)
B Symbols  189(10)
B.1 General  189(2)
B.1.1 System  189(1)
B.1.2 Markov Decision Processes  189(1)
B.1.3 Projects and Project Types  190(1)
B.1.4 Resources and Resource Types  191(1)
B.2 Generation of Problem Instances  191(1)
B.2.1 Problem Parameters  191(1)
B.2.2 Generation Procedure  192(1)
B.3 Scheduling  192(5)
B.3.1 General  192(1)
B.3.2 Scheduling Using Priority Policies  192(1)
B.3.3 Markov Decision Process for the Non-preemptive Problem  193(1)
B.3.4 Markov Decision Process for the Preemptive Problem  194(1)
B.3.5 Optimal Policy for the Non-preemptive Problem with a Single Resource  194(1)
B.3.6 Preemptive Project State Ordering Policies  195(1)
B.3.7 Non-preemptive Project State Ordering Policies  196(1)
B.3.8 Approximate Dynamic Programming  196(1)
B.4 Order Acceptance and Capacity Planning  197(2)
Bibliography  199
Philipp Melchiors is a consultant at an Operations Research-focused consulting company. Prior to his current position, he worked as a research and teaching assistant at the TUM School of Management, Technische Universität München, where he wrote his Ph.D. thesis on "Dynamic and stochastic multi-project planning".