so that f1 ≤ f2 ≤ ... ≤ fn.
Compute p(1), p(2), …, p(n)

for j = 1 to n
    M[j] = empty        (M[] is a global array)
M[0] = 0

M-Compute-Opt(n) {
    if (M[n] is empty)
        M[n] = max(vn + M-Compute-Opt(p(n)), M-Compute-Opt(n-1))
    return M[n]
}

Weighted Interval Scheduling: Memoization

Memoization. Store results of each subproblem in a cache; look up as needed.

Weighted Interval Scheduling: Running Time

Claim. Memoized version of algorithm takes O(n log n) time.
- Sort by finish time: O(n log n).
- Computing p(·): O(n) after sorting by start time.
- M-Compute-Opt(j): each invocation takes O(1) time and either
  (i) returns an existing value M[j], or
  (ii) fills in one new entry M[j] and makes two recursive calls.
- Progress measure Φ = number of nonempty entries of M[].
  - Initially Φ = 0; throughout, Φ ≤ n.
  - Case (ii) increases Φ by 1 ⇒ at most 2n recursive calls.
- Overall running time of M-Compute-Opt(n) is O(n) after sorting.

Remark. O(n) overall if jobs are presorted by start and finish times.

Weighted Interval Scheduling: Bottom-Up

Bottom-up dynamic programming. Unwind recursion.

Input: n, s1,…,sn, f1,…,fn, v1,…,vn

Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn.
Compute p(1), p(2), …, p(n)

Iterative-Compute-Opt {
    M[0] = 0
    for j = 1 to n
        M[j] = max(vj + M[p(j)], M[j-1])
}

Dynamic Programming

Recall that the main idea of dynamic programming is to save running time by avoiding recomputing the same subproblems. What is the most important step in designing a dynamic programming algorithm? Finding a good way to decompose the problem into many overlapping subproblems.

Knapsack Problem

Knapsack problem.
- Given n objects and a knapsack.
- Item i weighs wi > 0 kilograms and has value vi > 0.
- Knapsack has capacity of W kilograms.
- Goal: fill knapsack so as to maximize total value.

    Item    Value    Weight
      1        1        1
      2        6        2
      3       18        5
      4       22        6
      5       28        7

    W = 11

Ex: { 3, 4 } has value 40.

Greedy: repeatedly add item with maximum ratio vi / wi.
Ex: { 5, 2, 1 } achieves only value = 35 ⇒ greedy not optimal.

Dynamic Programming: False Start

Def.
OPT(i) = max profit subset of items 1, …, i.
- Case 1: OPT does not select item i.
  - OPT selects best of { 1, 2, …, i-1 }.
- Case 2: OPT selects item i.
  - Accepting item i does not immediately imply that we will have to reject other items.
  - Without knowing what other items were selected before i, we don't even know if we have enough room for i.

Conclusion. Need more subproblems!

Dynamic Programming: Adding a New Variable

Def. OPT(i, w) = max profit subset of items 1, …, i with weight limit w.
- Case 1: OPT does not select item i.
  - OPT selects best of { 1, 2, …, i-1 } using weight limit w.
- Case 2: OPT selects item i.
  - New weight limit = w - wi.
  - OPT selects best of { 1, 2, …, i-1 } using this new weight limit.

    OPT(i, w) = 0                                            if i = 0
                OPT(i-1, w)                                  if wi > w
                max{ OPT(i-1, w), vi + OPT(i-1, w - wi) }    otherwise

Knapsack Problem: Bottom-Up

Knapsack. Fill up an n-by-W array.

Input: n, W, w1,…,wn, v1,…,vn

for w = 0 to W
    M[0, w] = 0

for i = 1 to n
    for w = 1 to W
        if (wi > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max {M[i-1, w], vi + M[i-1, w-wi]}

return M[n, W]

Knapsack Algorithm

Filled-in table M[i, w] for the example instance (items as above, W = 11):

                        w = 0   1   2   3   4   5   6   7   8   9  10  11
    φ                       0   0   0   0   0   0   0   0   0   0   0   0
    { 1 }                   0   1   1   1   1   1   1   1   1   1   1   1
    { 1, 2 }                0   1   6   7   7   7   7   7   7   7   7   7
    { 1, 2, 3 }             0   1   6   7   7  18  19  24  25  25  25  25
    { 1, 2, 3, 4 }          0   1   6   7   7  18  22  24  28  29  29  40
    { 1, 2, 3, 4, 5 }       0   1   6   7   7  18  22  28  29  34  34  40

OPT: { 4, 3 }, value = 22 + 18 = 40.

Knapsack Problem: Running Time

Running time. Θ(n W).
- Not polynomial in input size! (W needs only log W bits to write down.)
- "Pseudo-polynomial."
- Decision version of Knapsack is NP-complete.

Knapsack approximation algorithm. For any fixed ε > 0, there is a polynomial-time algorithm that produces a feasible solution whose value is within a (1 − ε) factor of the optimum.
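Returning to the memoized weighted interval scheduling algorithm above, here is a minimal Python sketch of M-Compute-Opt. The tuple-based job format and function names are my own illustrative choices, not from the slides; p(j) is computed by binary search over the sorted finish times.

```python
import bisect

def weighted_interval_scheduling(jobs):
    """Max total value of mutually compatible jobs.
    jobs: list of (start, finish, value) tuples."""
    jobs = sorted(jobs, key=lambda j: j[1])        # sort by finish time
    starts = [s for s, f, v in jobs]
    finishes = [f for s, f, v in jobs]
    n = len(jobs)

    # p[j] = largest index i < j (1-indexed) with finish_i <= start_j; 0 if none.
    # bisect_right counts how many of the first j-1 finish times are <= start_j.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect.bisect_right(finishes, starts[j - 1], 0, j - 1)

    M = [None] * (n + 1)                           # memo table, M[0] = 0
    M[0] = 0

    def m_compute_opt(j):
        if M[j] is None:                           # fill entry at most once
            vj = jobs[j - 1][2]
            M[j] = max(vj + m_compute_opt(p[j]),   # take job j
                       m_compute_opt(j - 1))       # skip job j
        return M[j]

    return m_compute_opt(n)
```

For example, with jobs (0, 3, 5), (1, 4, 1), (3, 6, 8), (5, 7, 4), the optimum picks the first and third jobs for total value 13.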
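The bottom-up knapsack recurrence can likewise be sketched in Python (the function name and list-based parameters are illustrative assumptions). On the example instance above it reproduces the table's final entry M[n, W] = 40.

```python
def knapsack(values, weights, W):
    """Bottom-up 0/1 knapsack: max total value within capacity W.
    M[i][w] = best value achievable using items 1..i with weight limit w."""
    n = len(values)
    M = [[0] * (W + 1) for _ in range(n + 1)]      # row 0: no items, value 0
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for w in range(1, W + 1):
            if wi > w:                             # item i cannot fit
                M[i][w] = M[i - 1][w]
            else:                                  # skip item i vs. take it
                M[i][w] = max(M[i - 1][w], vi + M[i - 1][w - wi])
    return M[n][W]
```

Running knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11) returns 40, matching the optimal subset { 3, 4 }. The Θ(nW) table size is exactly why the running time is pseudo-polynomial.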