# Rod Cutting
Serling Enterprises buys long steel rods and cuts them into shorter rods, which it then sells. Each cut is free (its cost is not relevant), and the cost of a long rod is directly proportional to its length.
The management of Serling Enterprises wants to know the best way to cut up the rods.
## Problem Statement
Input:
- A rod of length $n$ inches
- A table of prices $p_i$ for $i = 1, 2, \ldots, n$

Output:
- The maximum revenue $r_n$ obtainable by cutting up the rod and selling the pieces.
- Note that if the price $p_n$ for a rod of length $n$ is large enough, an optimal solution may require no cutting at all.
Example:


## Step 1 - Characterize the structure of an optimal solution
How many ways are there to cut a rod of length $n$?
- At each of the $n-1$ positions between inches, we can either cut or not (two choices), giving $2^{n-1}$ possible decompositions.
- This means a brute-force approach implies an exponential running time.

The maximum revenue for a rod of length $n$ is
$$r_n = \max(p_n,\ r_1 + r_{n-1},\ r_2 + r_{n-2},\ \ldots,\ r_{n-1} + r_1)$$
where $r_i$ is the maximum revenue obtainable from a rod of length $i$ (with $r_0 = 0$).

The optimal solution is a combination of optimal solutions to sub-problems: this is the optimal-substructure property.
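As a sanity check on the counting claim, here is a small brute-force sketch (the price table `p` is a hypothetical example, not from these notes) that enumerates every decomposition of a length-$n$ rod:

```python
from itertools import product

def decompositions(n):
    """Enumerate every way to cut a rod of length n.

    Each of the n-1 positions between inches is either cut or not,
    so we expect 2**(n-1) decompositions for n >= 1.
    """
    result = []
    for cuts in product([False, True], repeat=n - 1):
        pieces, start = [], 0
        for pos, cut in enumerate(cuts, start=1):
            if cut:
                pieces.append(pos - start)
                start = pos
        pieces.append(n - start)  # the final (or only) piece
        result.append(pieces)
    return result

# Hypothetical price table: p[i] is the price of a piece of length i.
p = [0, 1, 5, 8, 9]

n = 4
all_ways = decompositions(n)
print(len(all_ways))                                    # 2**(n-1) = 8
print(max(sum(p[i] for i in way) for way in all_ways))  # brute-force r_4 = 10
```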
## Step 2 - Recursively define the value of an optimal solution
Change of perspective!
- In a related, but slightly simpler, way to arrange a recursive structure for the rod-cutting problem, we view a decomposition as consisting of a first piece of length $i$ cut off the left-hand end, followed by a right-hand remainder of length $n - i$.
- Only the remainder, and not the first piece, may be further divided.

We may therefore view every decomposition of a length-$n$ rod through the recurrence
$$r_n = \max_{1 \leq i \leq n}(p_i + r_{n-i})$$
Where:
- $p_i$ is the price of the first piece, of length $i$
- $r_{n-i}$ is the revenue from the remainder, which may be further divided
- $i$ is the position of the cut
- if $i = n$, there is no cut
## Step 3 - Find the state to memorize and choose an optimal data structure
We already have a vector $r[0 \ldots n]$ in which $r[j]$ stores the maximum revenue obtainable from a rod of length $j$.
## Step 4 - pt.1 Sub-Optimal Solution
Compute the value of an optimal solution.
| Operation | Cut_Rod(Vector p[], Integer n) -> Integer |
|---|---|
| Input | A vector $p$ of prices and the rod length $n$ |
| Output | The maximum revenue $r_n$ |
| Notes | $q$ holds the best revenue so far, initialized to $-1$ (or MIN_INT) |
```python
Cut_Rod(p, n)
    if(n == 0):
        return 0
    else:
        q = -1  # or MIN_INT or neg_infinity()
        for(i = 1 to n):  # summation(from i = 1, to n)
            q = max(q, p[i] + Cut_Rod(p, n-i))  # 1 + T(n-i)
        return q
```
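The pseudocode above translates almost line-for-line into runnable Python; the price table below is a hypothetical example, used only for illustration:

```python
NEG_INF = float("-inf")

def cut_rod(p, n):
    """Naive recursive rod cutting: returns the maximum revenue r_n.

    p[i] is the price of a piece of length i (p[0] is unused).
    Runs in Theta(2**n) time because sub-problems are recomputed.
    """
    if n == 0:
        return 0
    q = NEG_INF  # plays the role of q = -1 / MIN_INT in the pseudocode
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

p = [0, 1, 5, 8, 9, 10]  # hypothetical prices for lengths 0..5
print(cut_rod(p, 4))  # → 10
```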
### Final Time Complexity
$T(n)$ is the total number of calls to `Cut_Rod` when the second parameter is $n$
$$
T(n) = \left\{\begin{matrix}
1 & n=0 \\
1+\sum_{i=1}^{n}T(n-i) & n >0 \\
\end{matrix}\right.
$$
Performing the change of variable $j = n - i$:
$$T(n) = 1+\sum_{i=1}^{n}T(n-i) \stackrel{j=n-i}{\Longrightarrow} 1+\sum_{j=0}^{n-1}T(j) = 2^{n} $$
Where the initial $1$ is for the call at the root, and the term $T(j)$ counts the number of calls
(including recursive calls) due to the call `Cut_Rod(p, n-i)` where $j=n-i$.
$$T(n) = \Theta(2^{n})$$
#### Demonstration
By Induction on $n$:
* Base Case: $n=0 \Rightarrow T(0) = 1 = 2^{0}$, by definition
* Inductive Hypothesis: We assume $T(n) = 2^{n}$ holds for $n$
* Inductive Step: We show it for $n+1$, namely that $T(n+1) = 2^{n+1}$
* $T(n+1) = 1+\sum_{j=0}^{n}T(j) = 1+\sum_{j=0}^{n-1}T(j) + T(n)$, trivially
* $1+\sum_{j=0}^{n-1}T(j) + T(n) = T(n) + T(n)$, by definition
* $T(n) = 2^{n}$, by inductive hypothesis
* $T(n) + T(n) = 2^{n} + 2^{n} = 2 \cdot 2^{n}$, trivially
* $2 \cdot 2^{n} = 2^{n+1}$
Hence $T(n)$ is exponential in $n$.
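The closed form $T(n) = 2^{n}$ can also be checked empirically by instrumenting the naive recursion with a call counter (a quick sketch, not part of the original analysis):

```python
def count_calls(n):
    """Count the total number of calls to the naive Cut_Rod for input n."""
    calls = 0

    def cut_rod(m):
        nonlocal calls
        calls += 1  # count this call, including the root
        if m == 0:
            return
        for i in range(1, m + 1):
            cut_rod(m - i)  # one recursive call per cut position

    cut_rod(n)
    return calls

for n in range(6):
    assert count_calls(n) == 2 ** n  # matches T(n) = 2**n
print([count_calls(n) for n in range(6)])  # → [1, 2, 4, 8, 16, 32]
```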
Why is `Cut_Rod` so inefficient? The problem is that `Cut_Rod` calls itself recursively over and over again with the same
parameter values; it solves the same sub-problems repeatedly.
Example: $n=4$

By observing the tree, we notice
* We have as many leaves as the ways we can cut the rod, namely $2^{n-1}$
* Which means every path is a different way to cut the rod
* The same sub-problems are solved multiple times
* But the distinct sub-problems are not that many: in this case they are only $\Theta(n)$ (one per length)
* Then, we can memoize the solutions to obtain polynomial complexity
## Step 5 - pt.2 Top-Down Optimal Solution
If we save the solution of $n$ sub-problems we can improve the complexity from exponential to polynomial.
* We avoid repeated computations by saving the solution
* Through a vector $r$ which saves the maximum revenue for each length $i$
* Since $p[i] \geq 0$, every revenue is non-negative, so a negative entry can mark a sub-problem as unsolved
* The distinct sub-problems are few (here $\Theta(n)$)
* If the count of sub-problems is polynomial and each is solved in polynomial time, the whole algorithm runs in polynomial time.
```python
Memoized_CR_Aux(p, j, r)
    if(r[j] < 0):  # if we have not yet solved this case
        if(j == 0):
            r[j] = 0
        else:
            q = -1  # max revenue
            for(i = 1 to j):
                q = max(q, p[i] + Memoized_CR_Aux(p, j-i, r))
            r[j] = q
    return r[j]  # if r[j] >= 0 it is already solved
```
```python
Memoized_Cut_Rod(p, n)
    r[0...n]  # a vector with indexes 0 to n where the maximum revenues are saved
    for(i = 0 to n):
        r[i] = -1  # initialize the array so that every cell is negative (unsolved)
    return Memoized_CR_Aux(p, n, r)
```
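A runnable Python rendering of the two memoized procedures above (the price table is a hypothetical example):

```python
def memoized_cr_aux(p, j, r):
    """Solve sub-problem j, reusing r[j] if it is already computed."""
    if r[j] < 0:  # not yet solved
        if j == 0:
            r[j] = 0
        else:
            q = float("-inf")
            for i in range(1, j + 1):
                q = max(q, p[i] + memoized_cr_aux(p, j - i, r))
            r[j] = q
    return r[j]

def memoized_cut_rod(p, n):
    """Top-down rod cutting with memoization: Theta(n**2) time."""
    r = [-1] * (n + 1)  # r[j] < 0 marks sub-problem j as unsolved
    return memoized_cr_aux(p, n, r)

p = [0, 1, 5, 8, 9, 10]  # hypothetical prices for lengths 0..5
print(memoized_cut_rod(p, 5))  # → 13
```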
### Final Time Complexity
**Final Time Complexity**: $T(n) = \sum_{j=1}^{n} j = \frac{n(n+1)}{2} = \Theta(n^{2})$
* The for loop in `Memoized_CR_Aux` is executed only once per sub-problem
* A call for a problem that has already been solved takes constant time.
* So in the worst case we reach the `else` branch only once per sub-problem.
#### Demonstration
The for loop in `Memoized_CR_Aux` is executed only once per sub-problem: a call for a problem that has already been solved takes constant time, so in the worst case we reach the `else` branch only once per sub-problem. To solve a sub-problem of size $j$, the loop executes $j$ iterations, hence
$$T(n) = \Theta(1) + \sum_{j=1}^{n} j \cdot \Theta(1) = \Theta\left(\sum_{j=1}^{n} j\right) = \Theta\left(\frac{n(n+1)}{2}\right) = \Theta(n^{2})$$
## Step 5 - pt.3 Bottom-Up Optimal Solution
We solve the sub-problems in non-decreasing order of size.
```python
BU_Cut_Rod(p, n)
    r[0...n]  # a vector of revenues with indexes 0 to n
    for(i = 0 to n):  # initialization, even though it is often omitted in pseudocode
        r[i] = -1
    r[0] = 0
    for(j = 1 to n):
        q = -1
        for(i = 1 to j):  # solve sub-problems of increasing size
            q = max(q, p[i] + r[j-i])
        r[j] = q
    return r[n]
```
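In runnable Python, the bottom-up procedure might look like this (price table hypothetical):

```python
def bu_cut_rod(p, n):
    """Bottom-up rod cutting: solve sub-problems in increasing size."""
    r = [float("-inf")] * (n + 1)  # r[j]: max revenue for a rod of length j
    r[0] = 0
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            q = max(q, p[i] + r[j - i])  # r[j-i] is already computed
        r[j] = q
    return r[n]

p = [0, 1, 5, 8, 9, 10]  # hypothetical prices for lengths 0..5
print(bu_cut_rod(p, 5))  # → 13
```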
### Final Time Complexity
**Final Time Complexity**: $T(n) = \sum_{j=1}^{n}j \cdot \Theta(1)= \Theta(\frac{n(n+1)}{2}) = \Theta(n^{2})$
This version is better in practice since it avoids the overhead of recursive calls.
## Step 5 - pt.4 Bottom-Up Optimized Optimal Solution
Here is an extended version of `BU_Cut_Rod` that computes, for each rod size $j$:
* Not only the maximum revenue $r[j]$
    * Vector $r[0 \ldots n]$ containing the revenues for every length $0 \leq i \leq n$
* But also $s[j]$, the optimal size of the first piece to cut off:
    * Vector $s[1 \ldots n]$ containing the positions of the optimal cuts
```python
BU_Cut_Rod2(p, n)
    r[0...n], s[1...n]
    for(i = 1 to n):  # initialization
        r[i] = -1
    r[0] = 0
    for(j = 1 to n):  # we compute from smaller problems
        q = -1  # max revenue so far
        for(i = 1 to j):
            if(q < p[i] + r[j-i]):  # if the max improves
                q = p[i] + r[j-i]  # new max
                s[j] = i  # where to cut for optimality
        r[j] = q
    return (r, s)
```
```python
Print_Cut_Rod_Sol(p, n)
    r, s = BU_Cut_Rod2(p, n)  # unpacking
    while(n > 0):
        print(s[n])
        n -= s[n]
```
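A runnable Python sketch of the extended version; unlike the pseudocode, it returns the list of optimal piece lengths instead of printing them (price table hypothetical):

```python
def bu_cut_rod2(p, n):
    """Bottom-up rod cutting that also records the first optimal cut s[j]."""
    r = [float("-inf")] * (n + 1)
    s = [0] * (n + 1)  # s[j]: optimal size of the first piece for length j
    r[0] = 0
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:  # the max improves
                q = p[i] + r[j - i]
                s[j] = i  # remember where to cut
        r[j] = q
    return r, s

def cut_rod_solution(p, n):
    """Return the list of optimal piece lengths for a rod of length n."""
    _, s = bu_cut_rod2(p, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return pieces

p = [0, 1, 5, 8, 9, 10]  # hypothetical prices for lengths 0..5
print(cut_rod_solution(p, 4))  # → [2, 2]
```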
### Final Time Complexity
**Final Time Complexity**: $T(n) = \Theta(n^{2}) + \mathcal{O}(n) = \Theta(n^{2})$