# LCS
A strand of DNA consists of a string of molecules called bases, where the possible bases are adenine, guanine, cytosine, and thymine.
Representing each of these bases by its initial letter, we can express a strand of DNA as a string over the finite set $\{A, C, G, T\}$.
One reason to compare two strands of DNA is to determine how “similar” the two strands are, as some measure of how closely related the two organisms are.
## String Similarity
We can, and do, define similarity in many ways. For example, we can say that two DNA strands $S_1$ and $S_2$ are similar:
- If one is a substring of the other.
- Alternatively, we could say that two strands are similar if the number of changes needed to turn one into the other is small.
- Yet another way to measure the similarity of $S_1$ and $S_2$ is by finding a third strand $S_3$ in which the bases of $S_3$ appear in each of $S_1$ and $S_2$.
  - These bases must appear in the same order, but not necessarily consecutively.
  - The longer the strand $S_3$ we can find, the more similar $S_1$ and $S_2$ are.

Example: for $S_1 = \text{ACCG}$ and $S_2 = \text{ACTG}$, the strand $S_3 = \text{ACG}$ appears in both, and no longer strand does.
## Subsequence
We formalize this last notion of similarity as the longest-common-subsequence problem. A subsequence of a given sequence is just the given sequence with zero or more elements left out.
Formally, given a sequence $X = \langle x_1, x_2, \ldots, x_m \rangle$, another sequence $Z = \langle z_1, z_2, \ldots, z_k \rangle$ is a subsequence of $X$ if there exists a strictly increasing sequence $\langle i_1, i_2, \ldots, i_k \rangle$ of indices of $X$ such that for all $j = 1, 2, \ldots, k$, we have $x_{i_j} = z_j$.

Example: $Z = \langle B, C, D, B \rangle$ is a subsequence of $X = \langle A, B, C, B, D, A, B \rangle$, with corresponding index sequence $\langle 2, 3, 5, 7 \rangle$.
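The definition above can be checked mechanically. A minimal sketch in Python (the helper name `is_subsequence` is ours, not from the text):

```python
# Hypothetical helper: scan x left to right, matching characters of z in order.
def is_subsequence(z, x):
    it = iter(x)
    # `ch in it` advances the iterator past the first match, so the matches
    # are forced to occur at strictly increasing indices of x.
    return all(ch in it for ch in z)

print(is_subsequence("BCDB", "ABCBDAB"))  # True, via indices 2, 3, 5, 7
print(is_subsequence("AXB", "ABCBDAB"))   # False
```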
## Common Subsequence
Given two sequences $X$ and $Y$, we say that a sequence $Z$ is a common subsequence of $X$ and $Y$ if $Z$ is a subsequence of both $X$ and $Y$.

Example: the sequence $\langle B, C, A \rangle$ is a common subsequence of $X = \langle A, B, C, B, D, A, B \rangle$ and $Y = \langle B, D, C, A, B, A \rangle$.
## Problem Statement - Longest Common Subsequence
In the longest-common-subsequence (LCS) problem, we are given two sequences $X = \langle x_1, \ldots, x_m \rangle$ and $Y = \langle y_1, \ldots, y_n \rangle$ as input, and wish to find a maximum-length common subsequence of $X$ and $Y$.
### Observation 1: Multiple possible LCS

If we define the set of all maximum-length common subsequences of $X$ and $Y$, it generally contains more than one element: for $X = \langle A, B, C, B, D, A, B \rangle$ and $Y = \langle B, D, C, A, B, A \rangle$, both $\langle B, C, B, A \rangle$ and $\langle B, D, A, B \rangle$ are common subsequences of maximum length $4$.
### Observation 2: Brute force is not an option

In a brute-force approach to solve the LCS problem:
- We would enumerate all subsequences of $X$,
- check each subsequence to see whether it is also a subsequence of $Y$,
- and keep track of the longest subsequence we find.

By generating all the possible subsequences of $X$ we obtain $2^m$ candidates (each of the $m$ elements is either kept or dropped), so this approach requires exponential time and is impractical for long sequences.
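To make the cost concrete, here is an illustrative sketch of that brute force (the function name is ours): it tries subsequences of $X$ from longest to shortest and returns the first that is also a subsequence of $Y$, examining up to $2^m$ candidates along the way.

```python
from itertools import combinations

# Illustrative brute force (names are ours): try subsequences of X from
# longest to shortest; return the first that is also a subsequence of Y.
def brute_force_lcs(X, Y):
    for r in range(len(X), 0, -1):                 # up to 2^m candidates overall
        for idx in combinations(range(len(X)), r):
            cand = "".join(X[i] for i in idx)
            it = iter(Y)
            if all(ch in it for ch in cand):       # subsequence test against Y
                return cand                        # first hit at length r is an LCS
    return ""

print(len(brute_force_lcs("ABCBDAB", "BDCABA")))  # 4
```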
## Step 1 - Characterizing the longest common subsequence
Remember, we can apply dynamic programming here only if we can express the solution in terms of a polynomial number of sub-problems. The LCS problem does have an optimal-substructure property, as the following theorem shows. As we shall see, the natural classes of subproblems correspond to pairs of "prefixes" of the two input sequences.
### Prefix
Given a sequence $X = \langle x_1, x_2, \ldots, x_m \rangle$, we define the $i$-th prefix of $X$, for $i = 0, 1, \ldots, m$, as the prefix of $X$ of length $i$: $X_i = \langle x_1, x_2, \ldots, x_i \rangle$.

Example: if $X = \langle A, B, C, B, D, A, B \rangle$, then $X_4 = \langle A, B, C, B \rangle$.
- And $X_0$ is the empty sequence.
- Also, when $i = m$, the prefix $X_m$ corresponds to the whole sequence.

In general, we will work with the prefixes $X_i$ and $Y_j$, where $0 \le i \le m$ and $0 \le j \le n$.
### Theorem 15.1: Optimal Substructure of an LCS

Let $X = \langle x_1, \ldots, x_m \rangle$ and $Y = \langle y_1, \ldots, y_n \rangle$ be sequences, and let $Z = \langle z_1, \ldots, z_k \rangle$ be any LCS of $X$ and $Y$.
1. If $x_m = y_n$, then $z_k = x_m = y_n$ and $Z_{k-1}$ is an LCS of $X_{m-1}$ and $Y_{n-1}$.
2. If $x_m \neq y_n$, then $z_k \neq x_m$ implies that $Z$ is an LCS of $X_{m-1}$ and $Y$.
3. If $x_m \neq y_n$, then $z_k \neq y_n$ implies that $Z$ is an LCS of $X$ and $Y_{n-1}$.
Let's rewrite it for analysis purposes. Let $Z = \langle z_1, \ldots, z_k \rangle$ be an LCS of $X$ and $Y$. We consider the last characters $x_m$ and $y_n$:
- If $x_m = y_n$, the last characters of $X$ and $Y$ coincide:
  - the last character of the LCS is that common character, $z_k = x_m = y_n$;
  - the prefix $Z_{k-1}$ of this common subsequence is an LCS of the prefixes $X_{m-1}$ and $Y_{n-1}$.
- If $x_m \neq y_n$, then:
  - if $z_k \neq x_m$, then $Z$ is an LCS of $X_{m-1}$ and $Y$;
  - if $z_k \neq y_n$, then $Z$ is an LCS of $X$ and $Y_{n-1}$.
### Demonstration (ad absurdum)

- **Part 1.1:** $z_k = x_m = y_n$.
  - Ad absurdum: if this were not true, we could build a longer sequence by appending $x_m = y_n$ to $Z$, obtaining $\langle z_1, \ldots, z_k, x_m \rangle$, which is still a common subsequence of $X$ and $Y$.
  - But this means there would be a common subsequence longer than $Z$, which is absurd because $|Z| = k$ is already the optimal length.
  - Then $z_k = x_m = y_n$ is true.
- **Part 1.2:** $Z_{k-1}$ is an LCS of $X_{m-1}$ and $Y_{n-1}$.
  - Ad absurdum: if this were not true, there would be some common subsequence $W$ of $X_{m-1}$ and $Y_{n-1}$ with $|W| > k - 1$.
  - Since we know $x_m = y_n$, if we concatenate $x_m$ to $W$ we get a common subsequence of $X$ and $Y$ (as $x_m$ is the last character of both).
  - But $|W| + 1 > k$, so we reach an absurd: $Z$ would not be an optimal solution anymore.
- **Part 2.1:** if $x_m \neq y_n$ and $z_k \neq x_m$, then $Z$ is an LCS of $X_{m-1}$ and $Y$.
  - Since $z_k \neq x_m$, $Z$ is a common subsequence of $X_{m-1}$ and $Y$. Ad absurdum, suppose there exists some common subsequence $W$ of $X_{m-1}$ and $Y$ with $|W| > k$; then $W$ would also be a common subsequence of $X$ and $Y$, which is not possible, as $Z$ is by definition an optimal solution of maximum length $k$.
- **Part 2.2:** symmetrically, if $x_m \neq y_n$ and $z_k \neq y_n$, then $Z$ is an LCS of $X$ and $Y_{n-1}$.
  - Ad absurdum, suppose there exists some common subsequence $W$ of $X$ and $Y_{n-1}$ with $|W| > k$; it is not possible, as $Z$ is by definition an optimal solution of maximum length $k$.
### Conclusion

To sum up:
- Thanks to this theorem we managed to express the LCS in terms of sub-problems; now we have a polynomial way to construct our solution.
- The way that Theorem 15.1 characterizes longest common subsequences tells us that an LCS of two sequences contains within it an LCS of prefixes of the two sequences.
- This means we can start from prefixes of length $0$ and then proceed towards $m$ and $n$ (the last indices).
- Thus, the LCS problem has an optimal-substructure property. A recursive solution also has the overlapping sub-problems property, as we shall see in a moment.
## Step 2 - Recursive Solution

A recursive solution:
- If $x_m = y_n$:
  - find an LCS of $X_{m-1}$ and $Y_{n-1}$;
  - appending $x_m = y_n$ to this LCS yields an LCS of $X$ and $Y$.
- Else, we must solve two subproblems:
  - finding an LCS of $X_{m-1}$ and $Y$;
  - finding an LCS of $X$ and $Y_{n-1}$;
  - the longer of the two is an LCS of $X$ and $Y$, and this exhausts all possibilities recursively.
Our recursive solution to the LCS problem involves establishing a recurrence for the value of an optimal solution. Let us define $c[i,j]$ as the length of an LCS of the prefixes $X_i$ and $Y_j$. We ruled out some sub-problems due to how we defined the problem and the possible solutions. We now have:

$$
c[i,j] =
\begin{cases}
0 & \text{if } i = 0 \text{ or } j = 0, \\
c[i-1, j-1] + 1 & \text{if } i, j > 0 \text{ and } x_i = y_j, \\
\max(c[i, j-1],\, c[i-1, j]) & \text{if } i, j > 0 \text{ and } x_i \neq y_j.
\end{cases}
$$
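The recurrence can be transcribed directly into code; without memoization it runs in exponential time, which Steps 3 and 4 fix. A sketch (Python strings are 0-indexed, so $x_i$ becomes `X[i-1]`):

```python
# Direct transcription of the recurrence: c(i, j) = LCS length of X_i and Y_j.
# Exponential without memoization -- for illustration only.
def c(X, Y, i, j):
    if i == 0 or j == 0:                  # base case: an empty prefix
        return 0
    if X[i-1] == Y[j-1]:                  # x_i == y_j
        return c(X, Y, i-1, j-1) + 1
    return max(c(X, Y, i-1, j), c(X, Y, i, j-1))

print(c("ABCBDAB", "BDCABA", 7, 6))  # 4
```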
## Step 3 & 4 - Bottom Up
Computing the length of an LCS:

| Operation | BU_LCS(X, Y) -> Pair(b, c) |
|---|---|
| Input | Sequences $X = \langle x_1, \ldots, x_m \rangle$ and $Y = \langle y_1, \ldots, y_n \rangle$ |
| Output | Tables $b$ and $c$ |
| $c$ | A 2D table that saves the lengths of the LCSs of the prefixes: $c[i,j]$ is the length of an LCS of $X_i$ and $Y_j$ |
| $b$ | A 2D table that helps us construct an optimal solution: $b[i,j]$ points to the table entry corresponding to the optimal sub-problem solution chosen when computing $c[i,j]$ |

```python
BU_LCS(X, Y)
    m = X.length
    n = Y.length
    c[0..m, 0..n]                          # lengths of LCSs of prefixes
    b[1..m, 1..n]                          # arrows for reconstructing an LCS
    for (i = 0 to m):                      # when j = 0, the LCS is empty
        c[i,0] = 0
    for (j = 1 to n):                      # when i = 0, the LCS is empty
        c[0,j] = 0
    for (i = 1 to m):
        for (j = 1 to n):
            if (X[i] == Y[j]):                   # CASE 1: last characters match
                c[i,j] = c[i-1,j-1] + 1
                b[i,j] = ↖
            else if (c[i-1,j] >= c[i,j-1]):      # CASE 2: drop x_i
                c[i,j] = c[i-1,j]
                b[i,j] = ↑
            else:                                # CASE 3: drop y_j
                c[i,j] = c[i,j-1]
                b[i,j] = ←
    return b, c
```
**Final Time Complexity** $T(n)= \Theta(m) + \Theta(n) + \Theta(n \cdot m) = \Theta(n \cdot m)$
* Polynomial
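The pseudocode above translates almost line for line into runnable Python. A sketch (arrows become string tags, and the 1-indexed $x_i$ becomes `X[i-1]`):

```python
def bu_lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]     # c[i][j]: LCS length of X_i, Y_j
    b = [[None] * (n + 1) for _ in range(m + 1)]  # arrows for reconstruction
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i-1] == Y[j-1]:                  # CASE 1: last characters match
                c[i][j] = c[i-1][j-1] + 1
                b[i][j] = "↖"
            elif c[i-1][j] >= c[i][j-1]:          # CASE 2: drop x_i
                c[i][j] = c[i-1][j]
                b[i][j] = "↑"
            else:                                 # CASE 3: drop y_j
                c[i][j] = c[i][j-1]
                b[i][j] = "←"
    return b, c

b, c = bu_lcs("ABCBDAB", "BDCABA")
print(c[7][6])  # 4
```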

Now that we have computed the length of an LCS, we want to display which one it could be!
### Printing
Constructing an LCS
Let's print an LCS!
* We start from the $(i,j)$ position and decrease either $i$ or $j$
* We only print if there's an oblique arrow.
* Since the recursive call happens before the print, we get to the top from the bottom
and only print at the very end.
```python
printLCSAux(X, b, i, j)
    if (i > 0 && j > 0):                       # if not an empty prefix
        if (b[i,j] == ↖):                      # if we have a common char
            printLCSAux(X, b, i - 1, j - 1)    # first we deal with the subproblem
            print(X[i])
        else if (b[i,j] == ↑):                 # if we do NOT have a common char
            printLCSAux(X, b, i - 1, j)
        else:
            printLCSAux(X, b, i, j - 1)
```
**Final Time Complexity** $T(n)= \mathcal{O}(i+j)$
* At every function call, we decrease either one of the two parameters.
```python
printLCS(X, Y)
    b, c = BU_LCS(X, Y)
    printLCSAux(X, b, X.length, Y.length)
```
**Final Time Complexity** $T(n)= \Theta(n \cdot m) + \Theta(n + m)= \Theta(n \cdot m)$
* We first need to build the whole tables ($\Theta(n \cdot m)$), then walk back through $b$ to print the LCS ($\Theta(n + m)$).
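Putting the two procedures together in runnable Python (a sketch: we collect characters in a list instead of printing one by one, but the recursion order is the same):

```python
def BU_LCS(X, Y):                      # as in the bottom-up section
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i-1] == Y[j-1]:
                c[i][j], b[i][j] = c[i-1][j-1] + 1, "↖"
            elif c[i-1][j] >= c[i][j-1]:
                c[i][j], b[i][j] = c[i-1][j], "↑"
            else:
                c[i][j], b[i][j] = c[i][j-1], "←"
    return b, c

def printLCSAux(X, b, i, j, out):
    if i > 0 and j > 0:
        if b[i][j] == "↖":
            printLCSAux(X, b, i - 1, j - 1, out)   # subproblem first...
            out.append(X[i-1])                     # ...then emit the common char
        elif b[i][j] == "↑":
            printLCSAux(X, b, i - 1, j, out)
        else:
            printLCSAux(X, b, i, j - 1, out)

def printLCS(X, Y):
    b, _ = BU_LCS(X, Y)
    out = []
    printLCSAux(X, b, len(X), len(Y), out)
    return "".join(out)

print(printLCS("ABCBDAB", "BDCABA"))  # BCBA
```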
### Improve memory
We can reduce the memory usage through two different optimizations.
#### First Method
In the LCS algorithm, for example, we can eliminate the $b$ table altogether. Each $c[i,j]$ entry **depends on only three other $c$ table entries**:
1. $c[i-1,j-1]$
2. $c[i-1,j]$
3. $c[i,j-1]$

Given the value of $c[i,j]$, we can determine in $\mathcal{O}(1)$ time which of these three values was used to compute $c[i,j]$, without inspecting table $b$.
Thus, we can reconstruct an LCS in $\mathcal{O}(m+n)$ time using a procedure similar to `printLCS`. The order of the checks here matters a lot!
```python
printLCSAux(X, c, i, j)
    if (i > 0 && j > 0):
        if (c[i,j] == c[i - 1,j]):             # up keeps the same length: check it first
            printLCSAux(X, c, i - 1, j)
        else if (c[i,j] == c[i,j - 1]):        # then left
            printLCSAux(X, c, i, j - 1)
        else:                                  # otherwise x_i == y_j: print after recursing
            printLCSAux(X, c, i - 1, j - 1)
            print(X[i])
```
Although we save $\Theta(n \cdot m)$ space by this method, the auxiliary
space requirement for computing an LCS does not asymptotically decrease, since
we need $\Theta(n \cdot m)$ space for the $c$ table anyway.
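A runnable sketch of this method (the function name is ours): the walk mirrors the pseudocode above, checking up, then left, then diagonal, but reads only the $c$ table; it is written iteratively here, appending matches and reversing at the end.

```python
def lcs_from_c(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # length-only DP table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i-1] == Y[j-1]:
                c[i][j] = c[i-1][j-1] + 1
            else:
                c[i][j] = max(c[i-1][j], c[i][j-1])
    out = []
    i, j = m, n
    while i > 0 and j > 0:                      # iterative walk, same check order
        if c[i][j] == c[i-1][j]:                # up first,
            i -= 1
        elif c[i][j] == c[i][j-1]:              # then left,
            j -= 1
        else:                                   # else the match is here: emit, go diagonal
            out.append(X[i-1])
            i, j = i - 1, j - 1
    return "".join(reversed(out))

print(lcs_from_c("ABCBDAB", "BDCABA"))  # BCBA
```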
#### Second Method
We can, however, reduce the asymptotic space requirements for `LCS_Length`,
since it needs only two rows of table $c$ at a time:
* The row being computed,
* and the previous row
This improvement works if **we need only the length of an LCS**; if we need to reconstruct
the elements of an LCS, the smaller table does not keep enough information to
retrace our steps in $\mathcal{O}(m+n)$ time while using only $\mathcal{O}(n)$ space.
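A sketch of this two-row variant (names are ours): it returns only the length, keeping just the previous row and the row being computed, so the extra space is $\Theta(n)$.

```python
def lcs_length_two_rows(X, Y):
    m, n = len(X), len(Y)
    prev = [0] * (n + 1)                 # row i-1 of the c table
    for i in range(1, m + 1):
        curr = [0] * (n + 1)             # row i being computed
        for j in range(1, n + 1):
            if X[i-1] == Y[j-1]:
                curr[j] = prev[j-1] + 1
            else:
                curr[j] = max(prev[j], curr[j-1])
        prev = curr                      # slide the window down one row
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # 4
```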
## Step 3 & 4 - Top Down
```python
TD_LCSAux(X, Y, c, i, j)
    if (c[i,j] == -1):                         # problem not solved yet
        if (i == 0 || j == 0):
            c[i,j] = 0
        else if (X[i] == Y[j]):
            c[i,j] = TD_LCSAux(X, Y, c, i - 1, j - 1) + 1
        else:
            c[i,j] = max(TD_LCSAux(X, Y, c, i - 1, j),
                         TD_LCSAux(X, Y, c, i, j - 1))
    return c[i,j]
```
**Final Time Complexity** $T(n)= \mathcal{O}(n \cdot m)$
* This is directly proportional to the possible sub-problems
```python
TD_LCS(X, Y)
    m = X.length
    n = Y.length
    c[0..m, 0..n] = -1                         # initialized with all elements equal to -1
    return TD_LCSAux(X, Y, c, m, n)
```
**Final Time Complexity** $T(n)= \mathcal{O}(n \cdot m)$
* In the worst case we solve every sub-problem once. If the two strings are identical, the recursion only follows the diagonal, so we are in $\mathcal{O}(n)$ rather than $\mathcal{O}(n^{2})$.
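The same top-down scheme in runnable Python (a sketch: the memo table is closed over by the auxiliary function rather than passed explicitly):

```python
def td_lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[-1] * (n + 1) for _ in range(m + 1)]   # -1 marks "not solved yet"

    def aux(i, j):
        if c[i][j] == -1:                        # solve each sub-problem at most once
            if i == 0 or j == 0:
                c[i][j] = 0
            elif X[i-1] == Y[j-1]:
                c[i][j] = aux(i - 1, j - 1) + 1
            else:
                c[i][j] = max(aux(i - 1, j), aux(i, j - 1))
        return c[i][j]

    return aux(m, n)

print(td_lcs("ABCBDAB", "BDCABA"))  # 4
```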