AfterAcademy Tech
•
07 Jan 2020

The idea of this blog is to discuss an algorithm design technique called Dynamic Programming (DP). This is an important approach to problem-solving in computer science, where we use the idea of a time-memory trade-off to improve efficiency. Many tech companies like to ask DP questions in their interviews. In this blog, we will explore the core concepts related to DP.
Let’s first discuss the problem of finding the nth Fibonacci number.
Problem Statement: Find the nth Fibonacci number, where a term of the Fibonacci sequence is defined as
F(n) = F(n-1) + F(n-2), where F(0) = 0 and F(1) = 1
We can solve the problem recursively with the help of the above recurrence relation. Here we solve the problem of size n using the solutions of the sub-problems of size (n-1) and (n-2), where F(0) and F(1) are the base cases.
Pseudo Code
int fib(n)
{
    if(n == 0)
        return 0
    else if(n == 1)
        return 1
    else
        return fib(n-1) + fib(n-2)
}
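The pseudocode above translates almost line for line into a runnable language. Here is a minimal Python sketch (the function name fib is kept from the pseudocode):

```python
def fib(n):
    # Base cases: F(0) = 0, F(1) = 1
    if n == 0:
        return 0
    elif n == 1:
        return 1
    # Recurrence: F(n) = F(n-1) + F(n-2)
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```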
Complexity Analysis
The above code looks simple at first glance, but it is really inefficient. Let's analyse it and understand the reason for its inefficiency.
Here is the recursion tree diagram

In the recursion tree method, the total number of recursive calls = the total number of nodes in the recursion tree.
Similarly, if there are k levels in the recursion tree, then the total number of operations performed by the recursion = L1 + L2 + L3 + ... + Lk, where Li is the total number of operations at the ith level of the recursion tree.
If we observe closely, we can note the following:
Total number of recursive calls = 1 + 2 + 4 + 8 + ... + 2^(cn), where cn is the height of the recursion tree for some constant c (0 < c <= 1)
= (2^(cn+1) - 1) / (2 - 1) (using the summation formula of a geometric series)
= O(2^n)
Time Complexity = O(2^n). This is an inefficient algorithm because the time complexity grows exponentially with the input size n. It will take a very long time to produce output even for modest values of n like 30, 50 or 70.
Critical question: why is the time complexity of this recursive solution growing exponentially?
Let’s explore the reason with the help of the recursion tree for finding the 5th Fibonacci.

From the above diagram, we can see that we are solving the same sub-problems again and again during the recursion: Fib(3) is calculated 2 times, Fib(2) is calculated 3 times, and so on. If we further increase the value of n, the number of repeated sub-problems also increases. Because of this repeated computation of the same sub-problems, the time complexity is exponential in n. Note: there are only n+1 different sub-problems in the above recursion. (Think!)
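One way to see this repetition directly is to instrument the naive recursion with a counter. The following Python sketch (the calls counter is added purely for illustration) records how many times each sub-problem is solved:

```python
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1  # record every time the sub-problem F(n) is solved
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
# Fib(3) is solved 2 times, Fib(2) 3 times, Fib(1) 5 times:
print(dict(calls))  # {5: 1, 4: 1, 3: 2, 2: 3, 1: 5, 0: 3}
```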
Critical question: can we avoid the repeated computation and improve the efficiency of the solution?
Rather than computing the same sub-problem again and again, we can store its solution in some memory the first time we encounter it. When the same sub-problem appears again during the solution of another sub-problem, we return the already stored solution from memory. This is the wonderful idea of the Time-Memory Trade-Off, where we use extra space to improve the time complexity. (Think!)
Let's discuss the efficient solution using the dynamic programming approach. There are two different ways of solving dynamic programming problems:
Top-Down Approach (Memoization): solve the problem recursively, but store the solution of each sub-problem the first time it is computed and reuse it afterwards.
Bottom-Up Approach (Tabulation): solve the smallest sub-problems first and iteratively combine their solutions to build up to the larger problem.
Let's start with the top-down approach.
Solution Steps
1. Create a table F[N+1] to store the solutions of the sub-problems, initialized with negative values (a negative entry marks a sub-problem that has not been solved yet).
2. Before solving a sub-problem recursively, check the table: if its solution is already stored, return it; otherwise, compute it, store it in the table and return it.
Pseudo Code
Initialize an array F[N+1] with negative values.
int fib(N)
{
    if( F[N] < 0 )
    {
        if( N <= 1 )
            F[N] = N
        else
            F[N] = fib(N-1) + fib(N-2)
    }
    return F[N]
}
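The memoized solution can be sketched in Python as follows; here a dictionary plays the role of the F[] table, and a missing key plays the role of a negative entry:

```python
def fib(n, memo=None):
    # memo maps a sub-problem size to its already-computed answer
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]  # reuse the stored solution
    if n <= 1:
        memo[n] = n     # base cases: F(0) = 0, F(1) = 1
    else:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025
```

Note that fib(50) now returns instantly, whereas the naive exponential version would take an impractically long time for the same input.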
Complexity Analysis
The total number of recursive calls = 2N - 1, because we avoid recomputation and solve each of the N+1 sub-problems only once. (Think!)
Time Complexity = O(N).
Space Complexity: O(N) for an extra table to store the solution of the subproblems.
Critical observation
The recursion goes top-down from the larger problem of size n to the base cases of size 0 and 1, but it stores the results in a bottom-up manner: F[0] and F[1] are the first two values that get stored in the table.
Order of recursive calls: fib(n)-> fib(n-1)-> fib(n-2) ...-> fib(i)-> fib(i-1)-> fib(i-2)...-> fib(2)-> fib(1)-> fib(0)
Order of storing the results in the table: F[0] -> F[1] -> F[2] -> ... -> F[i-2] -> F[i-1] -> F[i] -> ... -> F[n-2] -> F[n-1] -> F[n]
If you draw the recursion tree for n=5, then it looks like this

Critical question: rather than using the complex recursive approach, can we use a simple iterative approach to fill the table? In other words, can we build the solution from the base cases of size 0 and 1 up to the larger problem of size n?
This is the bottom-up approach: we solve the smaller problems first and then combine their results to get the solution of the larger problem of size n.
Solution steps
1. Define Table Structure and Size: To store the solution of smaller sub-problems in a bottom-up approach, we need to define the table structure and its size.
2. Initialize the table with the base cases: We can initialize the table using the base cases: F[0] = 0 and F[1] = 1. This helps us fill the table and build solutions for the larger sub-problems.
3. Iterative Structure to fill the table: We should define the iterative structure to fill the table by using the recursive structure of the recursive solution.
Recursive Structure: fib(n) = fib(n-1) + fib(n-2)
Iterative structure: F[i] = F[i-1] + F[i-2]
4. Termination and returning the final solution: After filling the table in a bottom-up manner, the final solution is stored at the last index of the array, i.e. return F[N].
Pseudo Code
int F[N+1] // array (memory to store results)
int fib(int N)
{
    F[0] = 0 // base case
    F[1] = 1 // base case
    for( i = 2 to N )
        F[i] = F[i-1] + F[i-2]
    return F[N]
}
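The same bottom-up table can be written as a short Python sketch, following the four steps above:

```python
def fib(n):
    if n <= 1:
        return n
    f = [0] * (n + 1)  # step 1: table of size N+1
    f[0], f[1] = 0, 1  # step 2: base cases
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # step 3: iterative structure
    return f[n]         # step 4: final solution at the last index

print(fib(10))  # 55
```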
Complexity Analysis
We are using a table of size N+1 and running only one loop to fill the table. Time Complexity = O(N), Space Complexity = O(N)
Happy coding! Enjoy Algorithms.