Dynamic Programming Algorithms

Dynamic Programming (DP) is a method for solving complex problems by breaking them down into smaller subproblems and solving each subproblem just once. It applies when the solution to a problem can be built up from solutions to subproblems, and the same subproblems recur many times (they overlap), so their results are worth storing and reusing.

DP is typically used to reduce time complexity, usually at the cost of extra memory for the stored results. It is also well suited to optimization problems that have many candidate solutions but require a single optimal one.


Dynamic Programming Example

A classic example of a problem that can be solved using DP is the Knapsack problem. In this problem, you have a set of items, each with a weight and a value, and a knapsack that can carry a maximum weight. The goal is to find the subset of items that maximizes the total value while staying within the weight limit of the knapsack.

To solve this problem using DP, you would break it down into subproblems by considering each item in turn. For each remaining capacity, you would compute the maximum value achievable using only the items considered so far, and reuse those results when deciding whether to include the next item. By building up the solution in this way, you avoid recomputing the same subproblems multiple times.
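As a concrete illustration, here is a minimal bottom-up sketch of the 0/1 variant in C; the function name knapsack and the item weights and values in main are our own example data, not part of the problem statement. The entry dp[w] records the best total value achievable with capacity w using only the items considered so far.

#include<stdio.h>

/* Returns the larger of two values. */
int max(int a, int b) { return a > b ? a : b; }

/* Bottom-up 0/1 knapsack over n items. */
int knapsack(int capacity, const int weight[], const int value[], int n) {
    int dp[capacity + 1];
    for (int w = 0; w <= capacity; w++)
        dp[w] = 0;                      /* no items considered yet */

    for (int i = 0; i < n; i++) {
        /* Scan capacities downward so each item is taken at most once. */
        for (int w = capacity; w >= weight[i]; w--)
            dp[w] = max(dp[w], dp[w - weight[i]] + value[i]);
    }
    return dp[capacity];
}

int main() {
    int weight[] = {1, 3, 4, 5};        /* arbitrary example data */
    int value[]  = {1, 4, 5, 7};
    printf("Best value: %d\n", knapsack(7, weight, value, 4));  /* prints 9 */
    return 0;
}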


How Dynamic Programming Works

Dynamic Programming works by solving subproblems in a recursive manner, and then using memoization to store the results of these subproblems so that they can be reused later. The general steps for solving a DP problem are as follows:

  1. Identify the subproblems that need to be solved.
  2. Define a recursive function to solve each subproblem.
  3. Use memoization to store the results of each subproblem so that they can be reused later.
  4. Build up the solution to the main problem by combining the results of the subproblems.


Fibonacci Series Problem Using Dynamic Programming

The Fibonacci series is a classic example of a problem that can be solved using DP. The Fibonacci sequence is defined as follows:

fib(n) = fib(n-1) + fib(n-2)
where
fib(0) = 0
fib(1) = 1

The nth term of the Fibonacci series is obtained by adding its two preceding terms, the (n-1)th and the (n-2)th. To solve this problem using DP, we can use memoization to store the results of the subproblems. Here is the step-by-step dynamic programming algorithm.

  • Step 1: Initialize an array to store the results of the subproblems.

  • Step 2: Define a recursive function to compute the nth Fibonacci number.

  • Step 3: Check if the nth Fibonacci number has already been computed and stored in the memoization array. If it has, return the stored value.

  • Step 4: If the nth Fibonacci number has not yet been computed, compute it recursively by calling the Fibonacci function for n-1 and n-2.

  • Step 5: Store the computed value in the memoization array.

  • Step 6: Return the computed value.
The following C program implements this memoized approach:

#include<stdio.h>
#define MAX 100

/* Memoization table: fib[i] caches fibonacci(i); -1 means "not computed". */
int fib[MAX];

int fibonacci(int n) {
    if (n <= 1) {
        return n;                      /* base cases: fib(0) = 0, fib(1) = 1 */
    } else if (fib[n] != -1) {
        return fib[n];                 /* reuse a previously stored result */
    } else {
        fib[n] = fibonacci(n-1) + fibonacci(n-2);  /* compute once, store */
        return fib[n];
    }
}

int main() {
    int n;
    printf("Enter the value of n: ");
    scanf("%d", &n);

    if (n < 0 || n >= MAX) {           /* keep the index inside the table */
        printf("n must be between 0 and %d\n", MAX - 1);
        return 1;
    }

    /* Mark every entry as "not computed yet". */
    for (int i = 0; i < MAX; i++) {
        fib[i] = -1;
    }

    printf("The %dth term of Fibonacci series is: %d\n",
        n, fibonacci(n));

    return 0;
}
Output
Enter the value of n: 5
The 5th term of Fibonacci series is: 5
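
The program above memoizes a top-down recursion. The same recurrence can also be evaluated bottom-up (often called tabulation), starting from the base cases and iterating upward. The sketch below is a minimal illustration of that style; the name fibonacci_bottom_up is ours, and because only the two most recent terms are kept, it uses O(1) extra space instead of a full table.

#include<stdio.h>

/* Bottom-up (tabulated) Fibonacci: iterate from the base cases,
   keeping only the two most recent terms of the series. */
int fibonacci_bottom_up(int n) {
    if (n <= 1)
        return n;
    int prev = 0, curr = 1;            /* fib(0) and fib(1) */
    for (int i = 2; i <= n; i++) {
        int next = prev + curr;        /* fib(i) = fib(i-1) + fib(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}

int main() {
    printf("%d\n", fibonacci_bottom_up(5));   /* prints 5, matching the run above */
    return 0;
}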

Difference between Divide and Conquer vs Dynamic Programming

Divide and Conquer and Dynamic Programming both solve a problem by breaking it into smaller subproblems, but they differ in how those subproblems relate to each other.

  1. Divide and Conquer breaks a problem into smaller subproblems, solves each subproblem independently, and then combines the results to obtain the solution to the original problem. It is the natural choice when the subproblems are completely independent of each other.
  2. Dynamic Programming, on the other hand, breaks a problem into smaller subproblems, solves each subproblem only once, and uses memoization to store the results so that they can be reused later. It is the natural choice when the subproblems overlap, as the sketch below illustrates.
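
The overlap is easy to see in a naive divide-and-conquer version of Fibonacci (the helper name fib_naive is ours): the calls for n-1 and n-2 keep recomputing the same smaller terms, so the number of calls grows roughly like 2^n, whereas the memoized program earlier in this section does that work only once per term.

#include<stdio.h>

/* Naive divide-and-conquer Fibonacci: correct, but fib_naive(n-1) and
   fib_naive(n-2) recompute the same smaller terms again and again. */
int fib_naive(int n) {
    if (n <= 1)
        return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

int main() {
    printf("%d\n", fib_naive(5));   /* prints 5, but slows down sharply as n grows */
    return 0;
}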

Advantages and Disadvantages of Dynamic Programming

Advantages of Dynamic Programming

  • Optimal solutions: For problems with optimal substructure and a correctly formulated recurrence, Dynamic Programming is guaranteed to find an optimal solution, not merely a good approximation.

  • Time and Space Optimization: DP can optimize both time and space complexity by avoiding redundant computations through the use of memoization. This can result in significant performance improvements over brute-force or recursive approaches.

  • Applicable to a wide range of problems: DP can be applied to a wide range of problems, from simple to complex.

  • Easy to implement: DP is relatively easy to implement, especially compared to other optimization techniques such as linear programming.

Disadvantages of Dynamic Programming

  • Requires optimal substructure: Dynamic Programming requires that the problem has optimal substructure, meaning that the optimal solution to a problem can be built from optimal solutions to its subproblems.

  • May not be efficient for smaller problems: DP may not be the best approach for small problems, as the overhead associated with memoization can outweigh the benefits of avoiding redundant computations.

  • May be hard to conceptualize: Some problems may be difficult to conceptualize in terms of subproblems, making it difficult to apply DP.

  • May require significant memory usage: DP can require significant memory usage, especially when solving problems with large input sizes.

Common Examples of Dynamic Programming Algorithm

The examples below showcase the versatility of Dynamic Programming in solving a diverse range of optimization problems efficiently by breaking them down into smaller, overlapping subproblems and storing the solutions for reuse.

  • Fibonacci Sequence: Dynamic Programming can optimize the computation of Fibonacci numbers by storing previously calculated values to avoid redundant calculations.

  • Longest Common Subsequence: LCS is a classic problem of finding the longest subsequence common to two sequences; it is typically solved with dynamic programming for efficiency (a sketch appears at the end of this list).

  • Floyd-Warshall Algorithm: Dynamic Programming is used in algorithms like Floyd-Warshall to find the shortest paths between all pairs of vertices in a weighted graph.

  • Knapsack Problem: Dynamic Programming is applied to solve the 0/1 Knapsack problem, optimizing the selection of items to maximize the total value within a capacity constraint. (The Fractional Knapsack variant, by contrast, is usually solved with a greedy algorithm.)

  • Matrix Chain Multiplication: The problem involves finding the most efficient way to multiply a sequence of matrices, and dynamic programming can be employed to minimize the number of scalar multiplications.

  • Levenshtein Distance: Dynamic Programming is used to calculate the minimum number of operations (insertions, deletions, substitutions) required to transform one string into another.

  • Rod Cutting Problem: This problem involves cutting a rod into pieces to maximize the total value, and dynamic programming can be used to find the optimal cutting strategy.

  • Longest Increasing Subsequence: The LIS problem asks for the length of the longest subsequence of a given sequence such that elements are in increasing order. Dynamic programming provides an efficient solution.

  • Bellman-Ford Algorithm: Dynamic Programming, in the form of the Bellman-Ford algorithm, can handle graphs with negative edge weights, finding the shortest paths from a single source.
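
To make one of these concrete, here is a minimal C sketch of the Longest Common Subsequence computation mentioned above; the function name lcs_length and the strings in main are our own example data. The entry dp[i][j] holds the LCS length of the first i characters of a and the first j characters of b.

#include<stdio.h>
#include<string.h>

int max(int a, int b) { return a > b ? a : b; }

/* Length of the longest common subsequence of strings a and b. */
int lcs_length(const char *a, const char *b) {
    int m = strlen(a), n = strlen(b);
    int dp[m + 1][n + 1];

    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                dp[i][j] = 0;                     /* an empty prefix */
            else if (a[i - 1] == b[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1;  /* matching characters extend the LCS */
            else
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);
        }
    }
    return dp[m][n];
}

int main() {
    /* The longest common subsequence here is "ADH", so this prints 3. */
    printf("%d\n", lcs_length("ABCDGH", "AEDFHR"));
    return 0;
}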