## Follow these steps to solve any Dynamic Programming interview problem

by Nikola Otasevic

Despite having significant experience building software products, many engineers feel jittery at the thought of going through a coding interview that focuses on algorithms. I’ve interviewed hundreds of engineers at Refdash, Google, and at startups I’ve been a part of, and some of the most common questions that make engineers uneasy are the ones that involve Dynamic Programming (DP).

Many tech companies like to ask DP questions in their interviews. While we can debate whether they’re effective in evaluating someone’s ability to perform in an engineering role, DP continues to be an area that trips engineers up on their way to finding a job that they love.

## Dynamic Programming — Predictable and Preparable

One of the reasons why I personally believe that DP questions might not be the best way to test engineering ability is that they’re predictable and easy to pattern match. They allow us to filter much more for preparedness as opposed to engineering ability.

These questions typically seem pretty complex on the outside, and might give you an impression that a person who solves them is very good at algorithms. Conversely, people who may not be able to get over some mind-twisting concepts of DP might seem pretty weak in their knowledge of algorithms.

The reality is different, and the biggest factor in their performance is preparedness. So let’s make sure everyone is prepared for it. Once and for all.

## 7 Steps to solve a Dynamic Programming problem

In the rest of this post, I will go over a recipe that you can follow to figure out if a problem is a “DP problem”, as well as to figure out a solution to such a problem. Specifically, I will go through the following steps:

1) How to recognize a DP problem
2) Identify problem variables
3) Clearly express the recurrence relation
4) Identify the base cases
5) Decide if you want to implement it iteratively or recursively
6) Add memoization
7) Determine time complexity

## Sample DP Problem

For the purpose of having an example for abstractions that I am going to make, let me introduce a sample problem. In each of the sections, I will refer to the problem, but you could also read the sections independently of the problem.

## Problem statement:

In this problem, we’re on a crazy jumping ball, trying to stop, while avoiding spikes along the way.

## Here are the rules:

1) You’re given a flat runway with a bunch of spikes in it. The runway is represented by a boolean array which indicates if a particular (discrete) spot is clear of spikes. It is True for clear and False for not clear.

Example array representation (illustrative): `[True, True, True, True, False, True, True, False, True]`

2) You’re given a starting speed S. S is a non-negative integer at any given point, and it indicates how much you will move forward with the next jump.

3) Every time you land on a spot, you can adjust your speed by up to 1 unit before the next jump.

4) You want to safely stop anywhere along the runway (does not need to be at the end of the array). You stop when your speed becomes 0. However, if you land on a spike at any point, your crazy bouncing ball bursts and it’s game over.

The output of your function should be a boolean indicating whether we can safely stop anywhere along the runway.

## Step 1: How to recognize a Dynamic Programming problem

First, let’s make it clear that DP is essentially just an optimization technique. DP is a method for solving problems by breaking them down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. The next time the same subproblem occurs, instead of recomputing its solution, you simply look up the previously computed solution. This saves computation time at the expense of a (hopefully) modest expenditure in storage space.

Recognizing that a problem can be solved using DP is the first and often the most difficult step in solving it. What you want to ask yourself is whether your problem solution can be expressed as a function of solutions to similar smaller problems.

In the case of our example problem, given a point on the runway, a speed, and the runway ahead, we could determine the spots where we could potentially jump next. Furthermore, it seems that whether we can stop from the current point with the current speed depends only on whether we could stop from the point we choose to go to next.

That is a great thing, because by moving forward, we shorten the runway ahead and make our problem smaller. We should be able to repeat this process all the way until we get to a point where it is obvious whether we can stop.

Recognizing a Dynamic Programming problem is often the most difficult step in solving it. Can the problem solution be expressed as a function of solutions to similar smaller problems?

## Step 2: Identify problem variables

Now we have established that there is some recursive structure between our subproblems. Next, we need to express the problem in terms of the function parameters and see which of those parameters are changing.

Typically in interviews, you will have one or two changing parameters, but technically this could be any number. A classic example of a one-changing-parameter problem is “determine the n-th Fibonacci number”. An example of a two-changing-parameter problem is “compute the edit distance between two strings”. If you’re not familiar with these problems, don’t worry about it.

A way to determine the number of changing parameters is to list examples of several subproblems and compare the parameters. Counting the number of changing parameters is valuable to determine the number of subproblems we have to solve. It’s also important in its own right in helping us strengthen the understanding of the recurrence relation from step 1.

In our example, the two parameters that could change for every subproblem are:

• Speed (S)
• Position (P)

One could say that the runway ahead is changing as well, but that would be redundant considering that the entire non-changing runway and the position (P) carry that information already.

Now, with these 2 changing parameters and other static parameters, we have the complete description of our sub-problems.

Identify the changing parameters and determine the number of subproblems.

## Step 3: Clearly express the recurrence relation

This is an important step that many rush through in order to get into coding. Expressing the recurrence relation as clearly as possible will strengthen your problem understanding and make everything else significantly easier.

Once you figure out that the recurrence relation exists and you specify the problems in terms of parameters, this should come as a natural step. How do problems relate to each other? In other words, let’s assume that you have computed the subproblems. How would you compute the main problem?

Here is how we think about it in our sample problem:

Because you can adjust your speed by up to 1 before jumping to the next position, there are only 3 possible speeds, and therefore 3 spots in which we could be next.

More formally, if our speed is S and our position is P, we could go from (S, P) to:

• (S, P + S): keep the same speed
• (S - 1, P + S - 1): slow down by 1, then jump
• (S + 1, P + S + 1): speed up by 1, then jump

If we can find a way to stop in any of the subproblems above, then we can also stop from (S, P). This is because we can transition from (S, P) to any of the above three options.

This is typically a fine level of understanding of the problem (plain English explanation), but you sometimes might want to express the relation mathematically as well. Let’s call a function that we’re trying to compute canStop. Then:

canStop(S, P) = canStop(S, P + S) || canStop(S - 1, P + S - 1) || canStop(S + 1, P + S + 1)

Woohoo, it seems like we have our recurrence relation!

Recurrence relation: Assuming you have computed the subproblems, how would you compute the main problem?

## Step 4: Identify the base cases

A base case is a subproblem that doesn’t depend on any other subproblem. In order to find such subproblems, you typically want to try a few examples, see how your problem simplifies into smaller subproblems, and identify at what point it cannot be simplified further.

The reason a problem cannot be simplified further is that one of the parameters would become a value that is not possible given the constraints of the problem.

In our example problem, we have two changing parameters, S and P. Let’s think about what possible values of S and P might not be legal:

• P should be within the bounds of the given runway
• P should not be a spot with a spike (runway[P] should be True)
• S should not be negative (a speed of 0 means that we have stopped)

Sometimes it can be a little challenging to convert the assertions that we make about parameters into programmable base cases. This is because, if you want to make your code concise and avoid checking unnecessary conditions, you also need to think about which of these conditions are even possible.

In our example:

• If we land outside the runway or on a spike, we cannot stop: return False
• If we are on a clear spot with speed 0, we have safely stopped: return True

## Step 5: Decide if you want to implement it iteratively or recursively

The way we talked about the steps so far might lead you to think that we should implement the problem recursively. However, everything that we’ve talked about so far is completely agnostic to whether you decide to implement the problem recursively or iteratively. In both approaches, you would have to determine the recurrence relation and the base cases.

To decide whether to go iteratively or recursively, you want to carefully think about the trade-offs.

Stack overflow issues are typically a deal breaker and a reason why you would not want to have recursion in a (backend) production system. However, for the purposes of the interview, as long as you mention the trade-offs, you should typically be fine with either of the implementations. You should feel comfortable implementing both.

In our particular problem, I implemented both versions. Here is Python code for that. A recursive solution (original code snippets can be found here):
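The original snippet links no longer resolve, so here is a minimal recursive sketch following the recurrence and base cases above (the function name `can_stop` and its signature are my assumptions, not the article’s original code):

```python
def can_stop(runway, speed, position=0):
    """Can the ball safely reach speed 0 on a clear spot of the runway?

    runway: list of bools, True = clear, False = spike.
    """
    # Base case: off the runway or on a spike means we cannot stop from here.
    if position < 0 or position >= len(runway) or not runway[position]:
        return False
    # Base case: speed 0 on a clear spot means we have safely stopped.
    if speed == 0:
        return True
    # Recurrence: adjust speed by -1, 0, or +1, then jump that many spots.
    return (can_stop(runway, speed, position + speed)
            or can_stop(runway, speed - 1, position + speed - 1)
            or can_stop(runway, speed + 1, position + speed + 1))
```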

An iterative solution (original code snippets can be found here):
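Again, the original snippet is not reproduced here; a bottom-up sketch under the same assumptions might look like this. It caps the speed at the runway length, since any larger speed necessarily overshoots the runway:

```python
def can_stop_iterative(runway, init_speed, init_position=0):
    """table[pos][speed] answers: can we stop starting at pos with this speed?"""
    length = len(runway)
    max_speed = length  # any larger speed overshoots the runway on the next jump
    if not (0 <= init_position < length) or init_speed > max_speed:
        return False
    table = [[False] * (max_speed + 2) for _ in range(length)]
    # Fill from the rightmost position backwards: every subproblem we depend on
    # lies at a strictly larger position, which is already filled.
    for pos in range(length - 1, -1, -1):
        if not runway[pos]:
            continue  # a spike: every speed at this spot stays False
        table[pos][0] = True  # speed 0 on a clear spot: safely stopped
        for speed in range(1, max_speed + 1):
            for next_speed in (speed - 1, speed, speed + 1):
                if next_speed == 0:
                    # Adjusting the speed down to 0 means stopping right here.
                    table[pos][speed] = True
                elif pos + next_speed < length and table[pos + next_speed][next_speed]:
                    table[pos][speed] = True
    return table[init_position][init_speed]
```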

## Step 6: Add memoization

Memoization is a technique that is closely associated with DP. It is used for storing the results of expensive function calls and returning the cached result when the same inputs occur again.

Why are we adding memoization to our recursion? We encounter the same subproblems which, without memoization, are computed repeatedly. Those repetitions very often lead to exponential time complexities.

In recursive solutions, adding memoization should feel straightforward. Let’s see why. Remember that memoization is just a cache of the function results. There are times when you want to deviate from this definition in order to squeeze out some minor optimizations, but treating memoization as a function result cache is the most intuitive way to implement it.

This means that you should:

• Look up the memory for the function result before you start doing any other computation
• Store your function result into your memory before every return statement

Here is the code from above with added memoization (original code snippets can be found here):
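Since the snippet itself is missing from this copy, here is a sketch of the recursive solution with a dict used as a function-result cache (names assumed, as before):

```python
def can_stop_memo(runway, speed, position=0, memo=None):
    """Recursive solution with memoization, keyed by (speed, position)."""
    if memo is None:
        memo = {}
    # Base cases are unchanged from the plain recursion.
    if position < 0 or position >= len(runway) or not runway[position]:
        return False
    if speed == 0:
        return True
    key = (speed, position)
    if key not in memo:  # look up the cache before doing any other computation
        memo[key] = (can_stop_memo(runway, speed, position + speed, memo)
                     or can_stop_memo(runway, speed - 1, position + speed - 1, memo)
                     or can_stop_memo(runway, speed + 1, position + speed + 1, memo))
    return memo[key]  # every result is stored before being returned
```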

In order to illustrate the effectiveness of memoization and different approaches, let’s do some quick tests. I will stress test all three methods that we have seen so far. Here is the set up:

Here are the results (in seconds):

You can see that the pure recursive approach takes about 500x more time than the iterative approach and about 1300x more time than the recursive approach with memoization. Note that this discrepancy would grow rapidly with the length of the runway. I encourage you to try running it yourself.

## Step 7: Determine Time complexity

There are some simple rules that can make computing the time complexity of a dynamic programming problem much easier. Here are the two steps that you need to do:

• Count the number of states — this depends on the number of changing parameters in your problem
• Think about the work done per state — that is, assuming everything else has been computed, how much work does it take to compute one state?

In our example problem, the number of states is |P| * |S|, where:

• P is the set of all positions (|P| is the number of possible positions)
• S is the set of all speeds (|S| is the number of possible speeds)

The work done per each state is O(1) in this problem because, given all other states, we simply have to look at 3 subproblems to determine the resulting state.

As we noted in the code before, |S| is limited by the length of the runway (|P|), so we could say that the number of states is |P|², and because the work done per state is O(1), the total time complexity is O(|P|²).

However, it seems that |S| can be bounded more tightly: if the speed really were |P|, stopping would clearly be impossible, because you would have to jump the length of the entire runway on the first move.

So let’s see how we can put a tighter bound on |S|. Let’s call the maximum speed S. Assume that we’re starting from position 0. How quickly could we stop if we were trying to stop as soon as possible, ignoring potential spikes?

In the first jump, we would move at least (S-1) spots forward, by adjusting our speed down by 1. From there, we would move at least (S-2) spots forward, and so on.

For a runway of length L, the following has to hold:

=> (S-1) + (S-2) + (S-3) + ... + 1 < L

=> S*(S-1) / 2 < L

=> S² - S - 2L < 0

If you find the roots of the quadratic on the left-hand side, they will be:

r1 = 1/2 + sqrt(1/4 + 2L) and r2 = 1/2 - sqrt(1/4 + 2L)

We can write our inequality as:

(S - r1) * (S - r2) < 0

Considering that S - r2 > 0 for any S > 0 and L > 0, we need the following:

S - 1/2 - sqrt(1/4 + 2L) < 0

=> S < 1/2 + sqrt(1/4 + 2L)

That is the maximum speed that we could possibly have on a runway of a length L. If we had a speed higher than that, we could not stop even theoretically, irrespective of the position of the spikes.

That means that the total time complexity depends only on the length of the runway L in the following form:

O(L * sqrt(L)) which is better than O(L²)

O(L * sqrt(L)) is the upper bound on the time complexity

Awesome, you made it through! :)

The 7 steps that we went through should give you a framework for systematically solving any dynamic programming problem. I highly recommend practicing this approach on a few more problems to perfect it.

## Here are some next steps that you can take

When you feel like you’ve conquered these ideas, check out Refdash, where you are interviewed by a senior engineer and get detailed feedback on your coding, algorithms, and system design.

Originally published at the Refdash blog. Refdash is an interviewing platform that helps engineers interview anonymously with experienced engineers from top companies such as Google, Facebook, or Palantir, and get detailed feedback. Refdash also helps engineers discover amazing job opportunities based on their skills and interests.


## How to solve a Dynamic Programming Problem?

Dynamic Programming (DP) is a technique that solves certain types of problems in polynomial time. Dynamic Programming solutions are faster than the exponential brute-force method, and their correctness is easy to prove.

## Step 2: Deciding the state

Dynamic Programming problems are all about the state and its transition . This is the most basic step which must be done very carefully because the state transition depends on the choice of state definition you make.

A state can be defined as the set of parameters that can uniquely identify a certain position or standing in the given problem. This set of parameters should be as small as possible to reduce state space.

In our famous Knapsack problem, we define our state by two parameters, index and weight, i.e. DP[index][weight]. Here DP[index][weight] tells us the maximum profit we can make by taking items from the range 0 to index, given that the capacity of the sack is weight. Therefore, the parameters index and weight together uniquely identify a subproblem for the knapsack problem.
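As a sketch of that state definition (illustrative code, not GfG's exact implementation), the DP table can be filled row by row:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: dp[i][w] is the max profit using items 0..i-1
    with a sack capacity of w."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # option 1: skip item i-1
            if weights[i - 1] <= w:  # option 2: take item i-1, if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```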

The first step to solving a Dynamic Programming problem will be deciding on a state for the problem after identifying that the problem is a Dynamic Programming problem. As we know, Dynamic Programming is all about using calculated results to formulate the final result. So, our next step will be to find a relation between previous states to reach the current state.

## Step 3: Formulating a relation among the states

This part is the hardest part of solving a Dynamic Programming problem and requires a lot of intuition, observation, and practice.

Example:

Given 3 numbers {1, 3, 5}, The task is to tell the total number of ways we can form a number N using the sum of the given three numbers. (allowing repetitions and different arrangements).

The total number of ways to form 6 is 8:

• 1+1+1+1+1+1
• 1+1+1+3
• 1+1+3+1
• 1+3+1+1
• 3+1+1+1
• 3+3
• 1+5
• 5+1

The steps to solve the given problem will be:

How to Compute the state?

As we can only use 1, 3, or 5 to form a given number N, let us assume that we know the results for N = 1, 2, 3, 4, 5, 6, i.e. the results of state(n = 1), state(n = 2), ..., state(n = 6). Now, we wish to know the result of state(n = 7). Since we can only add 1, 3, or 5, we can reach a sum total of 7 in the following 3 ways:

1) Adding 1 to all possible combinations of state(n = 6)
Eg: [(1+1+1+1+1+1) + 1], [(1+1+1+3) + 1], [(1+1+3+1) + 1], [(1+3+1+1) + 1], [(3+1+1+1) + 1], [(3+3) + 1], [(1+5) + 1], [(5+1) + 1]

2) Adding 3 to all possible combinations of state(n = 4)
Eg: [(1+1+1+1) + 3], [(1+3) + 3], [(3+1) + 3]

3) Adding 5 to all possible combinations of state(n = 2)
Eg: [(1+1) + 5]

(Note how it is sufficient to add only on the right side: all the add-from-the-left cases are already covered, either in the same state or in another. For example, [1 + (1+1+1+3)] is not needed in state(n = 6) because it is covered by [(1+1+1+1) + 3] in state(n = 4).)

Now, think carefully and satisfy yourself that the above three cases cover all possible ways to form a sum total of 7. Therefore, we can say that:

state(7) = state(6) + state(4) + state(2)

OR

state(7) = state(7-1) + state(7-3) + state(7-5)

In general:

state(n) = state(n-1) + state(n-3) + state(n-5)

Below is the implementation of the above approach:
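The code itself is missing from this copy; a plain recursive implementation of state(n) = state(n-1) + state(n-3) + state(n-5) might look like:

```python
def count_ways(n):
    """Number of ordered ways to write n as a sum of 1s, 3s, and 5s."""
    if n < 0:
        return 0  # overshot: this branch contributes nothing
    if n == 0:
        return 1  # exactly reached n: one valid way (the empty remainder)
    # Last number added was 1, 3, or 5.
    return count_ways(n - 1) + count_ways(n - 3) + count_ways(n - 5)
```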

Time Complexity: O(3^N), as at every stage we need to take three decisions and the height of the recursion tree will be of the order of N. Auxiliary Space: O(N), the extra space used by the recursion call stack.

The above code is exponential because it keeps calculating the same states again and again. So, we just need to add memoization.

## Step 4: Adding memoization or tabulation for the state

This is the easiest part of a dynamic programming solution. We just need to store the state answer so that the next time that state is required, we can directly use it from our memory.

Adding memoization to the above code
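The memoized snippet is also missing here; a sketch that caches each state in a dict so it is computed only once:

```python
def count_ways_memo(n, memo=None):
    """Memoized version: each state n is computed at most once."""
    if memo is None:
        memo = {}
    if n < 0:
        return 0
    if n == 0:
        return 1
    if n not in memo:  # reuse the stored answer when the state repeats
        memo[n] = (count_ways_memo(n - 1, memo)
                   + count_ways_memo(n - 3, memo)
                   + count_ways_memo(n - 5, memo))
    return memo[n]
```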

Time Complexity: O(N), as we make at most 3N function calls and there are no repeated calculations, since previously computed results are returned from the cache. Auxiliary Space: O(N), used by the memoization table and the recursion call stack.

Another way is to add tabulation and make the solution iterative. Please refer to tabulation and memoization for more details. Dynamic Programming comes with lots of practice. One must try solving various classic DP problems that can be found here.
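For illustration, a tabulated (bottom-up) version of the same recurrence fills a table from the base case upward:

```python
def count_ways_tab(n):
    """Bottom-up version: table[i] holds the answer for state(i)."""
    table = [0] * (n + 1)
    table[0] = 1  # base case: one way to form 0 (the empty sum)
    for i in range(1, n + 1):
        for step in (1, 3, 5):  # last number added on the right
            if i - step >= 0:
                table[i] += table[i - step]
    return table[n]
```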

You can pick a few classic DP problems and try solving them using the above-described steps.




## How to Solve Any Dynamic Programming Problem

Updated on April 12th, 2020

If you’re aiming for a top-tier tech job, you have to face the coding interview —and come out on top. And one sure-fire way to impress is to have dynamic programming down pat.

In today’s special guest post, Sam Gavis-Hughson guides us through his formula for solving any dynamic programming problem.

Take it away, Sam!

Disclosure: I’m an affiliate for Sam's courses. While the resources mentioned in this post are free, I may get a small commission if you click the links below and later buy one of his products. Thanks!

When it comes to coding interviews , not all topics are created equal.

Some are relatively easy. For example, even the hardest linked list problems don’t tend to be that difficult because the concept is on the simpler side.

But then there are some topics where even the easiest variations strike fear into the hearts of interviewees everywhere.

One such topic is dynamic programming—an optimization technique programmers can use to speed up our code when we are repeatedly solving the same problem.

Dynamic programming has truly become the de facto hard topic that your interviewer can ask you. If they want to really put you through your paces, that’s what they’ll ask about.

This means that if you’re interviewing for any top tech company , dynamic programming should be at the top of your list of topics to prepare.

## What is Dynamic Programming?

Essentially, dynamic programming is a way of making a recursive algorithm more efficient by making sure it doesn’t have to solve the same subproblem twice.

When you use dynamic programming, it stores the results of subproblems so you don’t have to re-compute those results next time they’re needed. It’s an alternative to plain recursion, which requires repeating the solution process every time the subproblem is encountered.

This Stack Overflow answer words it well: “Dynamic programming is when you use past knowledge to make solving a future problem easier.”

Some benefits of dynamic programming are that it saves you coding time, reduces lines of code, and speeds up an algorithm’s processing time.

Now, here’s the crazy thing…

## Dynamic Programming Doesn’t Have to Be Hard

The real challenge with dynamic programming is that it is counterintuitive. When you’re trying to solve dynamic programming problems, all the obvious steps that you would normally take actually pull you further away from the correct solution:

• Want to find the optimal solution? You actually need to start with the brute force approach.
• Want to find an iterative solution? You have to start with recursion.
• Want to solve the problem as quickly as possible? You need to slow it down and go step by step.


So if dynamic programming is so counterintuitive, how are we ever supposed to solve these problems effectively?

To start, let’s look at how most people prepare for their coding interviews:

They focus on memorizing solutions.

Rather than being strategic and understanding the underlying techniques, most people focus on simply memorizing as many solutions as they can.

Memorizing gives you a quick and easy win. But the problem is that it ultimately handicaps you.

When you focus on memorizing, your interview prep strategy becomes very simple: just go through as many problems as you can. When you go into an interview, you hope that you see a problem that you’ve studied before.

But this approach quickly leads to diminishing returns. There’s only so much that you can actually memorize, and the number of problems that you could be asked is very large.

Not to mention that this approach prevents you from actually being able to connect the dots.

Imagine learning a new language (let’s say French). You decide that you are going to create a massive deck of flashcards and simply memorize individual words. This is your plan to get to fluency.

So you start memorizing. Here’s one sample set of words:

“suis”, “es”, “est”, “sommes”, “êtes”, “sont”

What is the connection between these words (if you already know French, pretend you don’t for a sec)?

On the surface, it’s not obvious. So if you were just memorizing, you would be memorizing 6 discrete words.

However, there actually is a very close connection between these words: They’re all different conjugations of the verb “to be.”

If we look at the translations, we see:

• “Je suis” – “I am”
• “Tu es” – “You are”
• “Elle est” – “She is”
• “Nous sommes” – “We are”
• “Vous êtes” – “You all are”
• “Ils sont” – “They are”

Notice how much easier this is now that we’ve connected them all in some way that is meaningful to us? Yes we still need to memorize the specifics, but now we can see what connects them.

So what if we could do the same thing with dynamic programming?

Well, it’s never going to happen if we just try to memorize solutions to different problems. The issue is that the similarity between these different problems ISN’T in the solution itself.

The similarity between all dynamic programming problems is in the process.

For the rest of this post, I’m going to show you the exact strategy that you can use to solve any dynamic programming problem, even if you’ve never seen the problem before.

The remainder of this post is excerpted from my free ebook, Dynamic Programming for Interviews, which you can download here .

## How to Solve Dynamic Programming Problems Using the Fast Method

What is the most important characteristic of any successful interviewee?

They have a repeatable strategy.

That’s exactly what the FAST Method is. It’s a repeatable strategy for solving any dynamic programming problem, whether you’ve seen the problem before or not.

The FAST Method is an acronym for the 4 steps you need to solve any dynamic programming problem:

• **F**ind the First Solution
• **A**nalyze the First Solution
• Identify the **S**ubproblems
• **T**urn around the solution

By following these 4 steps, you’ll be able to find the optimal dynamic programming solution for any problem.


## 1. Find the First Solution

The first step for any dynamic programming problem (and the step that most people skip) is to find an initial brute-force solution to the problem.

The goal here is to just get something down on paper without any concern for efficiency. To be honest, this should be the first step for any problem you might solve, but it is particularly important for dynamic programming.

There are several keys to think about while doing this step that you’ll want to keep in mind:

• The solution should be recursive. If not, the problem probably isn’t a good candidate for dynamic programming. For a lot more info on effectively coming up with a recursive solution, look here .
• Each recursive call must be self-contained. This means that you cannot store your results to a global variable or pass a results variable into your function. I often refer to the required approach as “building up as you return” and you can learn more about that here .
• It is important that we minimize the number of variables that we are passing into our recursive function. Dynamic programming solutions rely on there being multiple recursive calls with the same input, and the more variables there are, the less the inputs will overlap.

With these basic rules in mind, you will have no trouble using this brute force solution in the FAST Method.
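As an illustration of these rules (my own example, not from the ebook), here is a brute-force solution for the minimum number of coins needed to make a given amount. Each call is self-contained, builds up its result as it returns rather than writing to shared state, and only the `amount` parameter changes between calls:

```python
def min_coins(coins, amount):
    """Brute force: fewest coins from `coins` summing to `amount`."""
    if amount == 0:
        return 0  # base case: nothing left to make
    best = float('inf')
    for coin in coins:
        if coin <= amount:
            # Solve the smaller subproblem, then build up on the way back.
            sub = min_coins(coins, amount - coin)
            if sub + 1 < best:
                best = sub + 1
    return best
```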

Check out Dynamic Programming for Interviews for detailed walkthroughs of 5 of the most popular dynamic programming problems.

## 2. Analyze the First Solution

This is the step where we decide whether we can actually use dynamic programming to solve a problem. To do this, we’re going to look at a couple of specific things.

First off, we should be sure to determine what the actual time complexity of our code is currently. One mistake that I see fairly often is attempting to optimize something that doesn’t need to be optimized. If our solution is already efficient, spending a lot of time optimizing is a real waste of time.

With most of our recursive functions, we can use a pretty simple heuristic to compute the runtime. We simply look at the branching factor of our recursive function raised to the depth.

For example, if we were finding all combinations of an input, that would give us a time complexity of `O(2^n)`. You can read a lot more about this here.

Next up, if our solution is in fact inefficient (we’re most likely looking for something that is exponential time or worse as being inefficient), we want to see if we can optimize it using dynamic programming.

Dynamic programming actually requires us to meet 2 specific criteria. If we don’t, then it is not possible for us to optimize our problem using dynamic programming. However, if we do, we will likely see a major improvement.

These are the criteria that we need to look for:

## Optimal Substructure

The first criterion is that our problem must have optimal substructure. Technically speaking, this means that we must be able to find an optimal solution to a problem by solving for its subproblems. An easier way to think about this is simply that we must be able to solve the problem recursively. If we already found a recursive problem in the previous step, then we can safely assume we have optimal substructure.

## Overlapping Subproblems

This is where the actual optimization comes in. If our problem has overlapping subproblems, that means that we are calling the same function with the exact same inputs multiple times. So we’re doing repetitive work for no reason. If we were to cache (or “memoize”) the results, we would be able to save a lot of time.

If we meet these two criteria, then we know that we can optimize our solution using dynamic programming.

## 3. Identify the Subproblems

If we find that we are able to use dynamic programming, the next step is to clearly identify the subproblems. Defining the subproblem in plain English is going to make it much easier for us to understand everything that is going on.

To approach this step, we are going to specifically look at our recursive code and see what recursive calls are being made. We then want to define in plain English what the actual meaning of that recursive call is.

For example, if we are computing the nth Fibonacci number, we have 2 recursive calls, fib(n-1) and fib(n-2). In plain English, each of these calls simply returns the (n-1)th or (n-2)th Fibonacci number. Pretty simple.

Once we’ve identified what the subproblems are, we can also memoize our recursive solution to make it more efficient. All this means is that we are going to add an array or HashMap that stores all of the values that we’ve previously computed.

Each time we make a function call, we will look in our array to see if a result has already been computed for the current inputs. If so, then we can return it without actually computing anything. If not, we compute it and then save it into our array.
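Sketching that pattern with the Fibonacci example above (a dict plays the role of the array or HashMap; illustrative code, not the ebook’s):

```python
def fib(n, memo=None):
    """Memoized nth Fibonacci number (0-indexed: fib(0)=0, fib(1)=1)."""
    if memo is None:
        memo = {}
    if n < 2:
        return n  # base cases
    if n not in memo:  # look up before computing anything
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)  # compute once, then save
    return memo[n]
```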

For full code and example problems, download Dynamic Programming for Interviews .

## 4. Turn Around the Solution

If you’ve ever spent any serious time studying dynamic programming solutions in the past, you may have noticed that the vast majority of them are iterative, not recursive. And yet up to this point in the FAST Method, we have only generated recursive solutions.

In this optional final step of the process, we will take our recursive solution and make it into an iterative solution.

When doing dynamic programming, we really have two different options: a top-down (recursive) solution or a bottom-up (iterative) solution.

A top-down solution is the recursive solution that we found in the previous step. We call this top-down because we are starting with the goal result that we’re trying to get (i.e. fib(5)) and then repeatedly split it into smaller subproblems until we reach our base case.

Bottom-up is simply the opposite of that. Rather than starting with our target input, we start with the base cases. We iteratively compute larger and larger subproblems until we reach our target result.

Both of these approaches will give us the same worst-case complexity. However, many people prefer the bottom-up approach because recursive code tends to execute more slowly than iterative code that does the same work, given the additional function-call overhead.

The key to turning around the solution and finding a bottom-up solution is to look at what the smallest subproblems are. This is why we needed to carefully identify and define the subproblems in the previous step.

We can start with computing our base case. Then we need to determine how to compute a given subproblem, assuming all the smaller subproblems have already been computed. We’ll save all of these subproblem solutions into an array so that we can easily look them up.
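Continuing the Fibonacci example, a bottom-up version might look like this (illustrative, not the ebook’s code): start with the base cases and fill an array of subproblem solutions upward.

```python
def fib_bottom_up(n):
    """Bottom-up nth Fibonacci number (0-indexed)."""
    if n < 2:
        return n  # base cases: fib(0)=0, fib(1)=1
    table = [0] * (n + 1)
    table[1] = 1
    # Each subproblem depends only on the two smaller ones already computed.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```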

See examples of exactly how to do this in my free ebook, Dynamic Programming for Interviews .

At the end of the day, dynamic programming is a challenging topic. It’s not something that you can magically become a master at overnight.

However, it also isn’t something you have to be afraid of. If you nail down your recursion skills and understand the FAST Method, even the most challenging dynamic programming problems can be easily solved during your interview.

Sam Gavis-Hughson, founder of Byte by Byte, helps software engineers successfully interview for jobs at top tech companies. He is the author of Dynamic Programming for Interviews, the ebook that shows anyone how to succeed at dynamic programming interviews.


## Dynamic Programming: An Approach to Solving Computing Problems

Sometimes in computer science, you will run into problems. You can divide these into subproblems, which can, in turn, be divided into smaller subproblems. If the smaller subproblems overlap, then you can save the result in memory for future reference. This way, you don’t need to compute the same result multiple times, thus increasing the efficiency of the program substantially. This way of solving these problems is referred to as dynamic programming.

In this article, you will learn what dynamic programming is. I will also show how to compute Fibonacci numbers, which is a simple problem that dynamic programming can solve. I will compare the dynamic programming solutions to the naive solution that uses recursion. These examples are written in Python syntax. Finally, I'll also give some general pointers to keep in mind when attempting to solve problems using dynamic programming.


## What Types of Problems Can Dynamic Programming Solve?

Dynamic programming is typically a way to optimize solutions to certain problems that use recursion. If a recursive solution to a problem has to compute solutions for subproblems with the same inputs repeatedly, then you can optimize it through dynamic programming. As mentioned earlier, in this case you would simply save the result of the computation for use later, if and when it's needed. This optimization can reduce the time complexity of an algorithm from exponential to polynomial time. This means that the number of computations scales like a polynomial expression in the input size n, instead of scaling exponentially as n increases. In general, polynomial expressions grow much slower than exponential expressions.

There are two conditions that need to be satisfied to use dynamic programming:

- The problem has overlapping subproblems
- The problem has optimal substructure

## What Are Overlapping Subproblems?

I alluded to the overlapping subproblems condition earlier. It simply means that when solving the problem in question, the solutions for the same subproblems are repeatedly necessary. In this case, the solutions to these subproblems can be stored for later use to skip recomputation.

Another way to think about this condition is to turn it upside down. If there are no overlapping subproblems, then there’s no point in saving the solutions for the subproblems, and you can’t use dynamic programming.

There are two different ways of storing the solutions to the overlapping subproblems:

- Memoization (top-down)
- Tabulation (bottom-up)

## What Is Memoization?

The memoization approach to dynamic programming is very similar to the naive recursion approach, with only a small change. The difference is that we use a lookup table to store solutions to the subproblems, and then use this table to check whether that solution already exists.

If the solution for a certain subproblem already exists in the lookup table, then that solution is returned from the lookup table. If it does not, then it needs to be computed (through recursion) and added to the lookup table.

For the sake of clarity, let's define the solution for a subproblem in our dynamic programming problem to be DP[X], with DP[N] being the desired solution and DP[0] being the base solution. In the memoization approach, the program starts from DP[N] and asks for the solutions from which DP[N] can be reached (these should be subproblems of lower order, DP[n < N]). Then, from these states, the same process is repeated recursively until reaching the base solution DP[0].

If this feels a little too abstract, don’t worry. The examples introduced later in this article should clarify what I mean.

Memoization is known as a top-down approach to dynamic programming since the program will start from the desired, last (top) state and work its way down recursively.

## What Is Tabulation?

The tabulation approach to dynamic programming works in a reverse manner compared to the memoization approach. The program will start from the base (or bottom) solution for the subproblem and work its way up, solving the subproblems one by one until reaching the desired solution.

In terms of solutions to subproblems, the tabulation approach starts from the base solution DP[0] and then calculates DP[1], DP[2], …, DP[N] until reaching the desired solution DP[N]. Since we started from the base solution DP[0] and worked towards the desired solution DP[N], the tabulation approach is also known as a bottom-up approach.

Again, the examples below should make this easier to understand.

## What Is Optimal Substructure Property?

If the optimal solution to a problem can be obtained using the optimal solutions to its subproblems, then the problem is said to have the optimal substructure property.

As an example, let’s consider the problem of finding the shortest path between ‘Start’ and ‘Goal’ nodes in the graph below. The nodes are connected to other nodes via edges, and the distance between two connected nodes is marked with a number next to the edge.

The shortest path from the Start node to the Goal node is through nodes three and four.

This problem clearly has optimal substructure property. Since the shortest path from the Start node to the Goal node goes through Node Four, it clearly follows that this path is a combination of the shortest path from the Start node to Node Four and the shortest path from Node Four to the Goal node.

Many problems do not have optimal substructure property. For example, the problem of finding the longest path (with no cycles) between the Start node and the Goal node in the above graph doesn’t. Here’s how you can see this:

The longest path is: Start - Node Three - Node Two - Node One - Node Four - Goal. However, this does not imply that the longest path from the Start node to Node Two is Start - Node Three - Node Two. The longest path from the Start node to Node Two is actually Start - Node One - Node Four - Node Three - Node Two (and Start - Node One - Node Four - Goal - Node Two, which has equal length to the first one).

## Dynamic Programming Example: Calculating Fibonacci Numbers

One of the simplest examples of dynamic programming is the computation of Fibonacci numbers, which are numbers from the Fibonacci sequence. The first Fibonacci number is zero, the second is one, and each subsequent number is the sum of the previous two Fibonacci numbers. The first 10 Fibonacci numbers are zero, one, one, two, three, five, eight, 13, 21, and 34.

Let’s first start with the naive, recursive solution. Here’s a Python function to calculate the nth Fibonacci number (indexing starts from zero):
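
The listing is missing from this copy of the article; a minimal recursive version consistent with the description (zero-based indexing) would be:

```python
def fib(n):
    # Base cases: fib(0) = 0 and fib(1) = 1.
    if n <= 1:
        return n
    # Every later Fibonacci number is the sum of the previous two.
    return fib(n - 1) + fib(n - 2)
```

For example, fib(9) returns 34, the tenth Fibonacci number.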

From this example, it is easy to see that this problem satisfies the overlapping subproblems condition, since the function clearly has to calculate the same smaller Fibonacci numbers multiple times (when n is greater than three). The smallest Fibonacci numbers are computed most often when this function is called for a large n.

This problem also clearly has optimal substructure since there is only a single solution for the subproblems, which are used to compute the solution to the desired problem.

Due to the recursion, this function runs in exponential time.

Let’s next look at how this could be solved using dynamic programming. Let’s start with a top-down solution using memoization. Here’s a Python function that calculates the nth Fibonacci number that implements dynamic programming through memoization:
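
The listing is missing from this copy; one way to sketch it, using a dictionary as the lookup table:

```python
def fib_memo(n, lookup=None):
    # The lookup table stores solutions to already-solved subproblems.
    if lookup is None:
        lookup = {}
    if n <= 1:
        return n
    if n not in lookup:
        # Not in the table yet: compute it recursively and store it.
        lookup[n] = fib_memo(n - 1, lookup) + fib_memo(n - 2, lookup)
    return lookup[n]
```

Because each subproblem is stored on first computation, repeated requests for it become simple table lookups.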

This approach is quite similar to the recursive approach, since it starts from calculating the nth Fibonacci number and, in order to do so, goes on to calculate the (n-1)st and (n-2)nd Fibonacci numbers. The difference is the lookup table, where the smaller Fibonacci numbers are saved as they are calculated, so that they do not need to be calculated again and again.

This makes this function actually run in linear time instead of exponential time.

For the sake of an example, let’s also look at a Python function that solves the same problem in a bottom-up manner with dynamic programming using tabulation:
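
The listing is missing from this copy; a sketch of the tabulated version, filling the table from the base cases upward:

```python
def fib_tab(n):
    if n <= 1:
        return n
    # table[i] will hold the ith Fibonacci number.
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry depends only on the two entries just below it.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```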

In this solution, the nth Fibonacci number is calculated in a bottom-up manner, starting by calculating the first Fibonacci number, then the second, third and so on until reaching the nth Fibonacci number. This function also runs in linear time.


## How to Solve Problems Using Dynamic Programming

The first step to solving a problem using dynamic programming is to identify it as a dynamic programming problem. If you can validate that the problem has overlapping subproblems and that it satisfies the optimal substructure property, you can be sure that you can solve it with dynamic programming.

The second step is to decide on a state expression. This state is the set of parameters that uniquely identifies the different subproblems.

In the Fibonacci numbers example, the parameter identifying the state would just be the index of the Fibonacci number in question.

There can be more than one parameter identifying a state. You should always use as few parameters as possible, however.

The third — and probably hardest — step in solving problems using dynamic programming is formulating the relation between the different states.

In the Fibonacci number case, however, this is simple, since the nth Fibonacci number is the sum of the (n-1)st and (n-2)nd Fibonacci numbers. So F[n] = F[n-1] + F[n-2].

The fourth step is to implement memoization or tabulation for the states that you decided upon, using the relation you discovered between the states. This means simply saving the state (in other words, the solution to a certain subproblem) so it can be recovered from memory without recomputation when it is needed again. This should be fairly simple if you did steps one through three well.


## DoubleAS's blog

How to solve/approach a DP problem?

First of all, I'm sorry for asking this question; you may find it stupid, but I need help! :(

I've seen a good number of video tutorials on dynamic programming and read blogs. But still, when trying to solve any DP-tagged problem, I can't find any clue about how to approach it. Please feel free to give any advice. Thank you.

## How to Solve Any Dynamic Programming Problem

Consistently get the right solution with the FAST method.

Pramp Blog | Coding Interview & Job Search Resources for Developers

Dynamic programming. The last resort of any interviewer set on seeing you fail. Your interview has gone great up until this point, but now it grinds to a standstill.

Somewhere in the back of your mind you remember something about arrays and memoization, but the memory is hazy at best. You fumble, you trip, you drop the ball. Game over.

Of all the possible topics out there, dynamic programming seems to strike the most fear into people’s hearts during coding interviews.

But it doesn’t have to be that way.

Working with as many people as I do over at Byte by Byte, I started to see a pattern. People aren't really afraid of dynamic programming itself. They are scared because they don't know how to approach the problems.

With many interview questions out there, the solutions are fairly intuitive. You can pretty much figure them out just by thinking hard about them. For dynamic programming, and especially bottom-up solutions, however, this is not the case.

## What Is Dynamic Programming?

First and foremost, what is dynamic programming?

Dynamic programming is a method for solving a complex problem by breaking it down into smaller subproblems.

Once you’ve found the solutions to these subproblems, it’s that much easier to find the solution to the complex one.

By solving the smaller subproblems first, we can avoid having to solve the same problem multiple times, which saves time and ensures the overall solution is more efficient.

Dynamic programming is something that software engineers often do when designing efficient algorithms.

Because of this, dynamic programming questions are commonly asked in coding interviews to test a candidate’s ability to think through complex problems and design efficient solutions.

Not only that, but dynamic programming questions are a great way for an interviewer to evaluate a candidate's knowledge of algorithms and data structures, as well as their ability to apply these concepts to real-world problems.

To learn more about dynamic programming and other software engineering topics, be sure to check out Exponent’s Software Engineering Interview Course .

## The FAST Method

Dynamic programming solutions are generally unintuitive. When solving the Knapsack problem, why are you creating an array and filling it with seemingly arbitrary values? What do those values mean?

This is why I developed the FAST method for solving dynamic programming problems.

The FAST method is a repeatable process that you can follow every time to find an optimal solution to any dynamic programming problem.

Rather than relying on your intuition, you can simply follow the steps to take your brute force recursive solution and make it dynamic.

The FAST method comprises four steps: Find the First solution, Analyze the solution, identify the Subproblems, and Turn around the solution. I will explain how to use these steps using the example of computing the nth Fibonacci number.

## Find the first solution

The FAST method is built around the idea of taking a brute force solution and making it dynamic. Therefore, the first step is to find that brute force solution. In the case of finding the nth Fibonacci number, we can just write a simple recursive function:
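
The function referred to here does not appear in this copy of the post; a brute-force version matching the description would be:

```python
def fib(n):
    # Direct translation of the recurrence; no caching at all.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```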

This solution is really inefficient, but it doesn’t matter at this point, since we’re still going to optimize it.

## Analyze the solution

Next, we need to analyze our solution. If we look at the time complexity of our fib() function, we find that our solution will take O(2^n) time.

Each recursive call subtracts 1 from n and results in two child calls. Therefore, the depth of our recursion is n, and each level has twice as many calls as the one above it:

1 + 2 + 4 + … + 2^(n-1) = 2^0 + 2^1 + 2^2 + … + 2^(n-1) = O(2^n).

To optimize a problem using dynamic programming, it must have optimal substructure and overlapping subproblems. Does our problem have those? It definitely has optimal substructure, because we can get the right answer just by combining the results of the subproblems.

It also has overlapping subproblems. If you call fib(5), that will recursively call fib(4) and fib(3). fib(4) then recursively calls fib(3) and fib(2). It’s clear that fib(3) is being called multiple times during the execution of fib(5) and therefore we have at least one overlapping subproblem.

With these characteristics, we know we can use dynamic programming.

## Identify the subproblems

Unlike some problems, it’s pretty easy to identify and understand the subproblems for our Fibonacci numbers.

The subproblems are just the recursive calls of fib(n-1) and fib(n-2). We know that the result of fib(c) is just the cth Fibonacci number for any value of c, so we have our subproblems.

With our subproblems defined, let’s memoize the results. All this means is that we’ll save the result of each subproblem as we compute it and then check before computing any value whether it’s already computed. Doing this only requires minimal changes to our original solution.
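
A sketch of those minimal changes, with a cache dictionary as the only addition:

```python
def fib(n, cache=None):
    # cache maps n to the already-computed nth Fibonacci number.
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        # Check before computing: each value is computed only once.
        cache[n] = fib(n - 1, cache) + fib(n - 2, cache)
    return cache[n]
```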

Our new solution only has to compute each value once, so it runs in O(n) time. Because of the cache, though, it also uses O(n) space.

## Turn around the solution

The final step is to make our solution iterative (or bottom-up). All we have to do is flip it around.

With our previous (top-down) solution, we started with n and repeatedly broke it down into smaller and smaller values until we reached n == 1 and n == 0. Now, instead, we’ll start with the base cases and work our way up. We can compute the Fibonacci number for each successive value of n until we get to our result.
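
Flipped around, a bottom-up sketch looks like this:

```python
def fib(n):
    if n == 0:
        return 0
    # dp[i] holds the ith Fibonacci number; dp[0] and dp[1] are the base cases.
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```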

With this final solution, we again use O(n) time and O(n) space. (Note: You can actually do this in O(1) space, but that’s beyond the scope of this post.)

Dynamic programming doesn’t have to be hard or scary. By following the FAST method, you can consistently get the optimal solution to any dynamic programming problem as long as you can get a brute force solution.

Knowing the theory isn’t sufficient, however.

It is critical to practice applying this methodology to actual problems. If you're looking for more detailed examples of how to apply the FAST method, check out my free ebook, Dynamic Programming for Interviews.

Once you go through the examples in the book, once you’ve understood them and applied them in your practice, you’ll be able to go into any interview with confidence, knowing that not even dynamic programming will trip you up.

## Top Dynamic Programming Questions in Coding Interviews

By Exponent & ChatGPT

Looking for more examples of dynamic programming questions and solutions? Here are some commonly asked in software engineering interviews.

## What is the knapsack problem, and how can it be solved using dynamic programming?

The knapsack problem is a classic problem in computer science and is sometimes asked in dynamic programming interviews.

The problem asks you to find the subset of items that can be placed in a knapsack with a certain capacity such that the total weight is less than or equal to the capacity while maximizing the total value.

To solve the knapsack problem using dynamic programming, we can create a two-dimensional array that represents the items and the capacity of the knapsack.

Fill the array with the maximum value that can be obtained for each weight and capacity combination. The algorithm then considers each item and determines whether to include it in the final solution, based on the weight and value of the item and the capacity of the knapsack.

Repeat this process for each item, and the final solution is the subset of items that results in the maximum value in the array.

An example of this in Java would look something like this:
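
The Java listing referenced here is missing from this copy; as an assumption of what it contained, here is an equivalent sketch in Python, taking the same three parameters and returning the indices of the chosen items:

```python
def solve_knapsack(weight, value, capacity):
    n = len(weight)
    # dp[i][c]: maximum value using the first i items with capacity c.
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            # Option 1: skip item i-1.
            dp[i][c] = dp[i - 1][c]
            # Option 2: include item i-1 if it fits and improves the value.
            if weight[i - 1] <= c:
                with_item = dp[i - 1][c - weight[i - 1]] + value[i - 1]
                dp[i][c] = max(dp[i][c], with_item)
    # Work backwards through the array to recover which items were taken.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weight[i - 1]
    return list(reversed(chosen))
```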

This method takes in three parameters: weight, value, and capacity. The weight and value parameters are arrays of integers representing the weights and values of each item, respectively. The capacity parameter is an integer representing the maximum weight that the knapsack can hold.

The method returns a List of Integer objects representing the indices of the items that should be included in the final solution.

This code first creates a two-dimensional array to store the maximum value that can be obtained for each combination of items and knapsack capacity. It then loops through the items and fills in the array using a dynamic programming approach.

Finally, it constructs the list of items in the final solution by working backwards through the array and adding the items that were included in the solution to the list. The list of items is then returned as the result of the method.

## What is the longest common subsequence problem, and how can it be solved using dynamic programming?

The longest common subsequence (LCS) problem is a classic problem in computer science that involves finding the longest subsequence that is common to two or more strings of characters.

To solve the LCS problem using dynamic programming, we can create a two-dimensional array that represents the characters in the two strings. The array is filled in with the length of the LCS for each combination of characters in the two strings. The algorithm then compares each pair of characters in the strings and determines whether to include them in the LCS, based on the characters that come before them in the strings. This process is repeated for each pair of characters, and the final solution is the LCS that results in the maximum length in the array.

Here is an example of how this approach can be implemented in Java:
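
The Java listing (a solveLCS method) is missing from this copy; an equivalent sketch in Python, under the assumption that it followed the approach just described:

```python
def solve_lcs(a, b):
    m, n = len(a), len(b)
    # dp[i][j]: length of the LCS of a[:i] and b[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the LCS by one.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise keep the better of dropping one character.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Reconstruct the LCS by working backwards through the array.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```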

This implementation of the solveLCS method creates a two-dimensional array to store the lengths of the LCS for each combination of characters in the two strings. It then loops through the characters in the strings and fills in the array with the LCS lengths using a dynamic programming approach. Finally, it reconstructs the LCS itself by working backwards through the array and adding the characters that are included in the LCS to a StringBuilder object. The final LCS is returned as a string.

## How do you solve the traveling salesperson problem using dynamic programming?

The traveling salesperson problem (TSP) is a well-known NP-hard problem in computer science. It involves finding the shortest possible route that visits each city exactly once and returns to the starting city.

One approach to solving the TSP using dynamic programming is to first compute the distances between all pairs of cities. This can be done using a distance matrix, where the entry at row i and column j is the distance between city i and city j.

Next, we can create a two-dimensional array, with rows representing the cities and columns representing the number of cities visited so far. The entry at row i and column j in this array represents the minimum distance from city i to all other cities, given that j cities have already been visited.

To fill in the entries in this array, we can use the following recursive algorithm:

D[i][j] = min(D[i][j], D[k][j-1] + dist[k][i]), taking the minimum over each previously visited city k

Where D is the two-dimensional array, dist is the distance matrix, i is the current city, j is the number of cities visited so far, and k is a previously visited city. This formula states that the minimum distance from city i to all other cities, given that j cities have already been visited, is the minimum of the distance from city i to all other cities without visiting any additional cities (D[i][j]), or the distance from a previously visited city k to all other cities, plus the distance from city k to city i (D[k][j-1] + dist[k][i]).

Once the array is filled in, the minimum distance for the TSP is the entry at row 0 and column n, where n is the total number of cities. This represents the minimum distance from the starting city (city 0) to all other cities, given that all cities have been visited.

To reconstruct the actual path, we can trace back through the array starting from this entry and using the same recursive structure. At each step, we choose the previous city k that resulted in the minimum distance from city i to all other cities, given that j cities have been visited. We then move to city k and repeat the process until we reach the starting city. The resulting sequence of cities is the shortest possible route for the TSP.

## Practice Makes Perfect

Dynamic programming is hard; there's no way around it. The best way to prepare and become confident in your ability to solve dynamic programming problems in interviews is through plenty of practice.

It's the combination of knowledge, technique, and practice that'll enable you to land your dream engineering job.

To see you off, check out this software engineering interview example from our friends at Exponent:

Sam is the founder and CEO of Byte by Byte, a site helping software engineers study for their interviews. After seeing so many people struggling with dynamic programming, he decided to do something about it. He is the author of Dynamic Programming for Interviews.

Thanks to Sam Gavis-Hughson

## Written by Pramp

The Pramp Team

