With the help of dynamic programming (commonly abbreviated as DP), you can tackle some complicated problems in life. In fact, using dynamic programming to solve many kinds of problems will not only sharpen your skills but also save you time. So what is dynamic programming? What is it about? Before answering these questions, let’s look at the conversation from AlgoMonster below to get a basic idea.
A *writes down “2+2+2+2=”*
A: “What’s that equal to?”
B: *counting* “It is Eight!”
A *writes down one more “2+” on the left*
A: “What about now?”
B: *quickly* “Ten!”
A: “How did you figure out the right answer that fast?”
B: “You just added one more 2.”
A: “You remembered the answer you calculated earlier, so you didn’t have to work it out all over again!”
Those who can’t remember what they’ve done before have to start over to solve the same problem again. So dynamic programming is basically another way of saying “remembering solutions so that similar problems become easier and faster to solve”. You save the results for future use, and save energy and time later. The conversation above might be a good way to explain dynamic programming to a six-year-old.
What is dynamic programming?
Dynamic programming is a problem-solving method in computer science. It also works as a mathematical optimization tool. Richard Bellman created the approach in the 1950s, and it has since been used in a variety of fields ranging from economics to aerospace engineering.
In both contexts, it means breaking a large problem down into smaller sub-problems in a recursive manner. While not every decision problem can be decomposed this way, decisions that span multiple points in time often can be.
Dynamic programming is a type of mathematical optimization that involves breaking down a decision into a series of decision steps over time. This is accomplished by creating a series of value functions.
In economics, the goal is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey’s problem, this function relates consumption to levels of utility. The trade-off the planner faces between current consumption and future consumption (via investment in a capital stock that is employed in production) is the inter-temporal choice.
In simpler words: dynamic programming solves a complicated problem by dividing it into a number of smaller problems. DP solves simpler problems one by one and remembers the answers to each. Hence, DP gets the whole problem solved at last.
How does the dynamic programming approach operate?
The following is a general approach to DP problems:
- Create a working naive recursive or iterative version.
- Note that the function is performing redundant tasks.
- Find a strategy to avoid repeating that extra work, which is commonly done through memoization.
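The three steps above can be sketched with the classic Fibonacci function. This is a minimal illustration, not code from the article: first a working but naive recursive version, then the same function with its redundant work removed by memoization.

```python
from functools import lru_cache

# Step 1: a working but naive recursive version.
def fib_naive(n):
    if n < 2:
        return n
    # Step 2: notice the redundant work -- fib_naive(n - 2) is also
    # recomputed inside the call to fib_naive(n - 1).
    return fib_naive(n - 1) + fib_naive(n - 2)

# Step 3: avoid the redundant work by memoizing results.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10))  # 55
print(fib_memo(50))   # 12586269025 -- far too slow for the naive version
```

The memoized version runs in linear time because each value of `n` is computed only once; the naive version takes exponential time.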
To make it more concrete and easier to understand, here are the detailed steps of how dynamic programming works:
- DP simplifies a major problem by dividing it into many sub-problems.
- Dynamic programming determines the best solution for each of these sub-problems.
- DP keeps track of the outcomes of sub-problems, which means it can memorize the previous results for later use.
- Dynamic programming can use the previous result so that you don’t have to calculate the same sub-problem again.
- DP solves the whole problem through the results of each sub-problem.
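The five steps above can be seen in a bottom-up sketch of the classic coin-change problem: finding the fewest coins that make up a given amount. The coin values and amount here are illustrative assumptions, not taken from the article.

```python
# dp[a] stores the best answer to the sub-problem "make amount a",
# so each sub-problem is solved once and its result reused later.
def min_coins(coins, amount):
    INF = float("inf")
    dp = [0] + [INF] * amount          # dp[0] = 0: zero coins make 0
    for a in range(1, amount + 1):     # solve sub-problems smallest first
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1  # build on a stored sub-result
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 2, 5], 11))  # 3  (5 + 5 + 1)
print(min_coins([2], 3))         # -1 (amount cannot be made)
```

Notice how the whole problem (`dp[amount]`) falls out automatically once every smaller sub-problem has been solved and stored.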
Is it a problem suitable for DP?
You may find dynamic programming useful, but not all problems can be solved with it. There are conditions a problem must meet before you can apply DP. How do you spot whether a problem is well suited to a dynamic programming solution?
Dynamic programming is a problem-solving technique for overlapping subproblems. A dynamic programming approach solves each subproblem just once and saves the result in a table or array, saving the time and effort of recalculating the answer each time the subproblem arises.
So to know whether a problem has a dynamic programming solution, check whether it has multiple overlapping subproblems and an optimal substructure. “Optimal substructure” means the optimization problem can be solved by combining the best solutions to all of its subproblems.
Two features of DP
- Multiple overlapping subproblems:
We know there are overlapping subproblems when a recursive algorithm visits the same subproblems many times, which means it performs the same computations again and again. If the subproblems overlap, you can reuse each solution multiple times. That is how you benefit from using DP.
Assume you have solved a subproblem and saved the result. You will save a lot of time if the same subproblem occurs often. What good is storing the result if the same subproblem never appears again? None, and that is exactly the point: we only memoize results for overlapping subproblems.
Example: to compute F(n), we must compute F(n-1) and F(n-2); while computing F(n+1), we must compute F(n) and F(n-1). As a result, F(n-1) is a subproblem shared by F(n) and F(n+1).
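You can make this overlap visible by counting how often each subproblem is visited during a naive recursive Fibonacci computation. This counting instrumentation is an illustration added here, not part of the original example.

```python
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1  # record every visit to the subproblem F(n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
# Without memoization the same subproblems are visited repeatedly:
print(calls[5])  # F(5) is computed 8 times while evaluating F(10)
```

The visit counts themselves grow like Fibonacci numbers, which is exactly why the naive recursion blows up exponentially.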
- Optimal substructures:
If a problem can be solved recursively, it is likely to have an optimal substructure. Optimal substructure means you can solve the whole problem by optimally solving each of its subproblems. If an optimal solution can be constructed from optimal solutions to its subproblems, the problem has an optimal substructure.
Example: if the shortest path from a to c passes through b, then it consists of the shortest path from a to b followed by the shortest path from b to c.
Does that sound complicated? It shouldn’t. If the solutions to all possible subproblems are optimal, you are sure to find the optimal solution for the problem itself. Every DP problem must have these two properties; otherwise, using DP only adds extra overhead to your work.
DP is a practical technique for solving complex problems efficiently, because it saves effort and time by avoiding repeated work. Problems with overlapping subproblems and optimal substructures are well suited to DP solutions. Though you cannot apply a DP solution to every problem, dynamic programming is used in many fields, including inventory management, resource allocation, shortest routes, and many other complex problems. It also appears in our daily lives, saving us time and energy.