Knapsack Problem In Analysis And Design Of Algorithms

Introduction

 
Hi guys! In this article, I will explain the knapsack problem as covered in the Analysis and Design of Algorithms. This article should be really helpful for students.
 
A thief robbing a store can carry a maximum weight of 'W' in his knapsack. There are 'n' items available in the store; the weight of the ith item is Wi, and the worth of that item is Pi dollars. Which items should the thief take, and how?
 
So let’s see how to solve this thief problem.
 

What is the Knapsack Problem?

 
A thief went to a store to steal some items. There are multiple items available with different weights & profits.
 
Let’s suppose there are ‘n’ items whose weights are W1, W2, …, Wn respectively, and whose profits are P1, P2, …, Pn respectively.
 
The thief wants to steal in such a way that his overall profit is ‘Maximum’ and the ‘Capacity constraint’ of the knapsack is not broken.
 
The Knapsack problem may be of 2 types:
  • [A] 0/1 Knapsack problem
  • [B] Fractional Knapsack problem

0/1 Knapsack problem

 
In this problem, either a whole item is selected (1) or it is not selected at all (0).
 
Here, the thief can’t carry a fraction of the item.
 
In the LPP (Linear Programming Problem) form, it can be described as:
 
Maximize P1x1 + P2x2 + … + Pnxn
Subject to W1x1 + W2x2 + … + Wnxn ≤ W, where each xi = 0 or 1
 
 

Fractional Knapsack problem

 
In this problem, a whole item can be selected (1), a whole item can be left out (0), or a fraction of an item can be selected (between 0 and 1).
 
In the LPP (Linear Programming Problem) form, it can be described as:
 
Maximize P1x1 + P2x2 + … + Pnxn
Subject to W1x1 + W2x2 + … + Wnxn ≤ W, where 0 ≤ xi ≤ 1
 
So this knapsack problem can be solved by using the following methods:
  1. Greedy method
  2. Dynamic Programming method
  3. Backtracking method
  4. Branch & Bound method

Greedy Method

 
A greedy algorithm is an algorithm that follows the problem-solving metaheuristic of making the locally optimal choice at each stage with the hope of finding the global optimum.
 
The greedy method is a powerful technique used in the design of algorithms. Almost all problems that come under this category have 'n' inputs. A subset of the given set of inputs that satisfies some given constraints is to be obtained. Such a subset is called a feasible solution. Now the problem is to find a feasible solution that maximizes or minimizes a given objective function. Such a feasible solution is called an optimal solution. The greedy method works in stages.
  • Feasible - It should satisfy the problem's constraints.
  • Locally Optimal - Among all feasible solutions, the best choice is made.
  • Irrevocable - Once a particular choice is made, it should not be changed in subsequent steps.
The greedy method can be characterized as being 'short-sighted' and 'non-recoverable'. It is ideal only for problems that have optimal substructure. One way to construct a solution for such optimization problems is the greedy method, in which we construct the solution in stages. At each stage, we make a decision that appears to be the best according to certain greedy criteria, and this decision will not be changed in later stages. Hence, each decision should preserve feasibility. Although the greedy method doesn't always lead to an optimal solution (e.g., in the traveling salesman problem), it does in some other cases.
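 
To make this concrete, here is a minimal Python sketch of the greedy method applied to the fractional knapsack problem, the case where the greedy choice (highest profit-to-weight ratio first) is provably optimal. The function name and the item weights, profits, and capacity are made-up example values for illustration, not data from this article.
 
```python
# A minimal greedy sketch for the fractional knapsack problem.
# Greedy criterion: always take the item with the highest profit/weight
# ratio, taking a fraction of an item when it no longer fits whole.

def fractional_knapsack(weights, profits, capacity):
    # Visit items in decreasing profit-to-weight ratio.
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total_profit, remaining = 0.0, capacity
    for i in order:
        if remaining == 0:
            break
        take = min(weights[i], remaining)   # whole item, or the fitting fraction
        total_profit += profits[i] * take / weights[i]
        remaining -= take
    return total_profit

# Hypothetical data: 3 items, knapsack capacity 50.
print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # 240.0
```
 
Note that the same greedy criterion applied to the 0/1 problem can fail: on this data it would take only the first two items for a profit of 160, while the 0/1 optimum is 220.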
 

Dynamic Programming Method

 
Dynamic programming is a problem-solving technique that, like divide and conquer, solves problems by dividing them into sub-problems. Dynamic programming is used when the sub-problems are not independent, e.g., when they share common sub-problems.
  • It is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of overlapping subproblems and optimal substructure. When applicable, the method takes much less time than naive methods.
  • It is a technique for computing recurrence relations efficiently by storing partial results. It is typically applied to optimization problems, in which there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal value. Such a solution is called an optimal solution (rather than the optimal solution), since there may be several solutions that achieve the optimal value.
  • Principle of Optimality: To use dynamic programming, the problem must observe the principle of optimality: whatever the initial state and decision are, the remaining decisions must be optimal with regard to the state resulting from the first decision. Combinatorial problems may have this property but may still use too much memory/time to be solved efficiently.

Approaches in Dynamic Programming

 
There are two approaches to solving a dynamic programming problem.
 
Bottom-up Approach
 
The bottom-up approach simply means storing the results of certain calculations, which are then re-used later because the same calculation recurs as a sub-problem in a larger calculation.
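 
For example, here is a minimal bottom-up sketch of the 0/1 knapsack problem, reusing the same made-up item data as the greedy example: dp[w] stores the best profit achievable with capacity w, and every entry is computed from previously stored entries rather than by recursion.
 
```python
# Bottom-up 0/1 knapsack: fill a table of sub-problem results iteratively.

def knapsack_bottom_up(weights, profits, capacity):
    # dp[w] = best profit achievable with capacity w using items seen so far.
    dp = [0] * (capacity + 1)
    for wt, p in zip(weights, profits):
        # Go downwards so each item is counted at most once (0/1 constraint).
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + p)
    return dp[capacity]

print(knapsack_bottom_up([10, 20, 30], [60, 100, 120], 50))  # 220
```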
 
Top-down Approach
 
The top-down approach involves formulating a complex calculation as a recursive series of simpler calculations.
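 
The same 0/1 knapsack recurrence can be written top-down, as in this sketch: a recursive function decides "take or skip" for each item, and functools.lru_cache stores each sub-result so no sub-problem is recomputed.
 
```python
# Top-down 0/1 knapsack: a recursive formulation with memoized sub-results.
from functools import lru_cache

def knapsack_top_down(weights, profits, capacity):
    @lru_cache(maxsize=None)  # cache results keyed on (i, remaining)
    def best(i, remaining):
        if i == len(weights):              # no items left to decide on
            return 0
        skip = best(i + 1, remaining)      # decision 0: leave item i
        if weights[i] <= remaining:        # decision 1: take item i if it fits
            return max(skip, profits[i] + best(i + 1, remaining - weights[i]))
        return skip

    return best(0, capacity)

print(knapsack_top_down([10, 20, 30], [60, 100, 120], 50))  # 220
```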
 
Backtracking Method
 
Backtracking can be described as an organized exhaustive search that often avoids searching all possibilities. The technique is generally suitable for solving problems where a potentially large but finite number of solutions have to be inspected.
 
Backtracking can be applied only for problems that admit the concept of a partial candidate solution and a relatively quick test of whether it can possibly be completed to a valid solution.
 
Backtracking is an important tool for solving constraint satisfaction problems, such as crosswords, verbal arithmetic, and many other puzzles. It is often the most convenient (if not the most efficient) technique for parsing, for the knapsack problem, and for other combinatorial optimization problems. It is also the basis of the so-called logic programming languages such as Icon, Planner, and Prolog.
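 
As a sketch of the idea on the knapsack problem itself (same made-up data as before): each level of the recursion decides whether to take one item, the partial candidate is the set of decisions made so far, and the quick test is simply whether the accumulated weight still fits. A branch that breaks the capacity constraint is abandoned, i.e., we backtrack without exploring its completions.
 
```python
# Backtracking over 0/1 knapsack decisions: take or skip each item,
# pruning any branch whose weight already exceeds the capacity.

def knapsack_backtracking(weights, profits, capacity):
    best = 0

    def explore(i, weight, profit):
        nonlocal best
        if weight > capacity:          # constraint violated: backtrack
            return
        best = max(best, profit)       # record the best feasible candidate
        if i == len(weights):          # all decisions made
            return
        explore(i + 1, weight + weights[i], profit + profits[i])  # take item i
        explore(i + 1, weight, profit)                            # skip item i

    explore(0, 0, 0)
    return best

print(knapsack_backtracking([10, 20, 30], [60, 100, 120], 50))  # 220
```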
 

Backtracking Problems

 
In backtracking, problems can be divided into three categories.
 
Decision Problem
 
In the decision problem, we ask “are there any feasible solutions?”
 
Optimization Problem
 
In the optimization problem, we ask “which feasible solution is the best?”
 
Enumeration Problem
 
In the enumeration problem, we find all the possible feasible solutions.
 
Branch and Bound Method
 
Branch and bound is a state-space search method in which all the children of a node are generated before expanding any of its children.
 

Branch and Bound Terminology

 
Live Node
 
A node that has been generated and not all of whose children have yet been generated is called a live node. The live node whose children are currently being generated is called the E-node.
 
E-node
 
A live node whose children are currently being explored. In other words, an E-node is a node currently being expanded.
 
Dead node
 
A dead node is a generated node that is not to be expanded further, or all of whose children have been generated.
 
Bounding functions
 
Bounding functions are used to kill live nodes without generating all their children. This is done carefully, so that at the conclusion of the process at least one answer node is always generated, or all answer nodes are generated if the problem requires finding all solutions.
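 
To tie the terminology together, here is a minimal sketch of branch and bound on the 0/1 knapsack problem, using the same made-up data as the earlier sketches. The bounding function chosen here is an assumption of this sketch (the classic fractional-knapsack upper bound): a live node whose bound cannot beat the best profit found so far is killed, becoming a dead node without its children ever being generated.
 
```python
# Best-first branch and bound for 0/1 knapsack. A node is (level, weight,
# profit); its bound is the profit achievable if the remaining capacity
# could be filled fractionally with the remaining items.
import heapq

def knapsack_branch_bound(weights, profits, capacity):
    # Sort items by profit/weight ratio so the bound is easy to compute.
    items = sorted(zip(weights, profits),
                   key=lambda it: it[1] / it[0], reverse=True)

    def bound(i, weight, profit):
        """Upper bound: fill the remaining capacity fractionally."""
        remaining = capacity - weight
        for wt, p in items[i:]:
            if wt <= remaining:
                remaining -= wt
                profit += p
            else:
                return profit + p * remaining / wt
        return profit

    best = 0
    # Max-heap of live nodes keyed by bound (negated for heapq's min-heap).
    heap = [(-bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, weight, profit)
    while heap:
        neg_b, i, weight, profit = heapq.heappop(heap)
        if -neg_b <= best or i == len(items):   # dead node: kill the branch
            continue
        wt, p = items[i]
        if weight + wt <= capacity:             # child 1: take item i
            best = max(best, profit + p)
            heapq.heappush(heap, (-bound(i + 1, weight + wt, profit + p),
                                  i + 1, weight + wt, profit + p))
        # Child 2: skip item i.
        heapq.heappush(heap, (-bound(i + 1, weight, profit),
                              i + 1, weight, profit))
    return best

print(knapsack_branch_bound([10, 20, 30], [60, 100, 120], 50))  # 220
```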
 

Conclusion

 
In this article, we have learned about the knapsack problem, its types, its formulas, and the methods to solve it. The knapsack problem asks us to fill the knapsack in such a way that its capacity constraint is not broken and we receive the maximum profit. In the next article, we will see the first approach in detail.

