Continuous Knapsack Problem

In theoretical computer science, the continuous knapsack problem (also known as the fractional knapsack problem) is an algorithmic problem in combinatorial optimization in which the goal is to fill a container (the "knapsack") with fractional amounts of different materials chosen to maximize the value of the selected materials. It resembles the classic knapsack problem, in which the items to be placed in the container are indivisible; however, the continuous knapsack problem may be solved in polynomial time, whereas the classic knapsack problem is NP-hard. It is a classic example of how a seemingly small change in the formulation of a problem can have a large impact on its computational complexity.


Problem definition

An instance of either the continuous or classic knapsack problem may be specified by the numerical capacity ''W'' of the knapsack, together with a collection of materials, each of which has two numbers associated with it: the weight ''wi'' of material that is available to be selected and the total value ''vi'' of that material. The goal is to choose an amount ''xi'' of each material, subject to the capacity constraint \sum_i x_i \le W and maximizing the total benefit \sum_i x_i v_i / w_i. In the classic knapsack problem, each of the amounts ''xi'' must be either zero or ''wi''; the continuous knapsack problem differs by allowing ''xi'' to range continuously from zero to ''wi''. Some formulations of this problem rescale the variables ''xi'' to be in the range from 0 to 1. In this case the capacity constraint becomes \sum_i x_i w_i \leq W, and the goal is to maximize the total benefit \sum_i x_i v_i.
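As a small illustration of the two formulations, the following sketch (with made-up numbers; the variable names are only for this example) checks that the unscaled objective and the rescaled objective agree on the same feasible choice:

```python
# Hypothetical tiny instance: available weights w_i and total values v_i.
weights = [5.0, 4.0]   # w_i, weight of each material available
values  = [10.0, 4.0]  # v_i, value of the full amount w_i of each material
W = 7.0                # knapsack capacity

# Unscaled formulation: choose 0 <= x_i <= w_i with sum(x_i) <= W,
# maximizing sum(x_i * v_i / w_i).
x = [5.0, 2.0]  # a feasible choice: 5 + 2 = 7 <= W
benefit = sum(xi * vi / wi for xi, vi, wi in zip(x, values, weights))

# Rescaled formulation: y_i = x_i / w_i in [0, 1], with sum(y_i * w_i) <= W,
# maximizing sum(y_i * v_i).
y = [xi / wi for xi, wi in zip(x, weights)]
rescaled_benefit = sum(yi * vi for yi, vi in zip(y, values))

# Both objectives evaluate the same choice to the same benefit.
assert abs(benefit - rescaled_benefit) < 1e-9
```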


Solution technique

The continuous knapsack problem may be solved by a greedy algorithm, first published in 1957 by George Dantzig, that considers the materials in sorted order by their values per unit weight. For each material, the amount ''xi'' is chosen to be as large as possible:

*If the sum of the choices made so far equals the capacity ''W'', then the algorithm sets ''xi'' = 0.
*If the difference ''d'' between the sum of the choices made so far and ''W'' is smaller than ''wi'', then the algorithm sets ''xi'' = ''d''.
*In the remaining case, the algorithm chooses ''xi'' = ''wi''.

Because of the need to sort the materials, this algorithm takes time ''O''(''n'' log ''n'') on inputs with ''n'' materials. However, by adapting an algorithm for finding weighted medians, it is possible to solve the problem in time ''O''(''n'').
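The sorting-based greedy rule above can be sketched as follows (a minimal illustration, not Dantzig's original presentation; the function name and input format are chosen for this example, and all weights are assumed positive):

```python
def fractional_knapsack(capacity, materials):
    """Greedy solution to the continuous knapsack problem.

    materials: list of (w_i, v_i) pairs, where w_i > 0 is the weight of
    material i available and v_i is the total value of that full amount.
    Returns (total_benefit, amounts), with 0 <= amounts[i] <= w_i.
    """
    # Consider materials in decreasing order of value per unit weight.
    order = sorted(range(len(materials)),
                   key=lambda i: materials[i][1] / materials[i][0],
                   reverse=True)
    amounts = [0.0] * len(materials)
    remaining = capacity
    benefit = 0.0
    for i in order:
        w, v = materials[i]
        if remaining <= 0:
            break                   # knapsack full: x_i = 0 for the rest
        take = min(w, remaining)    # all of the material, or what still fits
        amounts[i] = take
        benefit += v * take / w     # value accrues proportionally to amount
        remaining -= take
    return benefit, amounts
```

For example, with capacity 10 and materials [(5, 10), (4, 4), (6, 3)], the value-per-weight ratios are 2, 1, and 0.5; the algorithm takes all of the first two materials and 1 unit of the third, for a total benefit of 14.5. The `sorted` call dominates the running time, giving the ''O''(''n'' log ''n'') bound; the linear-time variant mentioned above would replace it with weighted-median selection.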

