Bin Covering Problem
Bin Covering Problem
In the bin covering problem, items of different sizes must be packed into a finite number of bins or containers, each of which must contain at least a certain given total size, in a way that maximizes the number of bins used. This problem is a dual of the bin packing problem: in bin covering, the bin sizes are bounded from below and the goal is to maximize their number; in bin packing, the bin sizes are bounded from above and the goal is to minimize their number. The problem is NP-hard, but there are various efficient approximation algorithms:
* Algorithms covering at least 1/2, 2/3 or 3/4 of the optimum bin count asymptotically, running in time O(n), O(n log n) and O(n log² n) respectively (a sketch of the simplest of these appears after this excerpt).
* An asymptotic PTAS, algorithms with bounded worst-case behavior whose expected behavior is asymptotically optimal for some discrete distributions, and a learning algorithm with asymptotically optimal expected behavior for all discrete distributions.
* An asymptotic FPTAS. The b ...
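A minimal sketch of the simple greedy idea behind the 1/2-asymptotic guarantee mentioned above, assuming the usual formulation in which items are taken in arbitrary order and one bin is filled at a time until it reaches the threshold; function and variable names here are illustrative, not from a specific library.

def greedy_bin_cover(sizes, threshold):
    """Fill one bin at a time until its total reaches the threshold."""
    bins = []                          # completed (covered) bins
    current, total = [], 0
    for size in sizes:
        current.append(size)
        total += size
        if total >= threshold:         # bin is covered; start a new one
            bins.append(current)
            current, total = [], 0
    return bins                        # leftover items in `current` cover no bin

# Example: threshold 10, six items -> two covered bins
print(greedy_bin_cover([6, 5, 4, 4, 3, 2], 10))   # [[6, 5], [4, 4, 3]]

This single pass runs in O(n) time; the O(n log n) and O(n log² n) algorithms quoted above start by sorting the items and combine large and small items more carefully.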


Bin Packing Problem
The bin packing problem is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media and technology mapping in FPGA semiconductor chip design. Computationally, the problem is NP-hard, and the corresponding decision problem - deciding if items can fit into a specified number of bins - is NP-complete. Despite its worst-case hardness, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many approximation algorithms exist. For example, the first fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It requires Θ(n log n) time, where n ...
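A minimal sketch of the first-fit heuristic described above; the straightforward scan below is O(n²), while the Θ(n log n) bound quoted in the text relies on a balanced-tree index over the bins' remaining free space. Names are illustrative.

def first_fit(sizes, capacity):
    """Place each item into the first open bin where it fits,
    opening a new bin when none can hold it."""
    bins = []                     # remaining free capacity per bin
    assignment = []               # bin index chosen for each item
    for size in sizes:
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] -= size
                assignment.append(i)
                break
        else:                     # no existing bin fits: open a new one
            bins.append(capacity - size)
            assignment.append(len(bins) - 1)
    return len(bins), assignment

# Example: capacity 10 -> uses 3 bins
print(first_fit([7, 5, 4, 3, 2, 2], 10))   # (3, [0, 1, 1, 0, 2, 2])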


NP-hardness
In computational complexity theory, NP-hardness (non-deterministic polynomial-time hardness) is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem. A more precise specification is: a problem H is NP-hard when every problem L in NP can be reduced in polynomial time to H; that is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time. As a consequence, finding a polynomial-time algorithm to solve any NP-hard problem would give polynomial-time algorithms for all the problems in NP. As it is suspected that P ≠ NP, it is unlikely that such an algorithm exists. It is suspected that there are no polynomial-time algorithms for NP-hard problems, but that has not been proven. Moreover, the class P, in which all problems can be solved in polynomial time, is contained in the NP class. Defi ...
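An illustrative brute-force decider for the subset sum problem named above as a simple NP-hard example; it inspects all 2^n subsets, so its running time grows exponentially with the number of items. This is a sketch of the problem statement, not an efficient algorithm.

from itertools import combinations

def subset_sum(numbers, target):
    """Decide whether some subset of `numbers` sums to `target`."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False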


Approximation Algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are ...
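For a minimization problem, the multiplicative guarantee described above is usually stated as follows (a standard formulation, not quoted from the excerpt): an algorithm ALG has approximation ratio ρ ≥ 1 if

ALG(I) ≤ ρ · OPT(I) for every instance I,

where OPT(I) is the value of an optimal solution; for a maximization problem the condition becomes OPT(I) ≤ ρ · ALG(I).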


Polynomial-time Approximation Scheme
In computer science (particularly algorithmics), a polynomial-time approximation scheme (PTAS) is a type of approximation algorithm for optimization problems (most often, NP-hard optimization problems). A PTAS is an algorithm which takes an instance of an optimization problem and a parameter ε > 0 and produces a solution that is within a factor 1 + ε of being optimal (or 1 − ε for maximization problems). For example, for the Euclidean traveling salesman problem, a PTAS would produce a tour with length at most (1 + ε)L, with L being the length of the shortest tour. The running time of a PTAS is required to be polynomial in the problem size for every fixed ε, but can be different for different ε. Thus an algorithm running in time O(n^(1/ε)) or even O(n^(exp(1/ε))) counts as a PTAS.
Variants
Deterministic
A practical problem with PTAS algorithms is that the exponent of the polynomial could increase dramatically as ε shrinks, for example if the runtime is O(n^((1/ε)!)). One way of addressing this is to define the efficient polynomial-time a ...
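As a concrete illustration (the numbers here are chosen for the example, not taken from the excerpt): with ε = 0.1, a runtime of O(n^(1/ε)) is O(n^10), which is polynomial for that fixed ε even though the degree grows as ε shrinks; a runtime of O(n^((1/ε)!)) already reaches O(n^3628800) at ε = 0.1, since 10! = 3,628,800, which is why the efficient variants mentioned above restrict how ε may enter the exponent.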




Fully Polynomial-time Approximation Scheme
A fully polynomial-time approximation scheme (FPTAS) is an algorithm for finding approximate solutions to function problems, especially optimization problems. An FPTAS takes as input an instance of the problem and a parameter ε > 0. It returns as output a value that is at least (1 − ε) times the correct value and at most (1 + ε) times the correct value. In the context of optimization problems, the correct value is understood to be the value of the optimal solution, and it is often implied that an FPTAS should produce a valid solution (and not just the value of the solution). Returning a value and finding a solution with that value are equivalent assuming that the problem possesses self-reducibility. Importantly, the run-time of an FPTAS is polynomial in the problem size and in 1/ε. This is in contrast to a general polynomial-time approximation scheme (PTAS). The run-time of a general PTAS is polynomial in the problem size for each specific ε, but might be exponent ...
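A small helper that spells out the FPTAS guarantee quoted above as a check; it is illustrative only and assumes a nonnegative correct value.

def within_fptas_bounds(returned, correct, eps):
    """True when the returned value lies between (1 - eps) and (1 + eps)
    times the correct (optimal) value."""
    return (1 - eps) * correct <= returned <= (1 + eps) * correct

print(within_fptas_bounds(returned=95, correct=100, eps=0.1))   # True
print(within_fptas_bounds(returned=88, correct=100, eps=0.1))   # False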


Configuration Linear Program
The configuration linear program (configuration-LP) is a particular linear program used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem (Gilmore, P. C. and R. E. Gomory (1961). "A linear programming approach to the cutting-stock problem". Operations Research 9: 849-859). Later, it has been applied to bin packing and job scheduling. In the configuration-LP, there is a variable for each possible configuration - each possible multiset of items that can fit in a single bin (these configurations are also known as patterns). Usually, the number of configurations is exponential in the problem size, but in some cases it is possible to attain approximate solutions using only a polynomial number of configurations.
In bin packing
The integral LP
In the bin packing problem, there are n items with different sizes. The goal is to pack the items into a minimum number of bins, where each bin can contain items of total size at most B. A ...
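A minimal sketch of the "configuration" notion described above: for a small instance it enumerates every non-empty multiset of items (represented as copy counts per item size) that fits in one bin. In realistic instances the number of configurations is exponential, so solvers generate them lazily via column generation rather than listing them all; the names below are illustrative.

def configurations(item_sizes, capacity):
    """List every non-empty configuration: a vector of copy counts, one per
    item size, whose total size fits in a single bin of the given capacity."""
    def extend(index, remaining, current):
        if index == len(item_sizes):
            yield tuple(current)
            return
        size = item_sizes[index]
        for copies in range(remaining // size + 1):
            yield from extend(index + 1, remaining - copies * size,
                              current + [copies])
    return [c for c in extend(0, capacity, []) if any(c)]

# Example: item sizes 3 and 5, bin capacity 7
print(configurations([3, 5], 7))   # [(0, 1), (1, 0), (2, 0)]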


Matrix Inversion
In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that
AB = BA = I_n,
where I_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A⁻¹. Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A. A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any finite region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices (m-by-n matrices for which m ≠ n) do not hav ...
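A short numerical illustration of the definition above (example matrix chosen here, not from the excerpt): B is the inverse of A exactly when both products A·B and B·A equal the identity, and A is invertible because its determinant is nonzero.

import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.linalg.inv(A)                      # numerical inverse of A

identity = np.eye(2)
print(np.allclose(A @ B, identity))       # True
print(np.allclose(B @ A, identity))       # True
print(np.isclose(np.linalg.det(A), 0.0))  # False: determinant is 10, so A is invertible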


Fair Item Allocation
Fair item allocation is a kind of fair division problem in which the items to divide are discrete rather than continuous. The items have to be divided among several partners who value them differently, and each item has to be given as a whole to a single person. This situation arises in various real-life scenarios:
* Several heirs want to divide the inherited property, which contains e.g. a house, a car, a piano and several paintings.
* Several lecturers want to divide the courses given in their faculty. Each lecturer can teach one or more whole courses.
* White elephant gift exchange parties.
The indivisibility of the items implies that a fair division may not be possible. As an extreme example, if there is only a single item (e.g. a house), it must be given to a single partner, but this is not fair to the other partners. This is in contrast to the fair cake-cutting problem, where the dividend is divisible and a fair division always exists. In some cases, the indivisibility pr ...


Optimization Algorithms And Methods
Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems of sorts arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a define ...
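A toy sketch of the definition above, in which a best element is selected from an allowed set by evaluating the objective at each candidate; the function, the candidate set and all names are illustrative, not from the excerpt.

def minimize_over_grid(f, candidates):
    """Return the candidate with the smallest objective value, together
    with that value."""
    best = min(candidates, key=f)
    return best, f(best)

# Example: minimize f(x) = (x - 3)^2 over a small discrete allowed set
allowed = [0.0, 1.0, 2.0, 2.5, 4.0, 5.0]
print(minimize_over_grid(lambda x: (x - 3.0) ** 2, allowed))   # (2.5, 0.25)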