The bin packing problem [1] [2] [3] [4] is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media, splitting a network prefix into multiple subnets, [5] and technology mapping in FPGA semiconductor chip design.
Computationally, the problem is NP-hard, and the corresponding decision problem, deciding if items can fit into a specified number of bins, is NP-complete. Despite its worst-case hardness, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many approximation algorithms exist. For example, the first fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It requires Θ(n log n) time, where n is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order (sometimes known as the first-fit decreasing algorithm), although this still does not guarantee an optimal solution and for longer lists may increase the running time of the algorithm. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution. [6]
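The first-fit and first-fit-decreasing heuristics described above can be sketched in a few lines of Python. This is a minimal illustration, not a tuned implementation: scanning every open bin with `sum()` makes it O(n²), whereas the Θ(n log n) bound requires a suitable search-tree data structure over bin loads.

```python
def first_fit(items, capacity):
    """Place each item into the first open bin that can hold it;
    open a new bin only when the item fits in none of them."""
    bins = []  # each bin is a list of the item sizes placed in it
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no open bin could hold the item
            bins.append([item])
    return bins

def first_fit_decreasing(items, capacity):
    """Sort the items in decreasing order, then run first-fit."""
    return first_fit(sorted(items, reverse=True), capacity)

items = [5, 7, 5, 2, 4, 2, 5, 1, 6]  # total size 37, capacity 10
print(len(first_fit(items, 10)))             # 5 bins
print(len(first_fit_decreasing(items, 10)))  # 4 bins (optimal here)
```

On this instance the volume lower bound is ⌈37/10⌉ = 4 bins, so first-fit-decreasing happens to be optimal while plain first-fit wastes a bin.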
There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterized by both a volume and a value, the problem of maximizing the value of items that can fit in the bin is known as the knapsack problem.
A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing [7] since when virtual machines (VMs) are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.
Another variant of bin packing of interest in practice is the so-called online bin packing. Here the items of different volume are supposed to arrive sequentially, and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass. Each decision is without recall. In contrast, offline bin packing allows rearranging the items in the hope of achieving a better packing once additional items arrive. This of course requires additional storage for holding the items to be rearranged.
In Computers and Intractability [8]: 226 Garey and Johnson list the bin packing problem under the reference [SR1]. They define its decision variant as follows.
Instance: A finite set I of items, a size s(i) ∈ Z+ for each i ∈ I, a positive integer bin capacity B, and a positive integer K.
Question: Is there a partition of I into disjoint sets I_1, …, I_K such that the sum of the sizes of the items in each I_j is B or less?
Note that in the literature an equivalent notation is often used, where B = 1 and s(i) ∈ (0, 1] for each i ∈ I. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of K. A solution is optimal if it has minimal K. The K-value for an optimal solution for a set of items I is denoted OPT(I), or just OPT if the set of items is clear from the context.
A possible integer linear programming formulation of the problem is:

minimize K = Σ_{j=1}^{n} y_j
subject to Σ_{j=1}^{n} x_{ij} = 1 for each i ∈ {1, …, n}
Σ_{i=1}^{n} s(i)·x_{ij} ≤ B·y_j for each j ∈ {1, …, n}
y_j ∈ {0, 1}, x_{ij} ∈ {0, 1} for each i, j

where y_j = 1 if bin j is used and x_{ij} = 1 if item i is put into bin j. [9]
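The integer program can be mirrored by a tiny exact solver. The following branch-and-bound sketch is illustrative only (exponential time in the worst case): it enumerates assignments of items to bins, exactly the role played by the assignment variables of the ILP, pruning any partial packing that already uses as many bins as the best solution found so far.

```python
def min_bins(sizes, capacity):
    """Exact minimum number of bins, by depth-first branch and bound.
    Mirrors the ILP: each item is assigned to exactly one bin, and no
    bin's load may exceed the capacity.  Exponential time in general."""
    sizes = sorted(sizes, reverse=True)  # big items first: prunes sooner
    best = len(sizes)                    # trivial bound: one bin per item

    def place(i, loads):
        nonlocal best
        if len(loads) >= best:           # cannot beat the best known
            return
        if i == len(sizes):
            best = len(loads)
            return
        for j in range(len(loads)):      # try every open bin
            if loads[j] + sizes[i] <= capacity:
                loads[j] += sizes[i]
                place(i + 1, loads)
                loads[j] -= sizes[i]
        place(i + 1, loads + [sizes[i]])  # or open a new bin

    place(0, [])
    return best

print(min_bins([5, 7, 5, 2, 4, 2, 5, 1, 6], 10))  # 4
```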
The bin packing problem is strongly NP-complete. This can be proven by reducing the strongly NP-complete 3-partition problem to bin packing. [8]
Furthermore, there can be no approximation algorithm with absolute approximation ratio smaller than 3/2 unless P = NP. This can be proven by a reduction from the partition problem: [10] given an instance of Partition where the sum of all input numbers is 2T, construct an instance of bin packing in which the bin size is T. If there exists an equal partition of the inputs, then the optimal packing needs 2 bins; therefore, every algorithm with an approximation ratio smaller than 3/2 must return fewer than 3 bins, and hence exactly 2 bins. In contrast, if there is no equal partition of the inputs, then the optimal packing needs at least 3 bins.
On the other hand, bin packing is solvable in pseudo-polynomial time for any fixed number of bins K, and solvable in polynomial time for any fixed bin capacity B. [8]
To measure the performance of an approximation algorithm, two approximation ratios are considered in the literature. For a given list of items L, the number A(L) denotes the number of bins used when algorithm A is applied to list L, while OPT(L) denotes the optimum number for this list. The absolute worst-case performance ratio R_A for an algorithm A is defined as

R_A = inf { r ≥ 1 : A(L)/OPT(L) ≤ r for all lists L }.
On the other hand, the asymptotic worst-case ratio R_A^∞ is defined as

R_A^∞ = inf { r ≥ 1 : there exists N > 0 such that A(L)/OPT(L) ≤ r for all lists L with OPT(L) ≥ N }.
Equivalently, R_A^∞ is the smallest number r such that there exists some constant K, such that for all lists L: [4]

A(L) ≤ r·OPT(L) + K.
Additionally, one can restrict attention to lists in which all items have size at most α. For such lists, the bounded-size performance ratios are denoted R_A(size ≤ α) and R_A^∞(size ≤ α).
Approximation algorithms for bin packing can be classified into two categories: online heuristics, which consider the items in a given order and place them one by one inside the bins, and offline heuristics, which may modify the given list of items, e.g. by sorting the items by size.
In the online version of the bin packing problem, the items arrive one after another, and the (irreversible) decision where to place an item has to be made before knowing the next item, or even whether there will be another one. A diverse set of offline and online heuristics for bin packing was studied by David S. Johnson in his Ph.D. thesis. [11]
There are many simple algorithms that use the following general scheme:
1. If the new item fits into one of the currently open bins, put it in one of these bins.
2. Otherwise, open a new bin and put the new item in it.
The algorithms differ in the criterion by which they choose the open bin for the new item in step 1 (see the linked pages for more information):
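As an illustration, three such placement criteria can be sketched as follows. This is a minimal Python sketch assuming integer sizes; recomputing bin loads with `sum()` keeps it short but quadratic.

```python
def next_fit(items, capacity):
    """Keep only the most recently opened bin; never look back."""
    bins = [[]]
    for x in items:
        if sum(bins[-1]) + x <= capacity:
            bins[-1].append(x)
        else:
            bins.append([x])
    return bins

def best_fit(items, capacity):
    """Place each item in the feasible open bin with the LEAST room left."""
    bins = []
    for x in items:
        feasible = [b for b in bins if sum(b) + x <= capacity]
        if feasible:
            max(feasible, key=sum).append(x)
        else:
            bins.append([x])
    return bins

def worst_fit(items, capacity):
    """Place each item in the feasible open bin with the MOST room left."""
    bins = []
    for x in items:
        feasible = [b for b in bins if sum(b) + x <= capacity]
        if feasible:
            min(feasible, key=sum).append(x)
        else:
            bins.append([x])
    return bins

items = [5, 7, 5, 2, 4, 2, 5, 1, 6]
print([len(a(items, 10)) for a in (next_fit, best_fit, worst_fit)])  # [6, 5, 5]
```

On this instance the myopic next-fit pays for closing bins early, while best-fit and worst-fit both manage with one bin fewer.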
In order to generalize these results, Johnson introduced two classes of online heuristics called any-fit algorithm and almost-any-fit algorithm: [4]: 470
Better approximation ratios are possible with heuristics that are not AnyFit. These heuristics usually keep several classes of open bins, devoted to items of different size ranges (see the linked pages for more information):
Yao [15] proved in 1980 that there can be no online algorithm with an asymptotic competitive ratio smaller than 3/2. Brown [17] and Liang [18] improved this bound to 1.53635. Afterward, this bound was improved to 1.54014 by van Vliet. [19] In 2012, this lower bound was again improved by Békési and Galambos [20] to 248/161 ≈ 1.54037.
Algorithm | Approximation guarantee | Worst case list | Time-complexity |
---|---|---|---|
Next-fit (NF) | 2 [11] | [11] | O(n) |
First-fit (FF) | 1.7 [13] | [13] | O(n log n) [11] |
Best-fit (BF) | 1.7 [14] | [14] | O(n log n) [11] |
Worst-Fit (WF) | 2 [11] | [11] | O(n log n) [11] |
Almost-Worst-Fit (AWF) | 17/10 [11] | [11] | O(n log n) [11] |
Refined-First-Fit (RFF) | 5/3 [15] | (for …) [15] | [15] |
Harmonic-k (Hk) | → 1.69103 for k → ∞ [16] | [16] | [16] |
Refined Harmonic (RH) | 373/228 ≈ 1.63597 [16] | [16] | |
Modified Harmonic (MH) | 538/333 ≈ 1.61562 [21] | | |
Modified Harmonic 2 (MH2) | ≈ 1.61217 [21] | | |
Harmonic + 1 (H+1) | ≥ 1.59217 [22] | | |
Harmonic ++ (H++) | ≤ 1.58889 [22] | [22] |
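The weakness of next-fit can be made concrete with a classic family of bad lists that alternates large and tiny items. A small demonstration, assuming capacity 100:

```python
def next_fit(items, capacity):
    """Keep only the most recently opened bin; never look back."""
    bins = [[]]
    for x in items:
        if sum(bins[-1]) + x <= capacity:
            bins[-1].append(x)
        else:
            bins.append([x])
    return bins

# k items of size 50 alternating with k items of size 1, capacity 100:
# next-fit pairs each big item with one tiny item and uses k bins,
# while an optimal packing puts two big items per bin and all tiny
# items together, using k/2 + 1 bins -- so the ratio tends to 2.
k = 10
bad_list = [50, 1] * k
print(len(next_fit(bad_list, 100)))  # 10
```

Growing k makes the ratio k / (k/2 + 1) approach 2, matching the asymptotic guarantee of next-fit.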
In the offline version of bin packing, the algorithm can see all the items before starting to place them into bins. This allows it to attain improved approximation ratios.
The simplest technique used by offline approximation schemes is the following:
1. Order the input list by descending size.
2. Run an online algorithm on the ordered list.
Johnson [11] proved that any AnyFit scheme A that runs on a list ordered by descending size has an asymptotic approximation ratio satisfying

11/9 ≤ R_A^∞ ≤ 5/4.
Some methods in this family are first-fit-decreasing (FFD) and modified-first-fit-decreasing (MFFD) (see the linked pages for more information).
Fernandez de la Vega and Lueker [29] presented an asymptotic PTAS for bin packing. For every ε > 0, their algorithm finds a solution with size at most (1 + ε)·OPT + 1 and runs in time O(n log n) + O_ε(1), where O_ε denotes a function depending only on 1/ε. For this algorithm, they invented the method of adaptive input rounding: the input numbers are grouped and rounded up to the value of the maximum in each group. This yields an instance with a small number of different sizes, which can be solved exactly using the configuration linear program. [30]
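The rounding step can be sketched as follows. This is a simplified illustration of the grouping idea only: the actual scheme applies it just to the "large" items and picks the number of groups as a function of ε, neither of which is modeled here.

```python
import math

def linear_grouping(sizes, groups):
    """Sort sizes in decreasing order, cut the sorted list into
    `groups` equal-size chunks, and round every size up to the largest
    size in its chunk.  The result has at most `groups` distinct sizes
    and dominates the original instance item by item."""
    s = sorted(sizes, reverse=True)
    per_group = math.ceil(len(s) / groups)
    return [s[(i // per_group) * per_group] for i in range(len(s))]

sizes = [0.9, 0.8, 0.7, 0.65, 0.6, 0.55, 0.5, 0.45]
print(linear_grouping(sizes, 4))
# [0.9, 0.9, 0.7, 0.7, 0.6, 0.6, 0.5, 0.5] -- only 4 distinct sizes
```

Because every rounded size is at least the original, any packing of the rounded instance is also feasible for the original one, while the small number of distinct sizes makes the rounded instance tractable.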
The Karmarkar-Karp bin packing algorithm finds a solution with size at most OPT + O(log² OPT), and runs in time polynomial in n (the polynomial has a high degree, at least 8).
Rothvoss [31] presented an algorithm that generates a solution with at most OPT + O(log OPT · log log OPT) bins.
Hoberg and Rothvoss [32] improved this algorithm to generate a solution with at most OPT + O(log OPT) bins. The algorithm is randomized, and its running time is polynomial in n.
Algorithm | Approximation guarantee | Worst case instance |
---|---|---|
First-fit-decreasing (FFD) | 11/9·OPT + 6/9 [23] | [23] |
Modified-first-fit-decreasing (MFFD) | 71/60·OPT + 1 [28] | [27] |
Karmarkar and Karp | OPT + O(log² OPT) [33] | |
Rothvoss | OPT + O(log OPT · log log OPT) [31] | |
Hoberg and Rothvoss | OPT + O(log OPT) [32] | |
Martello and Toth [34] developed an exact algorithm for the 1-dimensional bin-packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002 [35] and later improved. [36]
A further improvement was presented by Schreiber and Korf in 2013. [37] The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP (branch-and-cut-and-price) algorithm by Belov and Scheithauer on problems that have fewer than 20 bins as the optimal solution. Which algorithm performs best depends on problem properties like the number of items, the optimal number of bins, unused space in the optimal solution and value precision.
A special case of bin packing is when there is a small number d of different item sizes. There can be many items of each size. This case is also called high-multiplicity bin packing, and it admits more efficient algorithms than the general problem.
Bin packing with fragmentation or fragmentable object bin packing is a variant of the bin packing problem in which it is allowed to break items into parts and put each part separately into a different bin. Breaking items into parts may improve the overall performance, for example, by minimizing the total number of bins. Moreover, the computational problem of finding an optimal schedule may become easier, as some of the optimization variables become continuous. On the other hand, breaking items apart might be costly. The problem was first introduced by Mandal, Chakrabary and Ghose. [38]
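When fragmentation is free and size-preserving, the problem collapses: a simple left-to-right filling achieves the trivial volume lower bound of ⌈(total size)/capacity⌉ bins. A minimal sketch with integer sizes:

```python
def pack_with_fragmentation(sizes, capacity):
    """Fill bins left to right, splitting an item whenever it crosses
    a bin boundary (size-preserving fragmentation, no splitting cost).
    Uses exactly ceil(total size / capacity) bins, the volume bound."""
    bins = [[]]
    room = capacity
    for s in sizes:
        while s > 0:
            if room == 0:        # current bin is full: open a new one
                bins.append([])
                room = capacity
            part = min(s, room)  # the fragment that fits in this bin
            bins[-1].append(part)
            s -= part
            room -= part
    return bins

bins = pack_with_fragmentation([7, 6, 5, 4], 10)  # total 22, capacity 10
print(len(bins))  # 3
```

This is why the interesting variants charge for fragmentation (via overhead units or costs): without such a charge, the optimum is just the volume bound.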
The problem has two main variants: in the first variant, called bin packing with size-increasing fragmentation (BP-SIF), each item may be fragmented, and overhead units are added to the size of every fragment. In the second variant, called bin packing with size-preserving fragmentation (BP-SPF), each item has a size and a cost, and fragmenting an item increases its cost but does not change its size.
Mandal, Chakrabary and Ghose [38] proved that BP-SPF is NP-hard.
Menakerman and Rom [39] showed that BP-SIF and BP-SPF are both strongly NP-hard. Despite the hardness, they present several algorithms and investigate their performance, using classic bin packing algorithms, like next-fit and first-fit decreasing, as a basis.
Bertazzi, Golden and Wang [40] introduced a variant of BP-SIF with a split rule: an item is allowed to be split in only one way, according to its size. This is useful, for example, in the vehicle routing problem. In their paper, they provide the worst-case performance bound of the variant.
Shachnai, Tamir and Yehezkeli [41] developed approximation schemes for BP-SIF and BP-SPF; a dual PTAS (a PTAS for the dual version of the problem), an asymptotic PTAS called APTAS, and a dual asymptotic FPTAS called AFPTAS for both versions.
Ekici [42] introduced a variant of BP-SPF in which some items are in conflict, and it is forbidden to pack fragments of conflicting items into the same bin. They proved that this variant, too, is NP-hard.
Cassazza and Ceselli [43] introduced a variant with no cost and no overhead, and the number of bins is fixed. However, the number of fragmentations should be minimized. They present mathematical programming algorithms for both exact and approximate solutions.
The problem of fractional knapsack with penalties was introduced by Malaguti, Monaci, Paronuzzi and Pferschy. [44] They developed an FPTAS and a dynamic program for the problem, and presented an extensive computational study comparing the performance of their models. See also: Fractional job scheduling.
An important special case of bin packing is when the item sizes form a divisible sequence (also called factored). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, then some of the heuristic algorithms for bin packing find an optimal solution. [45]
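A quick check of this phenomenon (a small demonstration, not a proof): on power-of-two sizes with a power-of-two capacity, first-fit-decreasing matches the volume lower bound, because each placed item exactly fills the remaining gap structure left by the larger items.

```python
import math

def first_fit_decreasing(items, capacity):
    """First-fit on the items sorted in decreasing order."""
    bins = []
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= capacity:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

# item sizes that are powers of two, with a power-of-two capacity,
# form a divisible sequence; here FFD meets ceil(total / capacity)
sizes = [8, 4, 4, 2, 8, 1, 2, 4, 8, 1, 1, 2]  # total 45
cap = 16
print(len(first_fit_decreasing(sizes, cap)), math.ceil(sum(sizes) / cap))  # 3 3
```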
There is a variant of bin packing in which there are cardinality constraints on the bins: each bin can contain at most k items, for some fixed integer k.
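A cardinality-constrained variant of first-fit is a one-line change to the heuristic: a bin counts as open only while it holds fewer than k items. An illustrative sketch:

```python
def first_fit_cardinality(items, capacity, k):
    """First-fit where a bin stays open only while it holds fewer than
    k items, so every bin ends up with at most k items."""
    bins = []
    for x in items:
        for b in bins:
            if len(b) < k and sum(b) + x <= capacity:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

items = [2] * 6  # six items of size 2, capacity 12
print(len(first_fit_cardinality(items, 12, 2)))  # 3: at most two items per bin
print(len(first_fit_cardinality(items, 12, 6)))  # 1: only the capacity binds
```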
There are various ways to extend the bin-packing model to more general cost and load functions:
In the bin packing problem, the size of the bins is fixed and their number can be enlarged (but should be as small as possible).
In contrast, in the multiway number partitioning problem, the number of bins is fixed and their size can be enlarged. The objective is to find a partition in which the bin sizes are as nearly equal as possible (in the variant called multiprocessor scheduling problem or minimum makespan problem, the goal is specifically to minimize the size of the largest bin).
In the inverse bin packing problem, [50] both the number of bins and their sizes are fixed, but the item sizes can be changed. The objective is to achieve the minimum perturbation to the item size vector so that all the items can be packed into the prescribed number of bins.
In the maximum resource bin packing problem, [51] the goal is to maximize the number of bins used, such that, for some ordering of the bins, no item in a later bin fits in an earlier bin. In a dual problem, the number of bins is fixed, and the goal is to minimize the total number or the total size of items placed into the bins, such that no remaining item fits into an unfilled bin.
In the bin covering problem, the bin size is bounded from below: the goal is to maximize the number of bins used such that the total size in each bin is at least a given threshold.
In the fair indivisible chore allocation problem (a variant of fair item allocation), the items represent chores, and there are different people each of whom attributes a different difficulty-value to each chore. The goal is to allocate to each person a set of chores with an upper bound on its total difficulty-value (thus, each person corresponds to a bin). Many techniques from bin packing are used in this problem too. [52]
In the guillotine cutting problem, both the items and the "bins" are two-dimensional rectangles rather than one-dimensional numbers, and the items have to be cut from the bin using end-to-end cuts.
In the selfish bin packing problem, each item is a player who wants to minimize its cost. [53]
There is also a variant of bin packing in which the cost that should be minimized is not the number of bins, but rather a certain concave function of the number of items in each bin. [48]
Other variants are two-dimensional bin packing, [54] three-dimensional bin packing, [55] and bin packing with delivery. [56]