Fair random assignment (also called probabilistic one-sided matching) is a kind of fair division problem.
In an assignment problem (also called a house-allocation problem or one-sided matching), there are m objects that have to be allocated among n agents, such that each agent receives at most one object. Examples include the assignment of jobs to workers, rooms to housemates, dormitories to students, time-slots to users of a common machine, and so on.
In general, a fair assignment may be impossible to attain. For example, if Alice and Batya both prefer the eastern room to the western room, only one of them will get it and the other will be envious. In the random assignment setting, fairness is attained using a lottery. So in the simple example above, Alice and Batya will toss a fair coin and the winner will get the eastern room.
Random assignment is mentioned already in the Bible: a lottery was used to allocate the lands of Canaan among the Tribes of Israel (Numbers 26:55).
In the US, lotteries were used to assign public lands to homesteaders (e.g. Oklahoma in 1901), and to assign radio spectrum to broadcasters (e.g. the FCC lotteries of 1981-1993). A lottery is still used to assign green cards. [1]
There are several ways to extend the "coin toss" method to situations in which there are more than two agents, and they may have different preference relations on the objects:
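One standard extension is the random priority rule (also known as random serial dictatorship): draw a uniformly random order over the agents and let each agent, in turn, take their favorite remaining object. A minimal sketch in Python (function and variable names are illustrative, not from the article):

```python
import random

def random_priority(preferences, objects, rng=random):
    """Random serial dictatorship: agents pick in a uniformly random order.

    preferences: dict mapping agent -> list of objects, best first.
    Returns a dict mapping each agent to its assigned object (or None).
    """
    agents = list(preferences)
    rng.shuffle(agents)                 # uniformly random picking order
    remaining = set(objects)
    assignment = {}
    for agent in agents:
        # take the highest-ranked object still available, if any
        choice = next((o for o in preferences[agent] if o in remaining), None)
        assignment[agent] = choice
        if choice is not None:
            remaining.remove(choice)
    return assignment
```

With two agents who both rank the eastern room first, each wins it with probability 1/2, so this reduces to the coin toss described above.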
One desirable property of a random assignment rule is Pareto efficiency (PE). There are three variants of PE:
For PE, the implications are: ex-ante → sd(possible) → ex-post.
Another desirable property is envy-freeness (EF). Again, there are three variants of EF:
For EF, the implication direction is opposite to that of efficiency: ex-post → sd(necessary) → ex-ante.
A third desirable property is truthfulness (also called strategyproofness). Again, there are three variants:
The following table compares the properties of three rules: random priority (RP), probabilistic serial (PS), and competitive equilibrium from equal incomes (CEEI). The RP and PS columns are based on [6]:
| Property | RP (#items ≤ #agents) | PS (#items ≤ #agents) | CEEI (#items ≤ #agents) | RP (#items > #agents) | PS (#items > #agents) | CEEI (#items > #agents) |
|---|---|---|---|---|---|---|
| Efficiency | Ex-post PE | Possible PE | Ex-ante PE | No | Possible PE | Ex-ante PE |
| Fairness | Weak sd-EF; ld-EF | Necessary EF | Ex-ante EF | Weak sd-EF | sd-EF | EF |
| Truthfulness | Necessary truthful | Possible sd-truthful; ld-truthful [strict prefs]; none [weak prefs] | No | sd-truthful* | No | No |
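The PS rule in the table is computed by the simultaneous-eating procedure of Bogomolnaia and Moulin: all agents "eat" their favorite remaining object at equal speed, moving on to their next favorite whenever an object is exhausted; the fraction of each object an agent eats is its assignment probability. A minimal sketch, assuming strict preferences (names are illustrative):

```python
from fractions import Fraction

def probabilistic_serial(preferences):
    """Simultaneous-eating algorithm (the Bogomolnaia-Moulin PS rule).

    preferences: dict mapping agent -> list of objects, best first (strict).
    Returns shares[agent][object] = probability, as exact Fractions.
    """
    agents = list(preferences)
    objects = {o for p in preferences.values() for o in p}
    supply = {o: Fraction(1) for o in objects}   # remaining amount of each object
    shares = {a: {} for a in agents}
    clock = Fraction(0)
    while clock < 1:
        # each agent eats its best object with positive remaining supply
        target = {}
        for a in agents:
            for o in preferences[a]:
                if supply[o] > 0:
                    target[a] = o
                    break
        if not target:
            break                                # everything is eaten
        eaters = {}
        for a, o in target.items():
            eaters.setdefault(o, []).append(a)
        # advance time until the next object is exhausted, or the clock hits 1
        dt = min([supply[o] / len(l) for o, l in eaters.items()] + [1 - clock])
        for o, l in eaters.items():
            for a in l:
                shares[a][o] = shares[a].get(o, Fraction(0)) + dt
            supply[o] -= dt * len(l)
        clock += dt
    return shares
```

For two agents who both rank x above y, each gets x and y with probability 1/2; if their favorites differ, each simply receives its favorite with probability 1.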
Some combinations of the above three properties cannot be simultaneously satisfied by any mechanism:
Both the PS and the CEEI rules compute a matrix of expected assignments, i.e., the marginal probabilities with which each agent receives each object. However, since the final allocation must be a matching, one must find a decomposition of this matrix into a lottery on matchings.
In the classic setting, in which m = n, this can be done using the Birkhoff algorithm. It can decompose any n-by-n matrix of agent-object probabilities into a convex combination of O(n²) permutation matrices, each of which represents a matching. However, the decomposition is not unique, and some decompositions may be better than others.
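The core loop of Birkhoff's algorithm repeatedly finds a perfect matching on the positive entries of the matrix and subtracts that matching with the largest feasible weight. A minimal sketch for the m = n case (illustrative only; production implementations use faster matching routines):

```python
def birkhoff_decompose(matrix, tol=1e-9):
    """Decompose a doubly stochastic n-by-n matrix into a convex
    combination of permutation matrices (Birkhoff-von Neumann).

    Returns a list of (weight, perm) pairs, where perm[i] is the
    object assigned to agent i in that matching.
    """
    n = len(matrix)
    m = [row[:] for row in matrix]          # working copy
    result = []
    total = 1.0
    while total > tol:
        # find a perfect matching on positive entries via augmenting paths
        match = [-1] * n                    # match[j] = agent holding object j

        def augment(i, seen):
            for j in range(n):
                if m[i][j] > tol and j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            if not augment(i, set()):
                raise ValueError("matrix is not doubly stochastic")
        perm = [0] * n
        for j, i in enumerate(match):
            perm[i] = j
        # subtract the matching, scaled by its smallest supporting entry
        w = min(m[i][perm[i]] for i in range(n))
        for i in range(n):
            m[i][perm[i]] -= w
        result.append((w, perm))
        total -= w
    return result
```

Each round zeroes out at least one entry of the working matrix, which is why O(n²) matchings always suffice; the perfect matching is guaranteed to exist by the Birkhoff-von Neumann theorem whenever the input is doubly stochastic.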
Budish, Che, Kojima and Milgrom [1] generalize Birkhoff's algorithm to arbitrary m and n. They also allow constraints to be added on the assignments, and identify a maximal set of conditions on the constraint structure under which decomposition remains possible. They also present a decomposition method that minimizes the variance in the utility experienced by the agents across the different matchings.
Demeulemeester, Goossens, Hermans and Leus [8] present a polynomial-time decomposition algorithm that maximizes the worst-case number of agents who receive an object. Their algorithm guarantees that the worst-case number of agents equals the expected number of agents rounded down, which is the best possible. They present another decomposition algorithm that maximizes the worst-case number of assigned agents while guaranteeing that all matchings in the decomposition are ex-post PE; this second algorithm can be used only for fractional assignments output by PS, not for those corresponding to RP. For RP, it is only possible to attain a 1/2-factor approximation to the optimal worst-case number of assigned agents. For general fractional assignments, maximizing the worst-case number of assigned agents subject to ex-post PE is NP-hard. They also present a column generation framework that can be used to optimize other worst-case criteria.
Hosseini, Larson and Cohen [6] compare RP to PS in various settings. They show that:
Tao and Cole [9] study the existence of PE and EF random allocations when the utilities are non-linear (can have complements).
Yilmaz [10] studies the random assignment problem where agents have endowments.
Shen, Wang, Zhu, Fain and Munagala [11] study the random assignment problem when agents have priorities (agents with higher priorities should get their preferred goods before agents with lower priorities), but the priorities are uncertain.
Duddy [12] studies egalitarian random assignment.