In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on n-dimensional Euclidean space $\mathbb{R}^n$. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb.
Fix natural numbers m and n. For 1 ≤ i ≤ m, let $n_i \in \mathbb{N}$ and let $c_i > 0$ so that
$$\sum_{i=1}^m c_i n_i = n.$$
Choose non-negative, integrable functions
$$f_i \in L^1\left(\mathbb{R}^{n_i}; [0, +\infty]\right)$$
and surjective linear maps
$$B_i : \mathbb{R}^n \to \mathbb{R}^{n_i}.$$
Then the following inequality holds:
$$\int_{\mathbb{R}^n} \prod_{i=1}^m f_i(B_i x)^{c_i}\, dx \le D^{-1/2} \prod_{i=1}^m \left( \int_{\mathbb{R}^{n_i}} f_i(y)\, dy \right)^{c_i},$$
where D is given by
$$D = \inf \left\{ \frac{\det\left( \sum_{i=1}^m c_i B_i^{*} A_i B_i \right)}{\prod_{i=1}^m (\det A_i)^{c_i}} \;:\; A_i \text{ is a positive-definite } n_i \times n_i \text{ matrix} \right\}.$$
Another way to state this is that the constant D is what one would obtain by restricting attention to the case in which each $f_i$ is a centered Gaussian function, namely $f_i(y) = \exp(-\langle y, A_i y \rangle)$. [1]
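For example, the Loomis–Whitney inequality in $\mathbb{R}^3$ is recovered from this statement (a standard special case, sketched here for illustration) by taking $n = 3$, $m = 3$, $n_i = 2$, $c_i = 1/2$, and $B_i : \mathbb{R}^3 \to \mathbb{R}^2$ the projection deleting the $i$-th coordinate. Then $\sum_i c_i n_i = 3 = n$, the constant is $D = 1$, and the inequality reads
$$\int_{\mathbb{R}^3} f_1(x_2, x_3)^{1/2} f_2(x_1, x_3)^{1/2} f_3(x_1, x_2)^{1/2}\, dx \le \left( \int_{\mathbb{R}^2} f_1 \int_{\mathbb{R}^2} f_2 \int_{\mathbb{R}^2} f_3 \right)^{1/2}.$$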
Consider a probability density function $p(x) = \exp(-\phi(x))$. This probability density function is said to be a log-concave measure if the function $\phi(x)$ is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode of $p$. The Brascamp–Lieb inequality gives another characterization of the compactness of $p$ by bounding the variance of any statistic $S(x)$.
Formally, let $S(x)$ be any differentiable function. The Brascamp–Lieb inequality reads:
$$\operatorname{var}_p\left(S(x)\right) \le \mathbf{E}_p\left( \nabla^T S(x)\, \left[H \phi(x)\right]^{-1}\, \nabla S(x) \right),$$
where $H\phi$ is the Hessian of $\phi$ and $\nabla$ is the nabla symbol. [2]
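As a minimal numerical sketch (assuming NumPy; the Gaussian density, covariance, and statistic below are arbitrary illustrative choices, not taken from the article), the following Monte Carlo check compares both sides of the bound for a centered Gaussian density, for which $H\phi$ is the constant matrix $\Sigma^{-1}$:

```python
# Monte Carlo check of the variance form of the Brascamp-Lieb inequality,
#   var_p(S) <= E_p[ (grad S)^T (H phi)^{-1} (grad S) ],
# for the log-concave density p(x) = exp(-phi(x)) with phi(x) = x^T Sigma^{-1} x / 2
# (a centered Gaussian), so that (H phi)^{-1} = Sigma.  The statistic S(x) = x_1^2
# is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                    # covariance of the Gaussian density
x = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)

S = x[:, 0] ** 2                                  # statistic S(x) = x_1^2
grad_S = np.column_stack([2 * x[:, 0], np.zeros(len(x))])     # grad S = (2 x_1, 0)

lhs = S.var()                                                 # var_p(S), roughly 2 * Sigma[0,0]^2
rhs = np.einsum('ni,ij,nj->n', grad_S, Sigma, grad_S).mean()  # E_p[(grad S)^T Sigma (grad S)], roughly 4 * Sigma[0,0]^2
print(f"var_p(S) = {lhs:.3f} <= bound = {rhs:.3f}")
```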
The inequality was generalized in 2008 [3] to account for both continuous and discrete cases, and for all linear maps, with precise estimates on the constant.
Definition: the Brascamp–Lieb datum (BL datum). Fix integers $d \ge 1$ and $n \ge 1$, dimensions $d_1, \dots, d_n \ge 1$, surjective linear maps $B_i : \mathbb{R}^d \to \mathbb{R}^{d_i}$, and exponents $p_1, \dots, p_n \ge 0$. The pair $(B, p)$, with $B = (B_1, \dots, B_n)$ and $p = (p_1, \dots, p_n)$, is called a Brascamp–Lieb datum (BL datum).
For any $f_i \in L^1(\mathbb{R}^{d_i})$ with $f_i \ge 0$, define
$$BL(B, p, f) = \frac{\int_{\mathbb{R}^d} \prod_{i=1}^n (f_i \circ B_i)^{p_i}}{\prod_{i=1}^n \left( \int_{\mathbb{R}^{d_i}} f_i \right)^{p_i}}.$$
Now define the Brascamp–Lieb constant for the BL datum:
$$BL(B, p) = \sup_f BL(B, p, f).$$
Theorem (BCCT, 2007) —
$BL(B, p)$ is finite if and only if $\sum_{i=1}^n p_i d_i = d$ and, for every subspace $V$ of $\mathbb{R}^d$,
$$\dim(V) \le \sum_{i=1}^n p_i \dim(B_i V).$$
$BL(B, p)$ is reached by Gaussians:
$$BL(B, p) = \sup \left\{ \left( \frac{\prod_{i=1}^n (\det A_i)^{p_i}}{\det\left( \sum_{i=1}^n p_i B_i^{*} A_i B_i \right)} \right)^{1/2} \;:\; A_i \text{ positive definite on } \mathbb{R}^{d_i} \right\},$$
that is, the supremum in the definition of $BL(B, p)$ may be computed over centered Gaussian inputs $f_i(y) = \exp(-\langle y, A_i y \rangle)$ alone.
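For illustration (a toy example, not from the cited papers), consider the datum with $d = 2$, $n = 2$, $d_1 = d_2 = 1$, the coordinate projections $B_1(x_1, x_2) = x_1$, $B_2(x_1, x_2) = x_2$, and $p_1 = p_2 = 1$. Then $\sum_i p_i d_i = 2 = d$, and every subspace $V \subseteq \mathbb{R}^2$ satisfies $\dim(V) \le \dim(B_1 V) + \dim(B_2 V)$, since a line can be annihilated by at most one of the two coordinate projections. Hence $BL(B, p)$ is finite; in fact $BL(B, p) = 1$, because Fubini's theorem gives $\int_{\mathbb{R}^2} f_1(x_1) f_2(x_2)\, dx = \int_{\mathbb{R}} f_1 \int_{\mathbb{R}} f_2$.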
Setup for the discrete case: let $G, G_1, \dots, G_n$ be finitely generated abelian groups, let $\phi_i : G \to G_i$ be group homomorphisms, and let $T(G)$ denote the torsion subgroup of $G$ (the subgroup of elements of finite order).
With this setup, we have the following (Theorem 2.4 [4], Theorem 3.12 [5]).
Theorem — If there exist some $s_1, \dots, s_n \in [0, 1]$ such that
$$\operatorname{rank}(H) \le \sum_{i=1}^n s_i \operatorname{rank}(\phi_i(H)) \quad \text{for every subgroup } H \text{ of } G,$$
then, for all non-negative functions $f_i$ on $G_i$,
$$\sum_{x \in G} \prod_{i=1}^n f_i(\phi_i(x))^{s_i} \le |T(G)| \prod_{i=1}^n \left( \sum_{y \in G_i} f_i(y) \right)^{s_i}.$$
Note that the constant $|T(G)|$ is not always tight.
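For instance (an illustrative toy case), take $G = \mathbb{Z}^2$, $G_1 = G_2 = \mathbb{Z}$, the coordinate projections $\phi_1, \phi_2$, and $s_1 = s_2 = 1$. Every subgroup $H \le \mathbb{Z}^2$ satisfies $\operatorname{rank}(H) \le \operatorname{rank}(\phi_1(H)) + \operatorname{rank}(\phi_2(H))$, the torsion subgroup $T(\mathbb{Z}^2)$ is trivial, and the inequality reduces to the identity $\sum_{x \in \mathbb{Z}^2} f_1(x_1) f_2(x_2) = \left( \sum_y f_1(y) \right) \left( \sum_y f_2(y) \right)$, so the constant is attained in this case.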
Given a BL datum $(B, p)$, the conditions for $BL(B, p) < \infty$ are
$$\sum_{i=1}^n p_i d_i = d, \qquad \dim(V) \le \sum_{i=1}^n p_i \dim(B_i V) \ \text{ for every subspace } V \text{ of } \mathbb{R}^d.$$
Thus, the subset of $p \in [0, \infty)^n$ that satisfies the above two conditions is a closed convex polytope defined by linear inequalities. This is the BL polytope.
Note that while there are infinitely many possible choices of subspace $V$ of $\mathbb{R}^d$, there are only finitely many possible values of $(\dim(V), \dim(B_1 V), \dots, \dim(B_n V))$, hence only finitely many distinct linear inequalities, so the subset is a closed convex polytope.
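As an illustration, take the Loomis–Whitney datum in $\mathbb{R}^3$: $d = 3$, $n = 3$, $d_i = 2$, and $B_i : \mathbb{R}^3 \to \mathbb{R}^2$ the projection deleting the $i$-th coordinate. The scaling condition reads $2(p_1 + p_2 + p_3) = 3$, and testing the coordinate plane $V = \operatorname{span}(e_1, e_2)$ gives $2 \le p_1 + p_2 + 2 p_3$, i.e. $p_3 \ge 1/2$; by symmetry $p_1, p_2 \ge 1/2$ as well. Together with the scaling condition this forces $p = (1/2, 1/2, 1/2)$, so the BL polytope of this datum is a single point, the Loomis–Whitney exponents.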
Similarly we can define the BL polytope for the discrete case.
The geometric Brascamp–Lieb inequality, first derived in 1976, [6] is a special case of the general inequality. It was used by Keith Ball, in 1989, to provide upper bounds for volumes of central sections of cubes. [7]
For i = 1, ..., m, let $c_i > 0$ and let $u_i \in S^{n-1}$ be a unit vector; suppose that the $c_i$ and $u_i$ satisfy
$$x = \sum_{i=1}^m c_i (x \cdot u_i)\, u_i$$
for all $x$ in $\mathbb{R}^n$. Let $f_i \in L^1(\mathbb{R}; [0, +\infty])$ for each i = 1, ..., m. Then
$$\int_{\mathbb{R}^n} \prod_{i=1}^m f_i(x \cdot u_i)^{c_i}\, dx \le \prod_{i=1}^m \left( \int_{\mathbb{R}} f_i(y)\, dy \right)^{c_i}.$$
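A simple instance, given here for illustration: in $\mathbb{R}^2$, take $m = 3$ unit vectors $u_1, u_2, u_3$ spaced at angles of $120^\circ$ and $c_1 = c_2 = c_3 = 2/3$. Then $\sum_i c_i (x \cdot u_i) u_i = x$ for all $x$, so the hypothesis holds, and the inequality becomes
$$\int_{\mathbb{R}^2} \left( f_1(x \cdot u_1)\, f_2(x \cdot u_2)\, f_3(x \cdot u_3) \right)^{2/3} dx \le \left( \int_{\mathbb{R}} f_1 \int_{\mathbb{R}} f_2 \int_{\mathbb{R}} f_3 \right)^{2/3}.$$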
The geometric Brascamp–Lieb inequality follows from the Brascamp–Lieb inequality as stated above by taking $n_i = 1$ and $B_i(x) = x \cdot u_i$. Then, for $z_i \in \mathbb{R}$,
$$B_i^{*}(z_i) = z_i u_i.$$
It follows that D = 1 in this case.
Take $n_i = n$, $B_i = \mathrm{id}$, the identity map on $\mathbb{R}^n$, replace $f_i$ by $f_i^{1/c_i}$, and let $c_i = 1/p_i$ for 1 ≤ i ≤ m. Then
$$\sum_{i=1}^m \frac{1}{p_i} = 1$$
and the log-concavity of the determinant of a positive definite matrix implies that D = 1. This yields Hölder's inequality in $\mathbb{R}^n$:
$$\int_{\mathbb{R}^n} \prod_{i=1}^m f_i(x)\, dx \le \prod_{i=1}^m \| f_i \|_{p_i}.$$
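For example, with $m = 2$ and $p_1 = p_2 = 2$ this is the Cauchy–Schwarz inequality for non-negative functions: $\int_{\mathbb{R}^n} f_1 f_2 \le \|f_1\|_2 \|f_2\|_2$.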
The Brascamp–Lieb inequality is an extension of the Poincaré inequality, which concerns only Gaussian probability distributions. [8]
The Brascamp–Lieb inequality is also related to the Cramér–Rao bound. [8] While Brascamp–Lieb is an upper bound, the Cramér–Rao bound lower-bounds the variance of $S(x)$. The Cramér–Rao bound states
$$\operatorname{var}_p\left(S(x)\right) \ge \mathbf{E}_p\left( \nabla^T S(x) \right) \left[ \mathbf{E}_p\left( H \phi(x) \right) \right]^{-1} \mathbf{E}_p\left( \nabla S(x) \right),$$
which is very similar to the Brascamp–Lieb inequality in the alternative form shown above.
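For example, for a Gaussian density $p = \mathcal{N}(0, \Sigma)$, so that $H\phi(x) = \Sigma^{-1}$, and a linear statistic $S(x) = v^T x$, both the Brascamp–Lieb upper bound and the Cramér–Rao lower bound evaluate to $v^T \Sigma v$, which equals $\operatorname{var}_p(S)$; the two inequalities are thus simultaneously tight in this case.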