The range of a nonatomic vector measure in R^3

Using the Neyman-Pearson lemma we give a method for finding the range of a vector measure composed of three nonatomic probability measures. We then use this result to construct an explicit form of this range in the case where the densities corresponding to these measures satisfy an additional assumption.


Introduction
Let (X, B) be a measurable space and let µ_1, ..., µ_n be nonatomic probability measures on (X, B) possessing densities f_1, ..., f_n with respect to some measure ν. The subset R(µ_1, ..., µ_n) of R^n, defined by

R(µ_1, ..., µ_n) = { (µ_1(A), ..., µ_n(A)) : A ∈ B },

is called the range of the vector measure (or a zonoid). The Lyapounov Convexity Theorem (see Lyapounov [1]) states that the set R(µ_1, ..., µ_n) is convex and compact in R^n. The problem is how to find this set for given nonatomic probability measures µ_1, ..., µ_n. In the case n = 2 this question was answered by Legut and Wilczyński [2]. For the sake of completeness we recall this result here. Then we give some partial results concerning the structure of the range of the vector measure in R^3.
Without loss of generality, we suppose throughout that (X, B) = (R, B_R), where B_R is the Borel σ-field on the real line R. Moreover, we assume that the measures µ_1, µ_2 and µ_3 are absolutely continuous with respect to the Lebesgue measure λ and we put f_1 = dµ_1/dλ, f_2 = dµ_2/dλ and f_3 = dµ_3/dλ for the corresponding Lebesgue probability density functions (p.d.f.'s). We denote by X_1, X_2 and X_3 three random variables, defined on some probability space (Ω, F, P), which are distributed according to f_1, f_2 and f_3, respectively. The corresponding cumulative distribution functions (c.d.f.'s) are denoted by F_1, F_2 and F_3. We define the quantile function F_1^{-1} (the quasi inverse of F_1) by

F_1^{-1}(t) = inf{ x ∈ R : F_1(x) ≥ t },  t ∈ (0, 1).

Finally, we use the symbol R(f_1, f_2, f_3) instead of R(µ_1, µ_2, µ_3) for the range of the vector measure (µ_1, µ_2, µ_3), i.e. we put

R(f_1, f_2, f_3) = { (∫_A f_1 dλ, ∫_A f_2 dλ, ∫_A f_3 dλ) : A ∈ B_R }.

In this section we recall the results of Legut and Wilczyński [2] concerning the set R(f_1, f_2). We will use these facts when studying the Lyapounov set in R^3.
The compactness of the set R(f_1, f_2) implies that for any x ∈ [0, 1] there exists a set D_x ∈ B_R such that

µ_1(D_x) = x  and  µ_2(D_x) = sup{ µ_2(A) : A ∈ B_R, µ_1(A) = x }.   (2)

Let the function G : [0, 1] → [0, 1] be defined by G(x) = µ_2(D_x). It is clear that for each x ∈ [0, 1], the point (x, G(x)) lies on the boundary of the set R(f_1, f_2). To obtain the Lyapounov set R(f_1, f_2) explicitly, it is necessary to determine each of the sets D_x, x ∈ [0, 1]. To do this we use the Neyman-Pearson lemma (see Lehmann and Romano [3], p. 60). This lemma implies the following corollary.
Corollary 1. For each x ∈ [0, 1], a set D_x satisfies (2) if and only if there exists a number k ≥ 0, which depends on x, such that

{ t : f_2(t) > k f_1(t) } ⊆ D_x ⊆ { t : f_2(t) ≥ k f_1(t) }   λ almost everywhere.   (3)

Obviously, D_x is uniquely determined (up to sets of Lebesgue measure zero) except on the set on which f_2(t) = k f_1(t). On this set, D_x can be defined arbitrarily provided that it has µ_1-measure x.
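As a numerical sketch (our illustration, not part of the paper), the recipe of Corollary 1 can be applied on a grid: to maximize µ_2(D_x) subject to µ_1(D_x) = x, fill D_x with the points where the likelihood ratio f_2/f_1 is largest. The densities f_1 uniform on (0, 1) and f_2(t) = 2t are assumptions chosen for illustration.

```python
import numpy as np

# Approximate G(x) = sup{ mu2(A) : mu1(A) = x } by the Neyman-Pearson recipe:
# put into D_x the points where the ratio f2/f1 is largest, until D_x has
# mu1-measure x.  Assumed example densities: f1 uniform on (0,1), f2(t) = 2t.

def G_numeric(x, n=200_000):
    t = (np.arange(n) + 0.5) / n          # midpoint grid on (0, 1)
    f1 = np.ones(n)                        # f1 = uniform density
    f2 = 2.0 * t                           # f2(t) = 2t
    order = np.argsort(-(f2 / f1))         # largest likelihood ratio first
    mass1 = np.cumsum(f1[order]) / n       # running mu1-measure of D_x
    mass2 = np.cumsum(f2[order]) / n       # running mu2-measure of D_x
    k = np.searchsorted(mass1, x)          # stop when mu1(D_x) reaches x
    return mass2[min(k, n - 1)]

# Here the ratio is increasing, so D_x = (1 - x, 1) and
# G(x) = mu2((1 - x, 1)) = 1 - (1 - x)^2; the grid approximation matches it.
for x in (0.25, 0.5, 0.75):
    print(x, G_numeric(x), 1.0 - (1.0 - x) ** 2)
```

Since the ratio 2t is increasing in t, this reproduces the closed form G(x) = 1 − (1 − x)^2 that the exact construction gives for this pair of densities.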
In some cases the function G, and hence the shape of the set R(f_1, f_2), can easily be found. This is illustrated in the proposition below, which follows immediately from (3) (cf. Legut and Wilczyński [2]).
Theorem 1. Let F_1^{-1} be the inverse function of F_1 and let r be the likelihood ratio given by

r(x) := f_2(F_1^{-1}(x)) / f_1(F_1^{-1}(x)),  x ∈ (0, 1).   (4)

Then the following statements hold:
1. If the ratio r(x) is decreasing in x on (0, 1), then D_x = (−∞, F_1^{-1}(x)).
2. If the ratio r(x) is increasing in x on (0, 1), then D_x = (F_1^{-1}(1 − x), ∞).
3. If the ratio r(x) is symmetric about x_0 = 1/2 and decreasing in x on (0, 1/2), then D_x = (−∞, F_1^{-1}(x/2)) ∪ (F_1^{-1}(1 − x/2), ∞).
4. If the ratio r(x) is symmetric about x_0 = 1/2 and increasing in x on (0, 1/2), then D_x = (F_1^{-1}((1 − x)/2), F_1^{-1}((1 + x)/2)).

ITM Web of Conferences 20, 03004 (2018) https://doi.org/10.1051/itmconf/20182003004 ICM 2018

When the likelihood ratio r, defined by (4), satisfies none of the above properties, finding the shape of the set R(f_1, f_2) is much more complicated. However, Legut and Wilczyński [2] proved that for any Lebesgue p.d.f.'s f_1, f_2 the set R(f_1, f_2) coincides with the range R(f_1*, f_2*) of a pair of densities on (0, 1), with f_1* uniform and f_2* nonincreasing. To describe the relationship between f_2* and the densities f_1, f_2 we denote by H the survival function of the random variable r(X_1), i.e. we put

H(y) = P(r(X_1) > y),  y ≥ 0.   (5)

Let H^{-1} : (0, 1) → [0, ∞) be the quasi inverse of H, defined by

H^{-1}(u) = inf{ y ≥ 0 : H(y) ≤ u }.   (6)

Clearly, H^{-1} is a nonnegative, nonincreasing and right continuous function on the interval (0, 1). Moreover, if U is uniformly distributed on (0, 1), then the random variables H^{-1}(U) and r(X_1) have the same distribution (see Shorack [4], pp. 111-117). Hence, putting f_2*(u) = H^{-1}(u) for u ∈ (0, 1), we obtain a nonincreasing right continuous function with ∫_0^1 f_2*(u) du = E r(X_1). These useful properties of H^{-1}, and hence of f_2*, were used in Legut and Wilczyński [2] to prove the following theorem.
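Before stating the theorem, the construction of H, H^{-1} and f_2* can be sketched numerically (our illustration, not from the paper; the densities f_1 uniform on (0, 1) and f_2(t) = 2t are assumptions, for which r(X_1) = 2X_1).

```python
import numpy as np

# Survival function of r(X_1) = 2*X_1 with X_1 uniform on (0,1):
# H(y) = P(2*X_1 > y) = 1 - y/2 on [0, 2].
def H(y):
    return np.clip(1.0 - y / 2.0, 0.0, 1.0)

# Quasi inverse on a grid: H_inv(p) = inf{ y >= 0 : H(y) <= p }.
GRID = np.linspace(0.0, 2.0, 100_001)

def H_inv(p):
    return GRID[np.argmax(H(GRID) <= p)]   # first grid point with H(y) <= p

# f_2*(u) = H_inv(u) is nonincreasing; here it equals 2(1 - u), so it
# integrates to 1 over (0, 1), i.e. it is a Lebesgue p.d.f. on (0, 1).
u = np.linspace(1e-3, 1.0 - 1e-3, 999)
f2_star = np.array([H_inv(v) for v in u])
print(float((f2_star * (u[1] - u[0])).sum()))   # Riemann sum, approximately 1
```

If U is uniform on (0, 1), sampling H_inv(U) reproduces the distribution of r(X_1), which is the property quoted from Shorack [4].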
Theorem 2. Let f_1, f_2 be fixed probability Lebesgue densities on (R, B_R) and let f_1* and f_2* be the corresponding Lebesgue densities defined above. Then

R(f_1, f_2) = R(f_1*, f_2*) = { (x, y) : x ∈ [0, 1], 1 − G(1 − x) ≤ y ≤ G(x) },

where the function G : [0, 1] → [0, 1] is given by

G(x) = ∫_0^x f_2*(u) du.   (7)

Proof. (Rough sketch.) The proof is based on the following two facts. First, r(X_1) and f_2*(U) have the same distribution, which implies that E[r(X_1) I(r(X_1) > k)] = E[f_2*(U) I(f_2*(U) > k)] for every k ≥ 0, where I(A) represents the indicator function of the event A. Second, the function f_2* is nonincreasing and right continuous on the interval (0, 1), which implies that for each y ≥ 0 there exists s ∈ [0, 1] such that { u ∈ (0, 1) : f_2*(u) > y } = (0, s).

Remark 1. The function G defined by (7) is the c.d.f. of a random variable which is continuously spread over the interval (0, 1) with the density f_2* and takes the value 0 with probability 1 − ∫_0^1 f_2*(u) du.

Auxiliary results
We start this section by introducing some auxiliary results which are analogous to those in Theorem 1 and Corollary 1.
The compactness of the set R(f_1, f_2, f_3) implies that for any point (x, y) from R(f_1, f_2) there exists a set D_{x,y} ∈ B such that

µ_1(D_{x,y}) = x,  µ_2(D_{x,y}) = y  and  µ_3(D_{x,y}) = sup{ µ_3(A) : A ∈ B, µ_1(A) = x, µ_2(A) = y }.   (8)

Let the function G_3 : R(f_1, f_2) → [0, 1] be defined by

G_3(x, y) = µ_3(D_{x,y}).   (9)

This implies the following result, the proof of which is trivial.

Theorem 3. R(f_1, f_2, f_3) = { (x, y, z) : (x, y) ∈ R(f_1, f_2), 1 − G_3(1 − x, 1 − y) ≤ z ≤ G_3(x, y) }.   (10)
The last theorem provides an explicit description of the set R(f_1, f_2, f_3) only in the case when the function G_3 can be given explicitly. The next result, which follows from the Neyman-Pearson lemma, is helpful in finding the structure of each of the sets D_{x,y}, (x, y) ∈ R(f_1, f_2). Clearly, knowledge of these structures implies knowledge of the function G_3.
Corollary 2. Suppose that (x, y) is an inner point of the Lyapounov set R(f_1, f_2). Then a set D_{x,y} satisfies (8) if and only if there exist constants k_1, k_2, which depend on x, y, such that λ almost everywhere

{ t : f_3(t) > k_1 f_1(t) + k_2 f_2(t) } ⊆ D_{x,y} ⊆ { t : f_3(t) ≥ k_1 f_1(t) + k_2 f_2(t) }.

It follows from Theorem 2 that in the problem of finding the shape of R(f_1, f_2), one can assume without loss of generality that f_1 is the uniform density on (0, 1). The next lemma shows that an analogous result holds for the case of n = 3.
To prove that R(f_1, f_2, f_3) = R(f̃_1, f̃_2, f̃_3) we fix two real numbers k_1, k_2 and define the sets

A = { x : f_3(x) > k_1 f_1(x) + k_2 f_2(x) },  Ã = { u : f̃_3(u) > k_1 f̃_1(u) + k_2 f̃_2(u) }.

From Corollary 2 we deduce that the points (∫_A f_1(x) dx, ∫_A f_2(x) dx, ∫_A f_3(x) dx) and (∫_Ã f̃_1(u) du, ∫_Ã f̃_2(u) du, ∫_Ã f̃_3(u) du) lie on the upper boundaries of the sets R(f_1, f_2, f_3) and R(f̃_1, f̃_2, f̃_3), respectively. Moreover, for each k = 1, 2, 3, the following equality holds:

∫_A f_k(x) dx = ∫_Ã f̃_k(u) du.

The same reasoning applies when > in the definitions of A and Ã is replaced by ≥. This completes the proof.
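The role of the sets A in this proof can be visualized numerically (our sketch, with assumed example densities f_1 = 1, f_2(t) = 2t, f_3(t) = 3t^2 on (0, 1)): by Corollary 2, sweeping the pair (k_1, k_2) and forming A = { t : f_3(t) > k_1 f_1(t) + k_2 f_2(t) } traces points of the upper boundary of R(f_1, f_2, f_3).

```python
import numpy as np

# Assumed example densities on (0, 1), for illustration only.
n = 100_000
t = (np.arange(n) + 0.5) / n
f1, f2, f3 = np.ones(n), 2.0 * t, 3.0 * t ** 2

def boundary_point(k1, k2):
    # Neyman-Pearson type set from Corollary 2 and its vector-measure value.
    A = f3 > k1 * f1 + k2 * f2
    return (f1[A].sum() / n, f2[A].sum() / n, f3[A].sum() / n)

# Sweeping (k1, k2) yields a cloud of upper-boundary points of R(f1, f2, f3).
pts = [boundary_point(k1, k2)
       for k1 in np.linspace(-2.0, 3.0, 11) for k2 in np.linspace(-2.0, 3.0, 11)]
print(len(pts))

# A very negative k1 gives A = (0, 1), i.e. the corner (1, 1, 1) of the range.
x0, y0, z0 = boundary_point(-5.0, 0.0)
print(round(x0, 3), round(y0, 3), round(z0, 3))
```

Each point is a value (µ_1(A), µ_2(A), µ_3(A)) of the vector measure, so all coordinates lie in [0, 1].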

Main result
The arguments given in Section 3.1 show that to find the shape of R(f_1, f_2, f_3) we can work throughout under the following assumption: Assumption A1. The functions f_1, f_2, f_3 are Lebesgue p.d.f.'s on (0, 1) and f_1 is the uniform density on (0, 1), i.e. f_1(x) = I_(0,1)(x), x ∈ R.
Unfortunately, in this general setting the problem of finding the shape of R(f_1, f_2, f_3) seems to be quite difficult. Therefore, to simplify the problem, we later impose two additional assumptions on the densities f_1, f_2, f_3.
Theorem 2 implies that to find the Lyapounov set in R^2 we can assume that one of the densities corresponds to the uniform distribution on (0, 1) while the second is a nonincreasing function on (0, 1). The following theorem shows that an analogous result holds for the Lyapounov set in R^3. To state this result we adopt the notation used in Section 2. Let U be a random variable with the uniform distribution over the interval (0, 1). We write H_2 for the survival function of the random variable f_2(U) and we denote by H_2^{-1} the quasi inverse of H_2 (cf. (5) and (6)). Moreover, we define the function f_2* : (0, 1) → [0, ∞) by

f_2*(u) = H_2^{-1}(u)  for all u ∈ (0, 1).
Using the same arguments as those used for H^{-1} (see Section 2) we obtain that f_2* is a Lebesgue p.d.f. on (0, 1) and that f_2*(U) and f_2(U) have the same distribution. With this notation we have the following result.

Theorem 4. Let f_1, f_2, f_3 be any Lebesgue p.d.f.'s on (0, 1) such that f_1 is the uniform density on (0, 1). If there exists a measurable function φ such that

f_3(x) = φ(f_2(x))  for λ-almost all x ∈ (0, 1),   (11)

then R(f_1, f_2, f_3) = R(f_1, f_2*, f_3*), where f_3* = φ ∘ f_2*.

Proof. Let k_1, k_2 be any real numbers and let A and A* be two subsets of (0, 1), given by

A = { x : f_3(x) > k_1 + k_2 f_2(x) },  A* = { u : f_3*(u) > k_1 + k_2 f_2*(u) }

(here k_1 f_1 = k_1, since f_1 is uniform). Corollary 2 implies that to prove the theorem it suffices to show that for each j = 1, 2, 3,

∫_A f_j(x) dx = ∫_{A*} f_j*(u) du,

where f_1* = f_1. This equality holds because the random vectors Y = (f_1(U), f_2(U), f_3(U)) and Y* = (f_1(U), f_2*(U), f_3*(U)) have the same distribution and hence, for each j = 1, 2, 3,

∫_A f_j(x) dx = E[f_j(U) I(Y ∈ B)] = E[f_j*(U) I(Y* ∈ B)] = ∫_{A*} f_j*(u) du,

where B = { (y_1, y_2, y_3) ∈ R^3 : y_3 > k_1 y_1 + k_2 y_2 }. Note that condition (11) is satisfied when the density f_2 is one-to-one on (0, 1) or, more generally, when the random variable f_3(U) is measurable with respect to the σ-field generated by the random variable f_2(U).
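The hypothesis of Theorem 4 can be checked on a toy example (our assumptions, not the paper's): with f_2(x) = 2x and the strictly convex φ(v) = 3v^2/4, the condition f_3 = φ(f_2) holds for f_3(x) = 3x^2, and the rearranged densities are f_2*(u) = 2(1 − u) and f_3*(u) = φ(f_2*(u)) = 3(1 − u)^2.

```python
import numpy as np

# Toy example (illustrative assumptions): f1 uniform on (0,1), f2(x) = 2x,
# phi(v) = 3*v*v/4, hence f3(x) = phi(f2(x)) = 3x^2, and the rearranged
# densities are f2*(u) = 2(1-u), f3*(u) = phi(f2*(u)) = 3(1-u)^2.
rng = np.random.default_rng(0)
U = rng.uniform(size=1_000_000)

phi = lambda v: 3.0 * v * v / 4.0
f2, f3 = 2.0 * U, 3.0 * U ** 2
f2s, f3s = 2.0 * (1.0 - U), 3.0 * (1.0 - U) ** 2

# Condition (11): f3 = phi(f2) pointwise, likewise for the starred densities.
assert np.allclose(f3, phi(f2)) and np.allclose(f3s, phi(f2s))

# (f2(U), f3(U)) and (f2*(U), f3*(U)) agree in distribution; comparing a
# mixed moment is a partial Monte Carlo check: E[2U * 3U^2] = 6/4 = 1.5.
print(float(np.mean(f2 * f3)), float(np.mean(f2s * f3s)))
```

Equality of the mixed moments is of course only a necessary condition for the distributional equality used in the proof, but it illustrates why the ranges of the two vector measures coincide.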
We use the last theorem to prove the main result of the paper.
Theorem 5. Under the assumptions of Theorem 4, let φ be a strictly convex function on [0, ∞). Then the upper boundary of R(f_1, f_2, f_3) is the parametric surface of the form:

r(s, t) = ( s + 1 − t,  F_2*(s) + 1 − F_2*(t),  F_3*(s) + 1 − F_3*(t) ),  0 ≤ s ≤ t ≤ 1,

where F_2* and F_3* are the distribution functions corresponding to the densities f_2* and f_3*, respectively.
Proof. Since R(f_1, f_2, f_3) = R(f_1*, f_2*, f_3*) by Theorem 4, it suffices to show that the surface r is the upper boundary of R(f_1*, f_2*, f_3*). For this purpose we first show that this boundary is generated by sets from the family A* = { (0, s) ∪ (t, 1) : 0 ≤ s ≤ t ≤ 1 }.

Let (∫_{A*} f_1*(u) du, ∫_{A*} f_2*(u) du, ∫_{A*} f_3*(u) du) be any point which belongs to the upper boundary of the set R(f_1*, f_2*, f_3*). Suppose first that (∫_{A*} f_1*(u) du, ∫_{A*} f_2*(u) du) lies on the upper boundary of R(f_1*, f_2*). Then, by Theorem 2, there exists a real number k such that A* = { u : f_2*(u) > k } almost everywhere λ. Since f_2* is nonincreasing, there exists s ∈ (0, 1) such that A* = (0, s) almost everywhere λ. Using the same argument, one can show that if (∫_{A*} f_1*(u) du, ∫_{A*} f_2*(u) du) belongs to the lower boundary of the set R(f_1*, f_2*) then A* = (t, 1) for some t ∈ (0, 1). Suppose now that (∫_{A*} f_1*(u) du, ∫_{A*} f_2*(u) du) lies in the interior of R(f_1*, f_2*). Then, by Corollary 2, there exist real numbers k_1, k_2 such that A* = { u : f_3*(u) > k_1 + k_2 f_2*(u) } almost everywhere λ. Since the function ψ(v) = φ(v) − k_1 − k_2 v is strictly convex on [0, ∞), the set { v ≥ 0 : ψ(v) > 0 } is of one of the forms [0, a) ∪ (b, ∞), [0, a) or (b, ∞). We consider only the first case, because the other two can be treated by analogy. So, since f_2* is nonincreasing, A* = { u : f_2*(u) < a } ∪ { u : f_2*(u) > b } = (0, s) ∪ (t, 1) almost everywhere λ.

To illustrate more explicitly how Theorem 5 can be used in deriving the shape of R(f_1, f_2, f_3), we consider the following example.

This implies that the upper boundary of R(f_1, f_2, f_3) is the parametric surface given by Theorem 5 with the corresponding F_2* and F_3*. To determine this boundary explicitly, we need to find the function G_3, defined by (9). Let (x, y) be any point from R(f_1, f_2). Since (x, y, G_3(x, y)) lies on this boundary, we must have

x = s + 1 − t,  y = F_2*(s) + 1 − F_2*(t).

Solving this system of equations we get s and t as functions of (x, y), and hence G_3(x, y) = F_3*(s) + 1 − F_3*(t). Using this solution in (10) we obtain the explicit form of R(f_1, f_2, f_3).
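For the toy densities used earlier (our assumptions, not the paper's example: f_2*(u) = 2(1 − u) and f_3*(u) = 3(1 − u)^2, so F_2*(x) = 1 − (1 − x)^2 and F_3*(x) = 1 − (1 − x)^3), the parametric surface of Theorem 5 can be evaluated directly; each point r(s, t) is the value of the vector measure on the set (0, s) ∪ (t, 1).

```python
# Assumed rearranged c.d.f.'s for the toy example:
# F2*(x) = 1 - (1-x)^2, F3*(x) = 1 - (1-x)^3.
F2s = lambda x: 1.0 - (1.0 - x) ** 2
F3s = lambda x: 1.0 - (1.0 - x) ** 3

def r(s, t):
    # Value of the vector measure on (0, s) u (t, 1) for 0 <= s <= t <= 1.
    return (s + 1.0 - t, F2s(s) + 1.0 - F2s(t), F3s(s) + 1.0 - F3s(t))

# s = t gives the degenerate set (0, 1), i.e. a point near the corner (1, 1, 1):
print(r(0.3, 0.3))

# s = 0 gives the one-sided sets (t, 1), e.g. (0.5, 1 - F2*(0.5), 1 - F3*(0.5)):
print(r(0.0, 0.5))
```

Sweeping (s, t) over 0 ≤ s ≤ t ≤ 1 produces the whole upper boundary; the lower boundary follows by taking complements, as in (10).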