
Solutions Manual for

Statistical Inference, Second Edition

George Casella, University of Florida; Roger L. Berger, North Carolina State University; Damaris Santana, University of Florida

Second Edition 3-13

c. (i) h(x) = (1/x) I_{(0,∞)}(x), c(α) = α^α/Γ(α), α > 0, w1(α) = α, w2(α) = α, t1(x) = log(x), t2(x) = -x.

(ii) A line.
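A quick numerical check, not from the manual: in the gamma(α, 1/α) case above, the product h(x) c(α) exp(w1(α)t1(x) + w2(α)t2(x)) should reproduce the pdf. A Python sketch:

```python
import math

def gamma_pdf(x, alpha):
    # gamma(alpha, beta) pdf with beta = 1/alpha
    beta = 1.0 / alpha
    return x**(alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta**alpha)

def factored(x, alpha):
    # h(x) * c(alpha) * exp(w1*t1 + w2*t2) with the terms identified above
    h = 1.0 / x
    c = alpha**alpha / math.gamma(alpha)
    return h * c * math.exp(alpha * math.log(x) + alpha * (-x))

for alpha in (0.5, 1.0, 2.5):
    for x in (0.1, 1.0, 3.0):
        assert abs(gamma_pdf(x, alpha) - factored(x, alpha)) < 1e-12
```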

d. (i) h(x) = C e^{-x^4} I_{(-∞,∞)}(x), c(θ) = e^{-θ^4}, -∞ < θ < ∞, w1(θ) = θ, w2(θ) = θ^2, w3(θ) = θ^3, t1(x) = 4x^3, t2(x) = -6x^2, t3(x) = 4x.

(ii) The curve is a spiral in 3-space.

(iii) A good picture can be generated with the Mathematica statement

ParametricPlot3D[{t, t^2, t^3}, {t, 0, 1}, ViewPoint -> {1, -2, 2.5}].

3.35 a. In Exercise 3.34(a), w1(λ) = 1/(2λ), and for a n(e^θ, e^θ), w1(θ) = 1/(2e^θ).

b. EX = μ = αβ, then β = μ/α. Therefore

h(x) = (1/x) I_{(0,∞)}(x), c(α) = α^α/(Γ(α)μ^α), α > 0, w1(α) = α, w2(α) = α/μ, t1(x) = log(x), t2(x) = -x.

c. From (b), (α1, . . . , αn, β1, . . . , βn) = (α1, . . . , αn, α1/μ, . . . , αn/μ).
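The reparametrization β = μ/α fixes the mean at μ, which can be checked by numerically integrating x·f(x). A Python sketch (crude trapezoid rule; grid and cutoff are my choices, not the manual's):

```python
import math

def gamma_pdf(x, alpha, mu):
    beta = mu / alpha          # beta fixed so that EX = alpha*beta = mu
    return x**(alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta**alpha)

def mean_numeric(alpha, mu, hi=80.0, n=80000):
    # trapezoid rule for EX = integral of x f(x) over (0, hi]
    h = hi / n
    s = 0.0
    for i in range(1, n + 1):
        x = i * h
        w = 0.5 if i == n else 1.0
        s += w * x * gamma_pdf(x, alpha, mu) * h
    return s

assert abs(mean_numeric(2.0, 3.0) - 3.0) < 1e-3
assert abs(mean_numeric(5.0, 0.8) - 0.8) < 1e-3
```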

The pdf (1/σ)f((x-μ)/σ) is symmetric about μ because, for any ε > 0,

(1/σ) f(((μ+ε)-μ)/σ) = (1/σ) f(ε/σ) = (1/σ) f(-ε/σ) = (1/σ) f(((μ-ε)-μ)/σ).

Thus, by Exercise 2.26b, μ is the median.
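The symmetry argument can be illustrated numerically with any symmetric f; the sketch below uses the standard logistic density (my example, not the manual's):

```python
import math

def f(z):  # standard logistic density, symmetric about 0
    return math.exp(-z) / (1 + math.exp(-z))**2

def g(x, mu, sigma):  # location-scale density (1/sigma) f((x - mu)/sigma)
    return f((x - mu) / sigma) / sigma

mu, sigma = 1.7, 0.6
for eps in (0.1, 0.5, 2.0):
    # symmetry about mu: g(mu + eps) == g(mu - eps)
    assert abs(g(mu + eps, mu, sigma) - g(mu - eps, mu, sigma)) < 1e-12
# the cdf at mu is 1/2 (logistic cdf is 1/(1 + e^{-z})), so mu is the median
cdf_at_mu = 1 / (1 + math.exp(-(mu - mu) / sigma))
assert abs(cdf_at_mu - 0.5) < 1e-12
```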

P (X > xα) = P (σZ + μ > σzα + μ) = P (Z > zα) by Theorem 3.5.6.

First take μ = 0 and σ = 1.

a. The pdf is symmetric about 0, so 0 must be the median. Verifying this, write

P(Z ≥ 0) = ∫_0^∞ (1/π) 1/(1+z^2) dz = (1/π) tan^{-1}(z) |_0^∞ = (1/π)(π/2 - 0) = 1/2.

b. P(Z ≥ 1) = (1/π) tan^{-1}(z) |_1^∞ = (1/π)(π/2 - π/4) = 1/4. By symmetry this is also equal to P(Z ≤ -1).

Writing z = (x-μ)/σ establishes P(X ≥ μ) = 1/2 and P(X ≥ μ + σ) = 1/4.
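These Cauchy probabilities are easy to confirm with math.atan; a quick Python check (mine, not the manual's):

```python
import math

def cauchy_sf(t):
    # P(Z >= t) for standard Cauchy: 1/2 - arctan(t)/pi
    return 0.5 - math.atan(t) / math.pi

assert abs(cauchy_sf(0) - 0.5) < 1e-12    # median at 0
assert abs(cauchy_sf(1) - 0.25) < 1e-12   # P(Z >= 1) = 1/4
# for X = sigma*Z + mu this gives P(X >= mu) = 1/2 and P(X >= mu + sigma) = 1/4
```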

Let X ~ f(x) have mean μ and variance σ^2. Let Z = (X - μ)/σ. Then

EZ = (1/σ) E(X - μ) = 0

and

VarZ = Var((X - μ)/σ) = (1/σ^2) Var(X - μ) = (1/σ^2) VarX = σ^2/σ^2 = 1.

Then compute the pdf of Z, f_Z(z) = f_X(σz + μ)·σ = σ f_X(σz + μ), and use f_Z(z) as the standard pdf.
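A numerical illustration (mine): standardizing X ~ exponential(β), which has μ = σ = β, gives f_Z(z) = σ f_X(σz + μ) = e^{-(z+1)} on z ≥ -1, which should have total mass 1, mean 0, and variance 1. A trapezoid-rule check:

```python
import math

beta = 2.0   # X ~ exponential(beta): f_X(x) = (1/beta) e^{-x/beta}, mu = sigma = beta

def f_X(x):
    return math.exp(-x / beta) / beta if x >= 0 else 0.0

def f_Z(z):  # f_Z(z) = sigma * f_X(sigma*z + mu) with mu = sigma = beta
    return beta * f_X(beta * z + beta)

# trapezoid rule on [-1, 60]
n, lo, hi = 200000, -1.0, 60.0
h = (hi - lo) / n
m0 = m1 = m2 = 0.0
for i in range(n + 1):
    z = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    p = w * f_Z(z) * h
    m0 += p; m1 += z * p; m2 += z * z * p

assert abs(m0 - 1.0) < 1e-3   # total mass 1
assert abs(m1) < 1e-3         # EZ = 0
assert abs(m2 - 1.0) < 1e-2   # VarZ = 1 (since EZ = 0, m2 is the variance)
```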

a. This is a special case of Exercise 3.42a.

b. This is a special case of Exercise 3.42b.

3.42 a. Let θ1 > θ2. Let X1 ~ f(x - θ1) and X2 ~ f(x - θ2). Let F(z) be the cdf corresponding to f(z) and let Z ~ f(z). Then

F(x | θ1) = P(X1 ≤ x) = P(Z + θ1 ≤ x) = P(Z ≤ x - θ1) = F(x - θ1)
≤ F(x - θ2) = P(Z ≤ x - θ2) = P(Z + θ2 ≤ x) = P(X2 ≤ x) = F(x | θ2).


The inequality is because x - θ2 > x - θ1, and F is nondecreasing. To get strict inequality for some x, let (a, b] be an interval of length θ1 - θ2 with P(a < Z ≤ b) = F(b) - F(a) > 0. Let x = a + θ1. Then

F(x | θ1) = F(x - θ1) = F(a + θ1 - θ1) = F(a)
< F(b) = F(a + θ1 - θ2) = F(x - θ2) = F(x | θ2).

b. Let σ1 > σ2. Let X1 ~ f(x/σ1) and X2 ~ f(x/σ2). Let F(z) be the cdf corresponding to f(z) and let Z ~ f(z). Then, for x > 0,

F(x | σ1) = P(X1 ≤ x) = P(σ1 Z ≤ x) = P(Z ≤ x/σ1) = F(x/σ1)
≤ F(x/σ2) = P(Z ≤ x/σ2) = P(σ2 Z ≤ x) = P(X2 ≤ x) = F(x | σ2).

The inequality is because x/σ2 > x/σ1 (because x > 0 and σ1 > σ2 > 0), and F is nondecreasing. For x ≤ 0, F(x | σ1) = P(X1 ≤ x) = 0 = P(X2 ≤ x) = F(x | σ2). To get strict inequality for some x, let (a, b] be an interval such that a > 0, b/a = σ1/σ2 and P(a < Z ≤ b) = F(b) - F(a) > 0. Let x = aσ1. Then

F(x | σ1) = F(x/σ1) = F(aσ1/σ1) = F(a)
< F(b) = F(aσ1/σ2) = F(x/σ2) = F(x | σ2).
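Both orderings can be spot-checked with a concrete strictly increasing cdf; the sketch below uses the standard logistic cdf (an illustrative choice, not from the manual):

```python
import math

def F(z):  # a strictly increasing cdf (standard logistic)
    return 1 / (1 + math.exp(-z))

# location family: theta1 > theta2 gives F(x - theta1) <= F(x - theta2)
t1, t2 = 2.0, 0.5
for x in (-3.0, 0.0, 1.0, 4.0):
    assert F(x - t1) < F(x - t2)

# scale family (x > 0): sigma1 > sigma2 gives F(x/sigma1) <= F(x/sigma2)
s1, s2 = 3.0, 1.0
for x in (0.5, 1.0, 4.0):
    assert F(x / s1) < F(x / s2)
```

The inequalities are strict here because the logistic cdf is strictly increasing everywhere.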

3.43 a. F_Y(y|θ) = 1 - F_X(1/y | θ), y > 0, by Theorem 2.1.3. For θ1 > θ2,

F_Y(y | θ1) = 1 - F_X(1/y | θ1) ≤ 1 - F_X(1/y | θ2) = F_Y(y | θ2)

for all y, since F_X(x|θ) is stochastically increasing and if θ1 > θ2, F_X(x|θ2) ≤ F_X(x|θ1) for all x. Similarly, F_Y(y|θ1) = 1 - F_X(1/y | θ1) < 1 - F_X(1/y | θ2) = F_Y(y|θ2) for some y, since if θ1 > θ2, F_X(x|θ2) < F_X(x|θ1) for some x. Thus F_Y(y|θ) is stochastically decreasing in θ.

b. F_X(x|θ) is stochastically increasing in θ. If θ1 > θ2 and θ1, θ2 > 0, then 1/θ2 > 1/θ1. Therefore F_X(x | 1/θ1) ≤ F_X(x | 1/θ2) for all x, and F_X(x | 1/θ1) < F_X(x | 1/θ2) for some x. Thus F_X(x | 1/θ) is stochastically decreasing in θ.
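A concrete instance (my example, not the manual's): with F_X(x|θ) = 1 - e^{-θx}, which increases in θ for each x > 0, the cdf of Y = 1/X is F_Y(y|θ) = 1 - F_X(1/y|θ) = e^{-θ/y}, which decreases in θ:

```python
import math

def FX(x, theta):   # exponential(rate theta) cdf: increasing in theta for each x > 0
    return 1 - math.exp(-theta * x)

def FY(y, theta):   # Y = 1/X: F_Y(y|theta) = 1 - F_X(1/y|theta)
    return 1 - FX(1 / y, theta)

t1, t2 = 3.0, 1.0   # t1 > t2
for y in (0.2, 1.0, 5.0):
    assert FX(1 / y, t1) > FX(1 / y, t2)   # X-family cdf increases in theta
    assert FY(y, t1) < FY(y, t2)           # so Y-family cdf decreases in theta
```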

The function g(x) = |x| is a nonnegative function. So by Chebychev’s Inequality,

P (|X| ≥ b) ≤ E|X|/b.

Also, P (|X| ≥ b) = P (X2 ≥ b2). Since g(x) = x2 is also nonnegative, again by Chebychev’s Inequality we have

P(|X| ≥ b) = P(X^2 ≥ b^2) ≤ EX^2/b^2.

For X ~ exponential(1), E|X| = EX = 1 and EX^2 = VarX + (EX)^2 = 2. For b = 3, E|X|/b = 1/3 > 2/9 = EX^2/b^2. Thus EX^2/b^2 is a better bound. But for b = √2, E|X|/b = 1/√2 < 1 = EX^2/b^2. Thus E|X|/b is a better bound.
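The crossover between the two bounds is easy to confirm numerically; for exponential(1) the true tail is P(X ≥ b) = e^{-b}, so both really are upper bounds (a quick check of mine):

```python
import math

# X ~ exponential(1): E|X| = 1, EX^2 = 2
def bound_first(b):  return 1.0 / b        # E|X|/b
def bound_second(b): return 2.0 / b**2     # EX^2/b^2

assert bound_second(3) < bound_first(3)    # 2/9 < 1/3: second-moment bound wins
b = math.sqrt(2)
assert bound_first(b) < bound_second(b)    # 1/sqrt(2) < 1: first-moment bound wins
# both dominate the exact tail probability
for bb in (math.sqrt(2), 3.0):
    p = math.exp(-bb)
    assert p <= bound_first(bb) and p <= bound_second(bb)
```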



a.

M_X(t) = ∫_{-∞}^∞ e^{tx} f_X(x) dx ≥ ∫_a^∞ e^{tx} f_X(x) dx ≥ e^{ta} ∫_a^∞ f_X(x) dx = e^{ta} P(X ≥ a),

where we use the fact that e^{tx} is increasing in x for t > 0.

b.

M_X(t) = ∫_{-∞}^∞ e^{tx} f_X(x) dx ≥ ∫_{-∞}^a e^{tx} f_X(x) dx ≥ e^{ta} ∫_{-∞}^a f_X(x) dx = e^{ta} P(X ≤ a),

where we use the fact that e^{tx} is decreasing in x for t < 0.

c. h(t, x) must be nonnegative.
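For X ~ exponential(1), M_X(t) = 1/(1-t) for t < 1 and P(X ≥ a) = e^{-a}, so both inequalities can be spot-checked directly (my example, not the manual's):

```python
import math

# part (a): e^{ta} P(X >= a) <= M_X(t) for t > 0
for t in (0.1, 0.5, 0.9):
    for a in (0.5, 1.0, 4.0):
        M = 1 / (1 - t)
        assert math.exp(t * a) * math.exp(-a) <= M + 1e-12

# part (b): e^{ta} P(X <= a) <= M_X(t) for t < 0
for t in (-0.5, -2.0):
    for a in (0.5, 1.0, 4.0):
        M = 1 / (1 - t)
        assert math.exp(t * a) * (1 - math.exp(-a)) <= M + 1e-12
```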

For X ~ uniform(0, 1), μ = 1/2 and σ^2 = 1/12, thus

P(|X - μ| > kσ) = 1 - P(1/2 - k/√12 ≤ X ≤ 1/2 + k/√12) = { 1 - 2k/√12 if k < √3; 0 if k ≥ √3 }.

For X ~ exponential(λ), μ = λ and σ^2 = λ^2, thus

P(|X - μ| > kσ) = 1 - P(λ - kλ ≤ X ≤ λ + kλ) = { 1 + e^{-(k+1)} - e^{k-1} if k ≤ 1; e^{-(k+1)} if k > 1 }.

From Example 3.6.2, Chebychev's Inequality gives the bound P(|X - μ| > kσ) ≤ 1/k^2.

Comparison of probabilities

  k      u(0,1) exact   exp(λ) exact   Chebychev
  .1     .942           .926           100
  .5     .711           .617           4
  1      .423           .135           1
  1.5    .134           .0821          .44
  √3     0              .0651          .33
  2      0              .0498          .25
  4      0              .00674         .0625
  10     0              .0000167       .01

So we see that Chebychev's Inequality is quite conservative.
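The exact columns of the table can be regenerated from the two displayed formulas; a Python sketch (mine):

```python
import math

def uniform_exceed(k):
    # P(|X - 1/2| > k/sqrt(12)) for X ~ uniform(0, 1)
    return max(0.0, 1 - 2 * k / math.sqrt(12))

def expo_exceed(k):
    # P(|X - lam| > k*lam) for X ~ exponential(lam); free of lam
    if k <= 1:
        return 1 + math.exp(-(k + 1)) - math.exp(k - 1)
    return math.exp(-(k + 1))

assert abs(uniform_exceed(0.1) - 0.942) < 5e-4
assert abs(expo_exceed(0.1) - 0.926) < 5e-4
assert abs(expo_exceed(2) - 0.0498) < 5e-5
# Chebychev's bound 1/k^2 dominates both exact probabilities
for k in (0.1, 0.5, 1, 1.5, 2, 4, 10):
    cheb = 1 / k**2
    assert uniform_exceed(k) <= cheb and expo_exceed(k) <= cheb
```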

3.47 P(|Z| > t) = 2P(Z > t) = (2/√(2π)) ∫_t^∞ e^{-x^2/2} dx = √(2/π) ∫_t^∞ ((1+x^2)/(1+x^2)) e^{-x^2/2} dx

= √(2/π) [ ∫_t^∞ (1/(1+x^2)) e^{-x^2/2} dx + ∫_t^∞ (x^2/(1+x^2)) e^{-x^2/2} dx ].

To evaluate the second term, let u = x/(1+x^2), dv = x e^{-x^2/2} dx, v = -e^{-x^2/2}, du = ((1-x^2)/(1+x^2)^2) dx, to obtain

∫_t^∞ (x^2/(1+x^2)) e^{-x^2/2} dx = [ (x/(1+x^2)) (-e^{-x^2/2}) ]_t^∞ - ∫_t^∞ ((1-x^2)/(1+x^2)^2)(-e^{-x^2/2}) dx

= (t/(1+t^2)) e^{-t^2/2} + ∫_t^∞ ((1-x^2)/(1+x^2)^2) e^{-x^2/2} dx.

Therefore,

P(|Z| ≥ t) = √(2/π) (t/(1+t^2)) e^{-t^2/2} + √(2/π) ∫_t^∞ [ 1/(1+x^2) + (1-x^2)/(1+x^2)^2 ] e^{-x^2/2} dx

= √(2/π) (t/(1+t^2)) e^{-t^2/2} + √(2/π) ∫_t^∞ (2/(1+x^2)^2) e^{-x^2/2} dx

≥ √(2/π) (t/(1+t^2)) e^{-t^2/2}.
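The final lower bound can be compared against the exact P(|Z| > t) = erfc(t/√2); note how tight it becomes for large t (a check of mine, not part of the manual):

```python
import math

def normal_two_sided(t):
    # P(|Z| > t) for standard normal
    return math.erfc(t / math.sqrt(2))

def lower_bound(t):
    # sqrt(2/pi) * t/(1+t^2) * exp(-t^2/2)
    return math.sqrt(2 / math.pi) * t / (1 + t**2) * math.exp(-t**2 / 2)

for t in (0.5, 1.0, 2.0, 4.0):
    assert lower_bound(t) <= normal_two_sided(t)
# the bound is tight for large t
assert normal_two_sided(4.0) / lower_bound(4.0) < 1.1
```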

For the negative binomial,

P(X = x+1) = C(r+(x+1)-1, x+1) p^r (1-p)^{x+1} = ((r+x)/(x+1)) (1-p) P(X = x).

For the hypergeometric,

P(X = x+1) =
  ((M-x)(k-x) / ((x+1)(N-M-k+x+1))) P(X = x)   if x < k, x < M and x ≥ M - (N - k),
  C(M, x+1) C(N-M, k-x-1) / C(N, k)            if x = M - (N - k) - 1,
  0                                            otherwise.
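Both recursions can be verified against the pmfs directly with math.comb (my check, not part of the manual; parameter values are arbitrary):

```python
from math import comb

def nb_pmf(x, r, p):
    # negative binomial: number of failures before the rth success
    return comb(r + x - 1, x) * p**r * (1 - p)**x

r, p = 5, 0.3
for x in range(0, 20):
    lhs = nb_pmf(x + 1, r, p)
    rhs = (r + x) / (x + 1) * (1 - p) * nb_pmf(x, r, p)
    assert abs(lhs - rhs) < 1e-12

def hyper_pmf(x, N, M, k):
    return comb(M, x) * comb(N - M, k - x) / comb(N, k)

N, M, k = 20, 8, 6
for x in range(max(0, k - (N - M)), min(M, k)):   # x and x+1 both in the support
    lhs = hyper_pmf(x + 1, N, M, k)
    rhs = (M - x) * (k - x) / ((x + 1) * (N - M - k + x + 1)) * hyper_pmf(x, N, M, k)
    assert abs(lhs - rhs) < 1e-12
```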

a.

E(g(X)(X - αβ)) = ∫_0^∞ g(x)(x - αβ) (1/(Γ(α)β^α)) x^{α-1} e^{-x/β} dx.

Let u = g(x), du = g'(x) dx, dv = (x - αβ) x^{α-1} e^{-x/β} dx, v = -β x^α e^{-x/β}. Then

E(g(X)(X - αβ)) = (1/(Γ(α)β^α)) [ -g(x) β x^α e^{-x/β} |_0^∞ + β ∫_0^∞ g'(x) x^α e^{-x/β} dx ].

Assuming g(x) to be differentiable, E|Xg'(X)| < ∞ and lim_{x→∞} g(x) x^α e^{-x/β} = 0, the first term is zero, and the second term is βE(Xg'(X)).

b.

E[ g(X) (β - (α-1)((1-X)/X)) ] = ∫_0^1 g(x) (β - (α-1)((1-x)/x)) (Γ(α+β)/(Γ(α)Γ(β))) x^{α-1} (1-x)^{β-1} dx.

Let u = g(x) and dv = (β - (α-1)((1-x)/x)) x^{α-1} (1-x)^{β-1} dx, so v = -x^{α-1}(1-x)^β. The expectation is

(Γ(α+β)/(Γ(α)Γ(β))) [ -g(x) x^{α-1} (1-x)^β |_0^1 + ∫_0^1 g'(x) x^{α-1} (1-x)^β dx ] = E((1-X)g'(X)),

assuming the first term is zero and the integral exists.
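The gamma identity E[g(X)(X - αβ)] = βE[Xg'(X)] in part (a) can be spot-checked with g(x) = x^2, using the closed-form moments E X^n = β^n α(α+1)···(α+n-1) (a sketch of mine, not from the manual):

```python
def moment(alpha, beta, n):
    # E X^n for X ~ gamma(alpha, beta)
    m = beta**n
    for j in range(n):
        m *= alpha + j
    return m

alpha, beta = 2.5, 1.7
# identity with g(x) = x^2: E[X^2 (X - alpha*beta)] = beta * E[X * 2X]
lhs = moment(alpha, beta, 3) - alpha * beta * moment(alpha, beta, 2)
rhs = beta * 2 * moment(alpha, beta, 2)
assert abs(lhs - rhs) < 1e-9
```

Both sides reduce to 2α(α+1)β^3, so the agreement is exact up to rounding.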