[Exam] 110-2 郭漢豪 Topics in Econometrics, Final Exam

Author: unmolk (UJ)   2022-06-09 02:12:38
Course name: Topics in Economics and Econometrics
Course type: Elective (Department and Graduate Institute of Economics)
Instructor: 郭漢豪
College: College of Social Sciences
Department: Department of Economics
Exam date (y.m.d, ROC calendar): 111.06.06~111.06.08
Time limit (minutes): 4320
Questions:
Some of the mathematical expressions below are written in TeX syntax.
1. Bayesian Point Estimation (30 points)
This question is about estimating functions of parameters by Bayesian methods.
The random variable under consideration is denoted by x. Imagine that our
potential data will consist of realization(s) of x. The conditional density,
or conditional likelihood, of x is f(x|μ), where μ is a scalar parameter.
The prior density of μ is g(μ). The marginal density (unconditional density)
of x is denoted by
f_G(x) = \int_μ f(x|μ)g(μ)dμ.
The posterior density of μ is denoted by
f(μ|x) = f(x|μ)g(μ)/f_G(x).
The parameter of interest is denoted by λ=h(μ), which is a function of μ. In
our classes, we considered the case where λ=μ. The Bayes decision is δ(x).
The loss function for point estimation is L(δ(x),λ)=(δ(x)-λ)^2. The Bayes
risk is defined, as in the reference book, by W(δ) = E[E(L|x)]. The Bayes
decision is the minimizer of the Bayes risk.
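In this notation, with λ = h(μ), the Bayes risk unpacks to a double integral
(a restatement of the definitions above, nothing more):
W(δ) = \int_x f_G(x) [\int_μ (δ(x)-h(μ))^2 f(μ|x) dμ] dx.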
(i.) Bayes Point Estimate
Prove that the Bayes point estimate of λ\equiv h(μ) is the posterior mean of
λ, E(λ|x). Write your arguments clearly.
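One standard route, sketched here from the definitions above (not necessarily
the intended write-up): since f_G(x) \geq 0, it suffices to minimize the
inner posterior expectation separately for each x. Expanding the square,
E[(δ(x)-λ)^2|x] = δ(x)^2 - 2δ(x)E(λ|x) + E(λ^2|x),
which is a convex quadratic in δ(x) with minimizer δ(x) = E(λ|x).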
(ii.) Normal Random Variable
Suppose x is normally distributed with mean μ and variance σ^2. The variance
σ^2 is a known constant so that we do not need to have a prior distribution of
it. Our parameter of interest is λ=exp{μ/σ}. The prior density of μ is not
specified. Derive the Bayes point estimate of λ in terms of the marginal
density. Note that this question is not Example 1.3.6 in the reference book;
the functional form of λ is different.
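For orientation, a completing-the-square sketch (an assumption about the
intended route, to be verified): for the N(μ,σ^2) density,
exp{μ/σ} f(x|μ) = exp{x/σ + 1/2} f(x+σ|μ),
so integrating both sides against g(μ) suggests
E(λ|x) = exp{x/σ + 1/2} f_G(x+σ)/f_G(x),
an expression that involves only the marginal density, as required.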
2. James-Stein Theorem (50 points)
This question is about proving a general version of the James-Stein theorem
in which the random variables and the parameters of interest are
independently normally distributed.
Precisely, we adopt assumption (1.32) in Efron (2010, p.6). The observed
random variables are z_i, where i=1,...,N. The conditional distribution of
z_i is N(μ_i, σ_0^2). The prior distribution of μ_i is N(M,A).
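As a sanity check on this two-level model, a minimal simulation sketch in
Python (M, A, σ_0, and N below are illustrative values, not part of the
exam):

  import numpy as np

  rng = np.random.default_rng(0)
  M, A, sigma0, N = 1.0, 2.0, 1.0, 100_000
  mu = rng.normal(M, np.sqrt(A), size=N)  # mu_i ~ N(M, A)
  z = rng.normal(mu, sigma0)              # z_i | mu_i ~ N(mu_i, sigma0^2)
  print(z.mean(), z.var())                # should approach M and A + sigma0^2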
(i.) Marginal and Posterior Distributions
Prove the results in (1.33) in Efron (2010, p.6). That is, prove that the
marginal distribution of z_i is N(M, A+σ_0^2) and that the posterior
distribution of μ_i is N(M+B(z_i-M), Bσ_0^2), where B=A/(A+σ_0^2).
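The usual tool here is completing the square in μ_i (a sketch, assuming the
standard normal-normal conjugacy argument is what is wanted):
f(μ_i|z_i) \propto \exp{-(z_i-μ_i)^2/(2σ_0^2) - (μ_i-M)^2/(2A)},
and collecting the quadratic and linear terms in μ_i gives posterior
precision 1/σ_0^2 + 1/A = 1/(Bσ_0^2) and posterior mean M + B(z_i-M).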
(ii.) Empirical Bayes Estimator
If the prior distribution is known, the Bayesian estimator of μ_i is the
posterior mean M+B(z_i-M). Suppose the parameters in the prior are unknown.
Show that the James-Stein empirical Bayes estimator is \hat{μ}_i in (1.35) in
Efron (2010, p.7). Hint: you need to derive an unbiased estimator of B by
using the properties of the chi-squared distribution and the gamma function.
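Once an unbiased estimate of B is in hand, the estimator is short to code. A
minimal Python sketch, assuming the final form matches Efron's (1.35) with
S = \sum_i (z_i - \bar{z})^2:

  import numpy as np

  def james_stein(z, sigma0):
      # Empirical Bayes form: zbar + (1 - (N-3) sigma0^2 / S)(z_i - zbar),
      # where (N-3)/S is an unbiased estimate of 1/(A + sigma0^2).
      # Requires N >= 4.
      z = np.asarray(z, dtype=float)
      N = z.size
      zbar = z.mean()
      S = ((z - zbar) ** 2).sum()
      return zbar + (1.0 - (N - 3) * sigma0**2 / S) * (z - zbar)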
(iii.) James-Stein Theorem
Prove the James-Stein theorem under this setting. Precisely, prove that
(1.26) in Efron (2010, p.5) holds if N\geq4. Note that you do not need to
use the prior distribution when you are proving the James-Stein theorem.
The expectations in (1.26) are conditional on the μ_i's.
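The theorem of course needs a proof, but a Monte Carlo check of the claimed
dominance is a useful companion. A sketch conditional on a fixed, arbitrary
μ vector, as the question specifies (all values illustrative):

  import numpy as np

  rng = np.random.default_rng(1)
  sigma0, N, reps = 1.0, 10, 20_000
  mu = np.linspace(-2.0, 2.0, N)              # fixed mu_i's
  z = rng.normal(mu, sigma0, size=(reps, N))  # z_i|mu_i ~ N(mu_i, sigma0^2)
  zbar = z.mean(axis=1, keepdims=True)
  S = ((z - zbar) ** 2).sum(axis=1, keepdims=True)
  js = zbar + (1.0 - (N - 3) * sigma0**2 / S) * (z - zbar)
  print(((js - mu) ** 2).sum(axis=1).mean())  # James-Stein total squared error
  print(((z - mu) ** 2).sum(axis=1).mean())   # MLE total squared error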
3. Shrinkage (20 points)
This question is about shrinkage estimation. Suppose y is a p×1 column
vector of random variables. Its i-th entry is y_i. The entries have
heterogeneous first and second moments across i. For all i and j,
E_μ(y_i)=μ_i, E_μ(y_i-μ_i)^2=σ_i^2, and E_μ(y_i-μ_i)(y_j-μ_j)=σ_{ij}. Note
that σ_i^2 = σ_{ii}. The expectations are conditional on μ.
The situation is as follows. We have one observation of y_i for each i.
Therefore, an obvious unbiased estimator of μ_i is \hat{μ}_i\equiv y_i; that
is, \hat{μ}=y. Obviously, for all i and j, E_μ(\hat{μ}_i)=μ_i,
E_μ(\hat{μ}_i-μ_i)^2 = σ_i^2, and E_μ(\hat{μ}_i-μ_i)(\hat{μ}_j-μ_j)=σ_{ij}.
Our main purpose is to estimate μ_i with a small expected total mean squared
error. Thus we consider the following shrinkage estimator:
\hat{μ}_i^s\equiv\xi_i\hat{μ}_i,
where \xi_i is a nonrandom number between 0 and 1. Please derive the
conditions under which we have
E_μMSE(\hat{μ}^s,μ) < E_μMSE(\hat{μ},μ).
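A natural first step (a sketch only, using the moments stated above):
because \xi_i is nonrandom,
E_μ(\hat{μ}_i^s - μ_i)^2 = \xi_i^2 σ_i^2 + (1-\xi_i)^2 μ_i^2,
so the covariances σ_{ij} drop out of the total, and the comparison reduces
to
\sum_i [\xi_i^2 σ_i^2 + (1-\xi_i)^2 μ_i^2] < \sum_i σ_i^2,
for which \xi_i > (μ_i^2 - σ_i^2)/(μ_i^2 + σ_i^2) coordinate-wise (with
\xi_i < 1) is sufficient.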
