
COMMUNICATIONS IN STATISTICS 1996, 25(6), 1325-1334

CONVERGENCE RATES FOR EMPIRICAL BAYES ESTIMATORS OF PARAMETERS IN LINEAR REGRESSION MODELS

Hengqing Tong
Department of Mathematics and Physics
Wuhan University of Technology
Wuhan, Hubei, 430070, P.R. China

Key Words and Phrases: empirical Bayes; convergence rate; joint estimation; inverse stretch operation; variance component model.

ABSTRACT

In this paper, the convergence rates of the empirical Bayes estimators of the regression coefficients and the error variance in a linear model are obtained. These rates can be made arbitrarily close to the limiting rate given in Section 2. The convergence of the empirical Bayes estimators of the regression coefficients and the variance components in a variance component model is also investigated. The investigation makes use of results on the convergence rates of the empirical Bayes estimators of the parameters in multi-parameter exponential families.

1. Introduction

Wind (1972) investigated the empirical Bayes (EB) estimators of the regression coefficients in a multivariate linear model; of course, at that time no convergence rate could be given. Singh (1977, 1979) obtained the rates of convergence of the EB estimators in one-parameter exponential families, and Tong (1996) studied the convergence rates of the EB estimators of the parameters in multi-parameter exponential families. Here we first investigate the convergence rates of the joint EB estimators of the regression coefficients and the error variance in a linear model, and then the convergence of the EB estimators of the regression coefficients and the variance components in a variance component model.

Let X1, ..., Xn and X be i.i.d. samples with density f(x | θ), where f and its partial derivatives are locally bounded, and let m be the dimension of the orthogonal polynomial space in which the kernel estimators of the density are constructed. The parameter θ of f(x | θ) has a prior distribution G(θ). The corresponding Bayes risk and EB risk were defined by the author (1995). We have proved:

THEOREM. Suppose that
    (1.1)
    (1.2)
    (1.3)
Then
    (1.4)
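The EB idea behind these rate results can be illustrated with Robbins' classical estimator for the Poisson case, a standard one-parameter example rather than the multi-parameter construction of this paper; the Gamma(3, 1) prior and the sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: theta_i drawn from a prior that is UNKNOWN to the
# statistician, then x_i | theta_i ~ Poisson(theta_i).
n = 100_000
theta = rng.gamma(shape=3.0, scale=1.0, size=n)
x = rng.poisson(theta)

# Robbins' EB estimate of E[theta | x = k] is (k + 1) f(k + 1) / f(k),
# with the marginal density f estimated by empirical frequencies.
counts = np.bincount(x, minlength=x.max() + 2)
f = counts / n

def robbins(k: int) -> float:
    # Guard against empty cells in the empirical frequency table.
    if f[k] == 0:
        return float(k)
    return (k + 1) * f[k + 1] / f[k]

# For a Gamma(3, 1) prior the true posterior mean is (k + 3) / 2,
# so the EB estimate can be compared against it.
est = robbins(4)
posterior_mean = (4 + 3) / 2
```

As n grows, the empirical frequencies converge to the true marginal and the EB estimate approaches the Bayes rule computed from the (unknown) prior; quantifying how fast is exactly the kind of question the rate results address.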
2. Convergence of the kernel estimator of the density and its partial derivatives in multi-parameter exponential families

First, we consider a linear model with a normal error distribution:
    (2.1)
The design matrix is of full column rank, and the regression coefficient β and the error variance σ² are unknown. Let β̂ denote the least squares estimator of β. The conditional density of β̂ is
    (2.2)

Second, the density (2.2) is transformed into a multi-parameter exponential family. Let A be a nonsingular matrix. The transformations of the parameters are defined by
    (2.3)
    (2.4)
    (2.5)
and the transformations of the samples by
    (2.6)
    (2.7)
    (2.8)
Then
    (2.9)
and the density of the transformed sample is
    (2.10)
Obviously, the transformed sample is a random variable with a distribution in an exponential family. Its conditional density is
    (2.11)

Third, we give the prior distribution of the parameters and verify the conditions of the Theorem in the Introduction. Suppose
    (2.12)
and let
    (2.13)
    (2.14)
Through the measurable transformations (2.3)-(2.8), the prior distributions of the transformed parameters are also determined:
    (2.15)
    (2.16)
By using the characteristic function, we obtain bounds involving a suitable polynomial; therefore,
    (2.17)
and condition (1.1) is satisfied. Moreover,
    (2.18)
    (2.19)
It is easy to calculate
    (2.20)
and therefore
    (2.21)
For the derived measure of the transformations,
    (2.22)
    (2.23)
Because of the negative exponentials in (2.22) and (2.23), conditions (1.2) and (1.3) are satisfied. The number of parameters is now that of the regression coefficients plus one for the error variance; therefore, the rates of convergence of the joint EB estimators of the regression coefficients and the error variance in the linear model are
    (2.24)
As the relevant index tends to infinity, these rates can be made arbitrarily close to their limiting value.

3. The convergence of the estimators of the parameters in a variance component model

First, we consider a variance component model
    (3.1)
where X is a known design matrix, the Ui are known incidence matrices, β is a fixed-effect vector, and the ξi are random-effect vectors. Suppose
    (3.2)
with all the random-effect vectors and the error independent. Let
    (3.3)
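The conditional distribution underlying (2.2) is the classical fact that the least squares estimator satisfies β̂ ~ N(β, σ²(X'X)⁻¹); it can be checked by simulation. The dimensions, parameter values, and replication count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative small design: n observations, p regressors, full column rank.
n, p = 50, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
sigma = 1.5

XtX_inv = np.linalg.inv(X.T @ X)

# Monte Carlo check that beta_hat = (X'X)^{-1} X' y is unbiased with
# covariance sigma^2 (X'X)^{-1}.
reps = 20_000
estimates = np.empty((reps, p))
for r in range(reps):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    estimates[r] = XtX_inv @ (X.T @ y)

emp_mean = estimates.mean(axis=0)
emp_cov = np.cov(estimates, rowvar=False)
```

The empirical mean and covariance of the replicated estimators should match β and σ²(X'X)⁻¹ up to Monte Carlo error, which is the distributional starting point that Section 2 then recasts in exponential family form.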

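The transformation step (2.3)-(2.8) recasts the conditional density as a multi-parameter exponential family. The paper's specific matrices did not survive extraction, but the standard identity for a normal density shows the general pattern of such a reparametrization; the natural parameters η₁, η₂ below are the textbook choice, not the paper's notation.

```latex
% Two-parameter exponential family form of N(mu, sigma^2):
% an illustrative standard identity, not the transformation (2.3)-(2.8).
f(x \mid \mu, \sigma^2)
  = \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\Big( -\frac{(x-\mu)^2}{2\sigma^2} \Big)
  = c(\eta_1, \eta_2)\, \exp\big( \eta_1 x + \eta_2 x^2 \big),
\qquad
\eta_1 = \frac{\mu}{\sigma^2}, \quad
\eta_2 = -\frac{1}{2\sigma^2},
\quad
c(\eta_1, \eta_2) = \sqrt{-\frac{\eta_2}{\pi}}\,
  \exp\!\Big( \frac{\eta_1^2}{4\eta_2} \Big).
```

Writing the density this way makes the sufficient statistics (x, x²) linear in the exponent, which is what allows the multi-parameter exponential family rate results of the Theorem to be applied.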

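The covariance structure implied by a variance component model such as (3.1)-(3.2), namely Cov(y) as a linear combination of known matrices weighted by the variance components, can be sketched for a one-way layout; the group counts and variance values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative one-way layout: b groups of size m, model
#   y = X beta + U a + e,  a ~ N(0, sigma_a^2 I_b),  e ~ N(0, sigma_e^2 I_n).
b, m = 6, 4
n = b * m
X = np.ones((n, 1))                       # intercept-only fixed effect
U = np.kron(np.eye(b), np.ones((m, 1)))   # group-membership incidence matrix
beta = np.array([2.0])
sigma_a2, sigma_e2 = 3.0, 1.0

# Marginal covariance of y: sigma_a^2 U U' + sigma_e^2 I,
# i.e. a known-matrix combination weighted by the variance components.
V = sigma_a2 * (U @ U.T) + sigma_e2 * np.eye(n)

a = rng.normal(scale=np.sqrt(sigma_a2), size=b)
e = rng.normal(scale=np.sqrt(sigma_e2), size=n)
y = X @ beta + U @ a + e
```

Estimating (sigma_a2, sigma_e2) from y alone is the variance component problem; the EB approach of Section 3 treats these parameters, like the regression coefficients, as having a prior distribution.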










