
Multilevel Modeling

13. Parameter Estimation

Maximum Likelihood Estimators (MLE) provide population parameter values that maximize the so-called Likelihood Function (LF), which gives the probability of observing the sample data given the parameter estimates. MLE, therefore, are parameter estimates that maximize the probability of finding the sample data that we have actually found. MLE can be obtained using the Newton-Raphson, Fisher Scoring, Iterative Generalized Least Squares (IGLS), or Expectation-Maximization (EM) algorithms (Longford, 1993).
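To make the idea concrete, the sketch below (in Python, using hypothetical simulated data and the scipy.optimize.minimize routine) finds the MLE of a normal mean and standard deviation by numerically maximizing the log-likelihood; all variable names are illustrative, not part of the source.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sample: 200 draws from a normal distribution.
rng = np.random.default_rng(42)
y = rng.normal(loc=5.0, scale=2.0, size=200)

def neg_log_likelihood(params, data):
    """Negative log-likelihood of a normal model; minimizing this is
    equivalent to maximizing the Likelihood Function (LF)."""
    mu, log_sigma = params          # log-parameterize sigma so it stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                   - (data - mu) ** 2 / (2.0 * sigma**2))

# Starting values, then iterative improvement by the optimizer.
result = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(y,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```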

Computing the MLE requires an iterative procedure:

  1. At the start, starting values for the various parameter estimates are generated, usually based on Ordinary Least Squares (OLS) regression estimates.
  2. Next, the computation procedure improves upon the starting values, producing Generalized Least Squares (GLS) estimates.
  3. This step is repeated (iterated) until the changes in the estimates between two successive iterations become very small, indicating convergence; the parameter estimates are now the MLE (see the sketch after this list).
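The following sketch illustrates the three steps above for a deliberately simple case: a single-level regression whose residual variance differs between two groups. OLS supplies the starting values, each pass re-weights the data to produce GLS estimates, and the loop stops once successive estimates barely change. The data and names are hypothetical; a real multilevel IGLS update is more involved (Longford, 1993).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-group data with unequal residual variances.
n = 100
x = rng.uniform(0, 10, size=2 * n)
group = np.repeat([0, 1], n)                     # level-2 grouping indicator
sigma = np.where(group == 0, 1.0, 3.0)           # true group SDs
y = 2.0 + 0.5 * x + rng.normal(0, sigma)
X = np.column_stack([np.ones_like(x), x])

# Step 1: OLS starting values.
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Steps 2-3: GLS updates, iterated until the change is negligible.
for iteration in range(100):
    resid = y - X @ beta
    # Re-estimate group-specific residual variances from current residuals.
    var = np.array([resid[group == g].var() for g in (0, 1)])[group]
    Xw = X / var[:, None]                        # rows weighted by 1 / variance
    beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    if np.max(np.abs(beta_new - beta)) < 1e-8:   # convergence check
        beta = beta_new
        print(f"Converged after {iteration + 1} iterations")
        break
    beta = beta_new

print("Estimates (intercept, slope):", beta)
```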
Lack of convergence could suggest: (a) mis-specification of the fixed part of the model; (b) mis-specification of the variance-covariance structure (either too simple or too complex); or (c) small sample sizes at the various levels.
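As one illustration of the practical response, the hedged sketch below (assuming statsmodels' MixedLM, whose fitted results expose a converged flag) refits with a simpler variance-covariance structure, random intercepts only, when a random-intercepts-and-slopes model fails to converge. The data and variable names are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-level data: 30 level-2 units with 10 observations each.
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(30), 10)
x = rng.uniform(size=300)
u = rng.normal(0, 0.5, size=30)[groups]          # random intercepts
y = 1.0 + 0.8 * x + u + rng.normal(0, 1.0, size=300)
data = pd.DataFrame({"y": y, "x": x, "g": groups})

# Richer variance-covariance structure: random intercepts and slopes.
fit = smf.mixedlm("y ~ x", data, groups=data["g"], re_formula="~x").fit()
if not fit.converged:
    # Fall back to a simpler structure: random intercepts only.
    fit = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
print(fit.summary())
```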
Longford, N. (1993). Random coefficient models. Oxford, UK: Clarendon Press.