Safety Belt Regression
Penalized estimation methods, in particular ridge estimation, are considered. Given a model with parameters and an estimation function, e.g., the least squares function or the likelihood function, so-called penalized estimators can be obtained by modifying the estimation function with an added penalty term. In this presentation of the subject, however, restrictions are instead placed on the parameters, which in turn also leads to a penalized estimation function. The two approaches are related, but the resulting estimators differ. In some sense, from a likelihood point of view, it is more natural to place restrictions on the parameters of a model than on the estimation function. The approach is based on convex optimization theory; in particular, matrix derivatives are utilized when showing convexity of various functions.
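As a minimal illustration of the distinction described above (not part of the abstract itself), the penalized and constrained formulations of ridge-type estimation in the linear model y = X\beta + \epsilon can be written as follows; the tuning constants \lambda and c are assumed notation introduced only for this sketch.

% Sketch, assuming the standard linear model y = X\beta + \epsilon.
% Penalized formulation: the least squares criterion is modified by a penalty term.
\hat{\beta}_{\lambda} = \arg\min_{\beta} \; \|y - X\beta\|^{2} + \lambda \|\beta\|^{2}
                      = (X^{\top}X + \lambda I)^{-1} X^{\top} y, \qquad \lambda > 0.
% Constrained formulation: the restriction is put on the parameters instead.
\hat{\beta}_{c} = \arg\min_{\beta : \, \|\beta\|^{2} \le c} \; \|y - X\beta\|^{2}.
% The Lagrangian of the constrained problem,
% \|y - X\beta\|^{2} + \lambda (\|\beta\|^{2} - c),
% is convex in \beta and again leads to a ridge-type penalized criterion;
% however, the value of \lambda corresponding to a given c depends on the data,
% which is one way in which the two estimators can differ.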
Area: CS1 - Algebraic methods in Statistics and Probability (Elvira Di Nardo)
Keywords: ridge regression; multivariate linear models; penalized estimation