In linear regression, the Maximum Likelihood Estimation (MLE) solution for estimating $x$ has the following closed-form solution (assuming that $A$ is a matrix with full column rank):
$$\hat{x}_{\text{lin}} = \underset{x}{\operatorname{argmin}} \; \|Ax - b\|_2^2 = (A^TA)^{-1}A^Tb$$
This is read as "find the $x$ that minimizes the objective function $\|Ax - b\|_2^2$". The nice thing about representing the linear regression objective function this way is that we can keep everything in matrix notation and solve for $\hat{x}_{\text{lin}}$ by hand. As Alex R. mentions, in practice we often don't compute $(A^TA)^{-1}$ directly, because it is computationally inefficient and $A$ often does not meet the full-rank criterion. Instead, we turn to the Moore-Penrose pseudoinverse. The details of computationally solving for the pseudoinverse can involve the Cholesky decomposition or the Singular Value Decomposition.
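To make that distinction concrete, here is a minimal numpy sketch (the data and variable names are mine, purely illustrative) contrasting the textbook normal-equations formula with the pseudoinverse/SVD-based routines typically used in practice:

```python
import numpy as np

# Simulated data, just for illustration
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))                       # design matrix, full column rank here
b = A @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Textbook closed form: (A^T A)^{-1} A^T b -- fine on paper, avoided in practice
x_normal_eq = np.linalg.solve(A.T @ A, A.T @ b)

# Moore-Penrose pseudoinverse (computed internally via the SVD)
x_pinv = np.linalg.pinv(A) @ b

# Least-squares solver that never forms A^T A explicitly
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal_eq, x_pinv), np.allclose(x_pinv, x_lstsq))
```

On well-conditioned, full-rank data all three agree; the SVD-based routes remain usable when $A$ is rank-deficient or ill-conditioned.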
Alternatively, the MLE solution for estimating the coefficients in logistic regression is:
$$\hat{x}_{\text{log}} = \underset{x}{\operatorname{argmin}} \sum_{i=1}^{N} y^{(i)}\log\!\left(1 + e^{-x^Ta^{(i)}}\right) + \left(1 - y^{(i)}\right)\log\!\left(1 + e^{x^Ta^{(i)}}\right)$$
where (assuming each sample of data is stored row-wise):
$x$ is a vector representing the regression coefficients
$a^{(i)}$ is a vector representing the $i$th sample/row in the data matrix $A$
$y^{(i)}$ is a scalar in $\{0, 1\}$, the label corresponding to the $i$th sample
$N$ is the number of data samples / number of rows in the data matrix $A$.
Again, this is read as "find the $x$ that minimizes the objective function".
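If it helps, the objective can be evaluated directly as written; a small numpy sketch (the function and variable names are illustrative, not from the answer):

```python
import numpy as np

def logistic_nll(x, A, y):
    """Objective above: sum_i y_i*log(1+e^{-x^T a_i}) + (1-y_i)*log(1+e^{x^T a_i}).

    x : (p,) coefficient vector, A : (N, p) data matrix with rows a^{(i)}, y : (N,) labels in {0, 1}.
    """
    z = A @ x                                                  # z_i = x^T a^{(i)}
    # logaddexp(0, t) = log(1 + e^t), written this way for numerical stability
    return np.sum(y * np.logaddexp(0.0, -z) + (1 - y) * np.logaddexp(0.0, z))
```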
If you wanted to, you could take it a step further and represent $\hat{x}_{\text{log}}$ in matrix notation as follows:
$$\hat{x}_{\text{log}} = \underset{x}{\operatorname{argmin}} \begin{bmatrix} y^{(1)} & (1 - y^{(1)}) \\ \vdots & \vdots \\ y^{(N)} & (1 - y^{(N)}) \end{bmatrix} \begin{bmatrix} \log\!\left(1 + e^{-x^Ta^{(1)}}\right) & \cdots & \log\!\left(1 + e^{-x^Ta^{(N)}}\right) \\ \log\!\left(1 + e^{x^Ta^{(1)}}\right) & \cdots & \log\!\left(1 + e^{x^Ta^{(N)}}\right) \end{bmatrix}$$
but you don't gain anything from doing this. Logistic regression does not have a closed-form solution and does not get the same benefits from matrix notation that linear regression does. To solve for $\hat{x}_{\text{log}}$, estimation techniques such as gradient descent and the Newton-Raphson method are used. Using some of these techniques (e.g. Newton-Raphson), $\hat{x}_{\text{log}}$ is approximated and can be represented in matrix notation (see the link provided by Alex R.).
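For a sense of how Newton-Raphson proceeds here, a rough numpy sketch (my own illustrative implementation, not the routine any particular library uses): each iteration solves a linear system built from the gradient and Hessian of the objective above.

```python
import numpy as np

def fit_logistic_newton(A, y, n_iter=25, tol=1e-10):
    """Newton-Raphson sketch for the logistic regression objective above (illustrative only)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-A @ x))   # predicted probabilities for each sample
        grad = A.T @ (p - y)               # gradient of the negative log-likelihood
        W = p * (1.0 - p)                  # per-sample weights on the Hessian diagonal
        H = A.T @ (W[:, None] * A)         # Hessian: A^T diag(W) A
        step = np.linalg.solve(H, grad)    # Newton step
        x -= step
        if np.max(np.abs(step)) < tol:     # stop once the update is negligible
            break
    return x
```

Each Newton step is itself a weighted least-squares solve, which is why this procedure is often described as iteratively reweighted least squares (IRLS).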
@joceratops' answer focuses on the optimization problem of maximum likelihood for estimation. This is indeed a flexible approach that is amenable to many types of problems. For estimating most models, including linear and logistic regression, there is another general approach based on method-of-moments estimation.
The linear regression estimator can also be formulated as the root of the estimating equation:

$$0 = X^T(Y - X\beta)$$

In this regard $\beta$ is seen as the value which retrieves an average residual of 0. It needn't rely on any underlying probability model to have this interpretation. It is, however, interesting to go about deriving the score equations for a normal likelihood: you will see that they take exactly the form displayed above. Maximizing the likelihood of a regular exponential family for a linear model (e.g. linear or logistic regression) is equivalent to obtaining solutions to their score equations:

$$0 = \sum_{i=1}^{n} S_i(\beta) = \frac{\partial}{\partial \beta}\log\mathcal{L}(\beta; X, Y) = X^T\!\left(Y - g(X\beta)\right)$$
where $Y_i$ has expected value $g(X_i\beta)$. In GLM estimation, $g$ is said to be the inverse of a link function. In normal likelihood equations, $g^{-1}$ is the identity function, and in logistic regression $g^{-1}$ is the logit function. A more general approach would be to require $0 = \sum_{i=1}^{n} Y_i - g(X_i\beta)$, which allows for model misspecification.
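As a quick numerical check of the linear-model version $0 = X^T(Y - X\beta)$, the least-squares fit does make the design-weighted residuals sum to zero; a small sketch with simulated data (mine, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])   # intercept plus one covariate
Y = X @ np.array([0.5, 2.0]) + rng.normal(size=50)

beta = np.linalg.solve(X.T @ X, X.T @ Y)                   # least-squares estimate
print(np.allclose(X.T @ (Y - X @ beta), 0.0))              # the estimating equation holds at beta
```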
Additionally, it is interesting to note that for regular exponential families, $\frac{\partial g(X\beta)}{\partial \beta} = V\!\left(g(X\beta)\right)$, which is called a mean-variance relationship. Indeed, for logistic regression the mean $p = g(X\beta)$ is related to the variance by $\operatorname{var}(Y_i) = p_i(1 - p_i)$. This suggests an interpretation of a misspecified GLM as one which gives a 0 average Pearson residual. This further suggests a generalization to allow non-proportional functional mean derivatives and mean-variance relationships.
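For the logistic case this is easy to verify directly: writing $g(\eta) = (1 + e^{-\eta})^{-1}$ for the inverse logit (my notation), a one-line check gives

$$\frac{\partial g(\eta)}{\partial \eta} = \frac{e^{-\eta}}{\left(1 + e^{-\eta}\right)^{2}} = g(\eta)\bigl(1 - g(\eta)\bigr) = p(1 - p),$$

which is exactly the Bernoulli variance $\operatorname{var}(Y_i) = p_i(1 - p_i)$ noted above.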
A generalized estimating equation approach would specify linear models in the following way:

$$0 = \frac{\partial g(X\beta)}{\partial \beta} V^{-1}\!\left(Y - g(X\beta)\right)$$

with $V$ a matrix of variances based on the fitted value (mean) given by $g(X\beta)$. This approach to estimation allows one to pick a link function and mean-variance relationship, as with GLMs.
In logistic regression, $g$ would be the inverse logit, and $V_{ii}$ would be given by $g(X_i\beta)(1 - g(X_i\beta))$. The solutions to this estimating equation, obtained by Newton-Raphson, will yield the $\beta$ obtained from logistic regression. However, a somewhat broader class of models is estimable under a similar framework. For instance, the link function can be taken to be the log of the linear predictor, so that the regression coefficients are relative risks rather than odds ratios. Given the well-documented pitfalls of interpreting ORs as RRs, this raises the question of why anyone fits logistic regression models at all anymore.
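To make the "pick a link function and mean-variance relationship" point concrete, here is a rough Fisher-scoring sketch of the estimating equation above (my own illustrative code; in practice one would use a GLM/GEE library). Swapping the inverse link from the inverse logit to the exponential function, while keeping the Bernoulli variance, moves the coefficients from the log odds-ratio scale to the log relative-risk scale:

```python
import numpy as np

def fit_glm_scoring(X, y, inv_link, dinv_link, variance, n_iter=50, tol=1e-10):
    """Fisher-scoring sketch for 0 = D^T V^{-1} (y - mu), where mu = inv_link(X beta),
    D = d mu / d beta and V = diag(variance(mu)). Illustrative only."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = inv_link(eta)                                   # fitted means g(X beta)
        D = dinv_link(eta)[:, None] * X                      # derivative of mu with respect to beta
        Vinv = 1.0 / np.clip(variance(mu), 1e-10, None)      # working variances, guarded against 0
        H = D.T @ (Vinv[:, None] * D)
        step = np.linalg.solve(H, D.T @ (Vinv * (y - mu)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

expit = lambda e: 1.0 / (1.0 + np.exp(-e))                   # inverse logit

# Simulated binary data, purely for illustration
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.binomial(1, expit(X @ np.array([-1.0, 0.8])))

# Inverse-logit link + Bernoulli variance: ordinary logistic regression (log odds ratios)
beta_logit = fit_glm_scoring(X, y, expit, lambda e: expit(e) * (1 - expit(e)), lambda m: m * (1 - m))

# A log link with the same Bernoulli variance would put coefficients on the log relative-risk scale, e.g.
#   fit_glm_scoring(X, y, np.exp, np.exp, lambda m: m * (1 - m))
# though log-binomial fits are numerically delicate and usually need a careful starting value.
```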