As we usually do, when we continue the discussion of a topic through more than one column, we continue the numbering of equations
from where we left off.

At this point we have already solved the main problem, but a nagging question comes to mind. In computing the sum-squared-error (SSE), we used the expression for the squared error in equation 1 and found that it gave only the trivial solution. At that time we had not yet introduced the use of Lagrange multipliers, so the natural question is: what would have happened if we had used the simpler expression for the SSE and applied to it the same constraint, that the sum of the squares of the coefficients had to equal a constant (unity)?

Let's make the same simplification of notation we did previously and follow through with this development. Simplifying equation 19 by using only three variables' worth of data, redefining the error matrix to represent the variance accounted for, and using only the variance terms of equation 19, the expression for SSE becomes
(where A, from the simplified form of equation 19, represents the total accounted-for difference between the principal component and the data, and SSA is the sum-squared accounted-for difference, corresponding to equation 28).

Then the corresponding expression for SSA becomes:
Note the absence of cross-terms (that is, *X*_{i}*X*_{j}). This development of the equations does not create the covariance terms that were present in the equations leading to the
final solution. We will see shortly that this has significant consequences.
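The structure of this simplified SSA can be checked symbolically. The sketch below is an illustration only: the symbol names (L1–L3, X1–X3) and the exact form of SSA are assumptions based on the description above, not the column's equation 28. Because only the variance terms are kept, each coefficient multiplies only its own variable before squaring, so expansion can produce no cross-products:

```python
# A minimal sympy sketch (symbol names and the form of SSA are
# assumptions, not the column's notation) of the simplified SSA:
# keeping only the variance terms means each L_i multiplies only
# its own X_i, so no X_i*X_j cross-terms can arise.
import sympy as sp

L1, L2, L3 = sp.symbols('L1 L2 L3')
X1, X2, X3 = sp.symbols('X1 X2 X3')

# Sum-squared accounted-for difference, variance terms only
SSA = (L1*X1)**2 + (L2*X2)**2 + (L3*X3)**2

expanded = sp.expand(SSA)
print(expanded)
print(expanded.coeff(X1*X2))   # 0: no cross-term survives expansion
```

This is what distinguishes the present development from the earlier one: the covariance (cross-product) terms that drove the earlier solution never appear.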

We now introduce the constraint *L*_{1,1}^{2} + *L*_{2,1}^{2} + *L*_{3,1}^{2} = 1 with the Lagrange multiplier, as we did before:
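The assembly of the constrained objective can be sketched symbolically as well. In the sketch below, the shorthand S1, S2, S3 for each variable's sum of squares is an assumption made for compactness (the column writes these sums out explicitly):

```python
# A sketch of the constrained objective: SSA augmented by the
# Lagrange term. S1, S2, S3 are hypothetical shorthand for the
# sums of squares of each variable's data.
import sympy as sp

L1, L2, L3, lam = sp.symbols('L1 L2 L3 lambda')
S1, S2, S3 = sp.symbols('S1 S2 S3', positive=True)

SSA = L1**2*S1 + L2**2*S2 + L3**2*S3
constraint = L1**2 + L2**2 + L3**2 - 1

# The objective to differentiate: SSA plus lambda times the constraint
objective = SSA + lam*constraint
print(sp.expand(objective))
```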

Again we take derivatives with respect to the various *L*_{i}:

Setting these derivatives equal to zero:
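The consequence of the missing cross-terms can be made concrete with a small numeric sketch. The sums of squares below (S1 = 4, S2 = 9, S3 = 1) are hypothetical values chosen only for illustration; solving the zeroed derivatives together with the constraint shows that every stationary point puts all of its weight on a single variable, since without cross-terms there is nothing to mix the variables together:

```python
# Solving the stationary conditions for a hypothetical example.
# Each derivative has the form 2*L_i*(S_i + lambda), so setting it
# to zero forces either L_i = 0 or lambda = -S_i.
import sympy as sp

L1, L2, L3, lam = sp.symbols('L1 L2 L3 lambda', real=True)
S1, S2, S3 = 4, 9, 1   # hypothetical sums of squares, for illustration

objective = L1**2*S1 + L2**2*S2 + L3**2*S3 + lam*(L1**2 + L2**2 + L3**2 - 1)

# Zeroed derivatives plus the unit-length constraint
eqs = [sp.diff(objective, v) for v in (L1, L2, L3)]
eqs.append(L1**2 + L2**2 + L3**2 - 1)

sols = sp.solve(eqs, [L1, L2, L3, lam], dict=True)
for s in sols:
    print(s)   # each solution has exactly one nonzero coefficient
```

In every solution, two of the three coefficients are zero and lambda equals the negative of the surviving variable's sum of squares, which is the kind of degenerate outcome this simplified development produces.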