# Axial Tomography by filtered least squares

#### The Solution

The classical solution to the equations

H **x** = **b** + **ε** (1)

which minimizes the sum of squares of errors **ε**^{T}**ε** (the least squares solution) is

H^{T}H **x** = H^{T}**b** (2)

H^{T}H is the normal, or Hessian, matrix, which by its definition is square and symmetric. If H^{T}H is non-singular it can be inverted to give the solution

**x** = (H^{T}H)^{-1} H^{T} **b** (3)

However, in our case **b** is shorter than **x** (we do not have enough observations to determine the density **x**). Because of the form of H, the vector H^{T}**b** is longer than **b**. Normally in least squares there is a reduction of dimensionality on pre-multiplying by H^{T}. In the present case, however, since the number of unknowns exceeds the number of observations, the dimensionality of the normal matrix is actually increased. Equation (2) is ill-determined and cannot be solved by normal methods.
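A quick numerical check makes the difficulty concrete. This is only a sketch: the matrix `H` below is an arbitrary random stand-in for the projection geometry, with 4 observations (rays) and 6 unknowns (pixels).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fewer observations (rows) than unknowns (columns): 4 rays, 6 pixels.
H = rng.standard_normal((4, 6))

N = H.T @ H  # the 6x6 normal (Hessian) matrix

# rank(H^T H) = rank(H) <= 4 < 6, so N is singular and cannot be inverted.
print(np.linalg.matrix_rank(N))       # -> 4
print(abs(np.linalg.det(N)) < 1e-10)  # -> True
```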

We proceed by expressing H^{T}H in diagonal form. Since H^{T}H is symmetric and positive semi-definite, one can find a transformation

H^{T}H = V^{T}ΛV (4)

where Λ is a diagonal matrix of the eigenvalues λ_{i} of H^{T}H, and V is a matrix whose rows are the orthonormal eigenvectors of H^{T}H. Premultiplication by V rotates into a representation where the eigenvectors become the principal axes of the problem. Substituting (4) in (2) we have
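In NumPy this decomposition can be sketched with `np.linalg.eigh` (the test matrix `H` is an arbitrary stand-in; note `eigh` returns eigenvectors as columns, so its output is transposed to match the row convention of (4)):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 6))
N = H.T @ H

# eigh is for symmetric matrices; eigenvalues come back in ascending order.
lam, vecs = np.linalg.eigh(N)  # vecs holds eigenvectors as COLUMNS
V = vecs.T                     # rows of V are now the eigenvectors, as in (4)

# Verify H^T H = V^T Lambda V
assert np.allclose(N, V.T @ np.diag(lam) @ V)

# Two eigenvalues vanish, since rank(H^T H) = 4 < 6.
print(np.round(lam, 10))
```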

V^{T}ΛV**x** = H^{T}**b** (5)

or, pre-multiplying by V (using VV^{T} = I),

ΛV**x** = VH^{T}**b** (6)

We are now in a representation where the principal axes of the Hessian (normal) matrix form the basis, and all the equations have been decoupled. We may now solve (6) element by element by dividing by λ_{i}:

(V**x**)_{i} = (1/λ_{i}) (VH^{T}**b**)_{i} (7)

We simply stop solving for elements of V**x** when λ_{i} becomes small compared with the errors in the observations. Hence we determine some (but not all) of the values of V**x**; the rest we set to zero. We call this truncated vector (V**x**)’. Rotating (transforming) back gives

**x** = V^{T} (V**x**)’ (8)

which is the required solution.
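The whole procedure of (4)–(8) can be sketched as a short function. This is an illustration on an arbitrary random test geometry, not the original implementation; the hypothetical parameter `tol` plays the role of "λ small compared with the errors in the observations".

```python
import numpy as np

def truncated_ls(H, b, tol=1e-8):
    """Least-squares solution of H x = b + eps via a truncated
    eigendecomposition of the normal matrix, following (4)-(8)."""
    N = H.T @ H
    lam, vecs = np.linalg.eigh(N)     # (4): H^T H = V^T Lambda V
    V = vecs.T                        # rows of V = orthonormal eigenvectors
    rhs = V @ (H.T @ b)               # (6): right-hand side V H^T b
    Vx = np.zeros_like(rhs)
    keep = lam > tol * lam.max()      # truncate: drop small eigenvalues
    Vx[keep] = rhs[keep] / lam[keep]  # (7): elementwise solve -> (Vx)'
    return V.T @ Vx                   # (8): rotate back to x

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 6))  # 4 observations, 6 unknowns
b = rng.standard_normal(4)

x = truncated_ls(H, b)

# Agrees with the Moore-Penrose pseudoinverse (minimum-norm) solution.
print(np.allclose(x, np.linalg.pinv(H) @ b))  # -> True
```

With all near-zero eigenvalues truncated, this construction coincides with the minimum-norm least-squares solution, which is why the comparison against `np.linalg.pinv` holds.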