diagonal of $\mathbf{A}$ control the transformations along the $x_1'$ and $x_2'$ axes, and the nondiagonal terms control the transformation dependency between both axes (see, for example, figure 2.6).
The determinant of a square matrix $\mathbf{A}$ measures how much the transformation contracts or expands the space:

• $\det(\mathbf{A}) = 1$: preserves the space/volume
• $\det(\mathbf{A}) = 0$: collapses the space/volume along a subset of dimensions, for example, 2-D space $\to$ 1-D space (see figure 2.7)
In the examples presented in figure 2.5a–c, the determinant quantifies how much the area/volume is changed in the transformed space; for the circle, it corresponds to the change of area caused by the transformation. As shown in figure 2.5a, if $\mathbf{A} = \mathbf{I}$, the transformation has no effect, so $\det(\mathbf{A}) = 1$. For a square matrix $[\mathbf{A}]_{n\times n}$, $\det(\mathbf{A}) : \mathbb{R}^{n\times n} \to \mathbb{R}$.
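As a minimal numeric sketch of these properties, the determinants above can be computed with NumPy; the two matrices are taken from figures 2.6 and 2.7:

```python
import numpy as np

# Identity: the transformation has no effect, so the area is preserved
I = np.eye(2)
print(np.linalg.det(I))  # 1.0

# The transformation from figure 2.6: introduces dependency between axes
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(np.linalg.det(A))  # 0.75: the area is contracted

# The nearly singular matrix from figure 2.7: 2-D space almost
# collapses onto a 1-D subspace, so the determinant is close to 0
B = np.array([[1.0, 0.99],
              [0.99, 1.0]])
print(np.linalg.det(B))  # ~0.02
```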
2.4.2 Eigen Decomposition
Linear transformations operate on several dimensions, such as in the case presented in figure 2.6, where the transformation introduces dependency between variables. Eigen decomposition enables finding a linear transformation that removes the dependency while preserving the area/volume. A square matrix $[\mathbf{A}]_{n\times n}$ can be decomposed in eigenvectors $\{\boldsymbol{\nu}_1, \cdots, \boldsymbol{\nu}_n\}$ and eigenvalues $\{\lambda_1, \cdots, \lambda_n\}$. In its matrix form,

$$\mathbf{A} = \mathbf{V}\,\mathrm{diag}(\boldsymbol{\lambda})\,\mathbf{V}^{-1},$$

where $\mathbf{V} = [\boldsymbol{\nu}_1 \cdots \boldsymbol{\nu}_n]$ and $\boldsymbol{\lambda} = [\lambda_1 \cdots \lambda_n]^{\intercal}$.
Figure 2.6 presents the eigen decomposition of the transformation $\mathbf{x}' = \mathbf{A}\mathbf{x}$. Eigenvectors $\boldsymbol{\nu}_1$ and $\boldsymbol{\nu}_2$ describe the new referential into which the transformation is independently applied to each axis. Eigenvalues $\lambda_1$ and $\lambda_2$ describe the transformation magnitude along each eigenvector.
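A short sketch of this decomposition with NumPy, using the matrix from figure 2.6 (`np.linalg.eig` is a standard way to obtain $\mathbf{V}$ and $\boldsymbol{\lambda}$; the ordering and signs of the eigenvectors it returns are arbitrary):

```python
import numpy as np

# Transformation matrix from figure 2.6
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# Eigen decomposition: columns of V are eigenvectors, lam holds eigenvalues
lam, V = np.linalg.eig(A)
print(np.sort(lam))  # [0.5 1.5]

# Reconstruct A = V diag(lam) V^{-1}
A_rec = V @ np.diag(lam) @ np.linalg.inv(V)
print(np.allclose(A_rec, A))  # True

# In the eigenvector referential, the transformation acts independently
# on each axis: V^T A V is diagonal (A is symmetric, so V is orthonormal)
print(np.allclose(V.T @ A @ V, np.diag(lam)))  # True
```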
[Figure 2.6: Example of eigen decomposition, $\mathbf{A} = \mathbf{V}\,\mathrm{diag}(\boldsymbol{\lambda})\,\mathbf{V}^{-1}$, for the transformation $\mathbf{x}' = \mathbf{A}\mathbf{x}$ with $\mathbf{A} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}$, $\mathbf{V} = [\boldsymbol{\nu}_1~\boldsymbol{\nu}_2] = \begin{bmatrix} 0.71 & 0.71 \\ -0.71 & 0.71 \end{bmatrix}$, $\boldsymbol{\lambda} = [0.5~~1.5]^{\intercal}$.]
[Figure 2.7: Example of a nearly singular transformation, with axes $x_1'$ and $x_2'$: $\mathbf{A} = \begin{bmatrix} 1 & 0.99 \\ 0.99 & 1 \end{bmatrix}$, $\det(\mathbf{A}) = 0.02$.]
A matrix is positive definite if all eigenvalues $\lambda_i > 0$, and a matrix is positive semidefinite (PSD) if all eigenvalues $\lambda_i \geq 0$. The determinant of a matrix corresponds to the product of its eigenvalues.
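Both statements can be checked numerically; a minimal sketch with NumPy, again using the matrix from figure 2.6 (`np.linalg.eigvalsh` is a standard routine for the eigenvalues of a symmetric matrix):

```python
import numpy as np

# Symmetric matrix from figure 2.6
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
lam = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix

# The determinant equals the product of the eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(lam)))  # True

# All eigenvalues are strictly positive, so A is positive definite
print(np.all(lam > 0))  # True
```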
Therefore, in the case where one eigenvalue equals zero, it indicates that two or more dimensions are linearly dependent and have collapsed into a single one. The transformation matrix is then said to be singular. Figure 2.7 presents an example of a nearly singular transformation. For a positive semidefinite matrix $\mathbf{A}$ and for any