Singular Value Decomposition

June 16, 2019

Singular Value Decomposition (SVD)

This is the final and best decomposition for a matrix $A$, and it does not require $A$ to be square: we write $A = U\Sigma V^T$, where $U$ and $V$ are orthogonal and $\Sigma$ is diagonal. If $A$ is square we can try to diagonalize it as $A = X\Lambda X^{-1}$, and if $A$ is symmetric positive semidefinite its SVD is just the eigendecomposition $A = Q\Lambda Q^T$. But in general we are after orthogonal factors, and in $A = X\Lambda X^{-1}$ the eigenvector matrix $X$ is not orthogonal, so that is no good. (According to the book, eigenvalues are unstable under perturbation but singular values are stable.) What are $U$, $\Sigma$, and $V$ exactly? Imagine an $m \times n$ matrix $A$:

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_m^T \end{bmatrix} \tag{1}$$
It can be thought of as being constructed in two ways: either from its $n$ column vectors $a_1, \ldots, a_n$, or from its $m$ row vectors $r_1^T, \ldots, r_m^T$ (neglect the transpose for now). These columns or rows do not have to be independent. The columns all have size $m \times 1$, while the rows have size $1 \times n$; to make it prettier we can get rid of the transpose, and then the $r_i$ all have size $n \times 1$. What happens if we try $Ax$? The product $Ax$ has size $m \times 1$. In fact, $A$ transforms a vector in the row space into a vector in the column space. Why? Say $A$ is $3 \times 4$ with rank 2. Then the column space sits inside $\mathbb{R}^3$ and the row space sits inside $\mathbb{R}^4$, but each only has dimension 2, and take $x$ to be a vector in that row space. The product $Ax$ has size $3 \times 1$ and lies exactly in the column space. This can be generalized to any rank and arbitrary sizes. Let $A$ have size $m \times n$; then

$$Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n, \tag{2}$$

which is a linear combination of the columns in (1).
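
For example, with numbers chosen purely for illustration, a $2 \times 3$ matrix times a vector is

$$\begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix}\begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix} = 3\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 1\begin{bmatrix} 0 \\ 1 \end{bmatrix} + 2\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 7 \\ 3 \end{bmatrix},$$

so $Ax$ is always a combination of the columns and therefore lands in the column space.
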
Now first let $v_1, v_2, \ldots, v_r$ be an orthonormal basis for the row space. This is saying

$$v_i^T v_j = 0 \quad \text{for } i \neq j,$$
but right now each $v_i$ also has to be a unit vector, $\|v_i\| = 1$. When we do $Av_1, Av_2, \ldots, Av_r$, we are just transforming a bunch of vectors into the column space. Let

$$u_1, u_2, \ldots, u_r$$
be an orthonormal basis for the column space. Now form $A\begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix}$, which has the same size as $\begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix}$ (both are $m \times r$). Since we know each column of $A\begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix}$ is in the column space and $u_1, \ldots, u_r$ is an orthonormal basis for that column space, we have

$$A\begin{bmatrix} v_1 & v_2 & \cdots & v_r \end{bmatrix} = \begin{bmatrix} u_1 & u_2 & \cdots & u_r \end{bmatrix}\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}, \tag{3}$$

where the matrix on the right is diagonal, just a bunch of constants: each $u_i$ is chosen to point along $Av_i$, and the particular $v_i$ we find below are exactly what make these $u_i$ come out orthonormal.
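
Read column by column, (3) is just saying

$$Av_1 = \sigma_1 u_1, \quad Av_2 = \sigma_2 u_2, \quad \ldots, \quad Av_r = \sigma_r u_r,$$

that is, $A$ sends each basis vector of the row space to a stretched basis vector of the column space.
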
Now to make the SVD complete, we can extend $v_1, \ldots, v_r$ to a full orthonormal basis $v_1, \ldots, v_n$ of $\mathbb{R}^n$, and $u_1, \ldots, u_r$ to $u_1, \ldots, u_m$ of $\mathbb{R}^m$ (in fact we do not care much about these extra vectors):

$$A\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 & u_2 & \cdots & u_m \end{bmatrix}\Sigma \tag{4}$$
You probably notice the size of $\Sigma$ doesn't quite match what we had in (3): it is now $m \times n$ rather than $r \times r$. In fact you are right. What we append after $v_r$, namely $v_{r+1}, \ldots, v_n$, is a basis for the null space, not the left null space (the extra $u$'s take care of that side), and the new diagonal constants are all zero because

$$Av_i = 0 \quad \text{for } i = r+1, \ldots, n.$$
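
For instance, in the $3 \times 4$, rank-2 case from earlier, (4) has the shape

$$A\begin{bmatrix} v_1 & v_2 & v_3 & v_4 \end{bmatrix} = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}\begin{bmatrix} \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

so the only nonzero entries of the $3 \times 4$ matrix $\Sigma$ are $\sigma_1$ and $\sigma_2$.
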
So our final SVD:

$$\begin{aligned}
AV &= U\Sigma \\
A &= U\Sigma V^{-1} \\
A &= U\Sigma V^{T}
\end{aligned}$$
We get from line 2 to line 3 because $V$'s columns are orthonormal, so $V^{-1} = V^T$. And conventionally, people put the singular values in $\Sigma$ in descending order, the greatest first.

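As a quick sanity check of the properties above, here is a minimal numpy sketch on a random matrix (the sizes are arbitrary, and note that numpy's np.linalg.svd returns $V^T$ rather than $V$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))          # m = 4, n = 6

U, s, Vt = np.linalg.svd(A)              # full SVD: U is 4x4, Vt is 6x6, s has length 4
V = Vt.T

print(np.allclose(U.T @ U, np.eye(4)))   # U is orthogonal
print(np.allclose(V.T @ V, np.eye(6)))   # V is orthogonal
print(np.all(np.diff(s) <= 0))           # singular values come greatest first

# Rebuild the m x n Sigma and check A = U Sigma V^T.
Sigma = np.zeros((4, 6))
Sigma[:4, :4] = np.diag(s)
print(np.allclose(U @ Sigma @ V.T, A))

# Column by column: A v_i = sigma_i u_i for i = 1, ..., 4.
print(np.allclose(A @ V[:, :4], U * s))
```
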
How to find them

So how do we find these matrices? Note that

$$A^TA = (U\Sigma V^T)^T (U\Sigma V^T) = V\Sigma^T U^T U\Sigma V^T = V\Sigma^T\Sigma V^T.$$
This is similar to $Q\Lambda Q^T$, right? Therefore the $\sigma_i^2$, the diagonal entries of $\Sigma^T\Sigma$, are the eigenvalues of $A^TA$, and $V$'s columns are the eigenvectors of $A^TA$. So $\Sigma$ is composed of the square roots of $A^TA$'s eigenvalues. Similarly, $U$'s columns are the eigenvectors of $AA^T$, which has the same nonzero eigenvalues. So the recipe is: compute $A^TA$ and find its eigenvectors and eigenvalues; normalize the eigenvectors, because we need an orthonormal basis, and stack them as the columns of $V$; put the square roots of the eigenvalues on the diagonal of $\Sigma$, greatest first. We can find $U$ by doing the same with $AA^T$, or we can write out $AV = U\Sigma$ and read off $u_i = Av_i / \sigma_i$. Let's do an example.
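
Take, say, the small matrix below; this particular $A$ is just a convenient choice whose arithmetic stays clean:

$$A = \begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}, \qquad A^TA = \begin{bmatrix} 4 & -3 \\ 4 & 3 \end{bmatrix}\begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix} = \begin{bmatrix} 25 & 7 \\ 7 & 25 \end{bmatrix}.$$

The eigenvalues of $A^TA$ are $25 + 7 = 32$ and $25 - 7 = 18$, with normalized eigenvectors $\tfrac{1}{\sqrt 2}(1, 1)$ and $\tfrac{1}{\sqrt 2}(1, -1)$, so

$$V = \frac{1}{\sqrt 2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} \sqrt{32} & 0 \\ 0 & \sqrt{18} \end{bmatrix}.$$

Reading off $u_i = Av_i / \sigma_i$ gives $u_1 = (1, 0)$ and $u_2 = (0, -1)$, and indeed

$$U\Sigma V^T = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \sqrt{32} & 0 \\ 0 & \sqrt{18} \end{bmatrix}\frac{1}{\sqrt 2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix} = A.$$

The same recipe is easy to run numerically. Below is a minimal numpy sketch, assuming $A$ has full column rank so that no singular value is zero (otherwise the division by $\sigma_i$ breaks down); np.linalg.eigh and np.linalg.svd do the actual work:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))        # any m x n matrix with full column rank

# Eigendecompose A^T A: eigenvalues are sigma_i^2, eigenvectors are the columns of V.
evals, V = np.linalg.eigh(A.T @ A)     # eigh returns eigenvalues in ascending order
order = np.argsort(evals)[::-1]        # reorder: greatest first, per the convention above
evals, V = evals[order], V[:, order]
sigma = np.sqrt(evals)                 # eigenvalues of A^T A are nonnegative

# Read off each u_i from A v_i = sigma_i u_i.
U = A @ V / sigma

# Check against numpy's own SVD (singular values are unique) and rebuild A.
print(np.allclose(sigma, np.linalg.svd(A, compute_uv=False)))   # True
print(np.allclose(U @ np.diag(sigma) @ V.T, A))                 # True
```
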
Singular Value Decomposition - June 16, 2019 - Ruizhen Mai