Multiplication and Inverse Matrices
May 21, 2019
Review of Matrix Multiplication
A Row $\times$ A Column
The most basic way of multiplying two matrices. Let's have

$$AB = C$$

where they are all matrices, $A$ is $m \times n$ and $B$ is $n \times p$. Then the dimension of $C$ will be $m \times p$. Let $c_{ij}$ be the element at row $i$ and column $j$ in matrix $C$. Say we want to find $c_{34}$:

$$c_{34} = (\text{row 3 of } A) \cdot (\text{column 4 of } B) = a_{31}b_{14} + a_{32}b_{24} + \cdots + a_{3n}b_{n4} = \sum_{k=1}^{n} a_{3k}b_{k4}$$
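As a quick sanity check, here is a small NumPy sketch (matrix sizes and entries are made up for illustration) comparing the row-times-column dot product against the built-in matrix product:

```python
import numpy as np

# Illustrative sizes: A is 4x5, B is 5x6, so C = AB is 4x6.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 6))

C = A @ B

# c_34 (1-indexed) is row 3 of A dotted with column 4 of B.
c_34 = A[2, :] @ B[:, 3]
assert c_34 == C[2, 3]
```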
A Matrix $\times$ Columns
To build on this, let's think of multiplying two matrices column by column. Each column in $C$ is equal to matrix $A$ times the corresponding column in $B$:

$$AB = A\begin{bmatrix} b_1 & b_2 & \cdots & b_p \end{bmatrix} = \begin{bmatrix} Ab_1 & Ab_2 & \cdots & Ab_p \end{bmatrix} = C$$

Column 1 in the new matrix is equal to $Ab_1$ (matrix multiplying a vector). To expand one of the multiplications:

$$Ab_1 = b_{11}a_1 + b_{21}a_2 + \cdots + b_{n1}a_n$$

where $a_1, \dots, a_n$ are the columns of $A$. This tells us that the columns of $C$ are combinations of the columns of $A$.
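A short NumPy sketch (again with made-up matrices) checking that column $j$ of $C$ is $A$ times column $j$ of $B$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 6))
C = A @ B

# Each column of C is A times the corresponding column of B,
# i.e. a combination of the columns of A.
for j in range(B.shape[1]):
    assert np.array_equal(C[:, j], A @ B[:, j])
```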
Rows $\times$ A Matrix
Similarly, row $i$ of $C$ is equal to row $i$ of $A$ times the matrix $B$, so the rows of $C$ are combinations of the rows of $B$.
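The mirrored check in NumPy, with the same kind of illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 6))
C = A @ B

# Each row of C is the corresponding row of A times B,
# i.e. a combination of the rows of B.
for i in range(A.shape[0]):
    assert np.array_equal(C[i, :], A[i, :] @ B)
```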
A Column $\times$ A Row
A column of $A$ will be dimension $m \times 1$ and a row of $B$ will be $1 \times p$. For example, let's have

$$\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}\begin{bmatrix} 1 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 12 \\ 3 & 18 \\ 4 & 24 \end{bmatrix}$$

The output matrix is very special. Each column of the output is a multiple of the column $\begin{bmatrix} 2 & 3 & 4 \end{bmatrix}^{T}$. Each row of the output is a multiple of the row $\begin{bmatrix} 1 & 6 \end{bmatrix}$. And later we will see this is a rank-one matrix, where the row space and column space are both just lines. Thus the 4th way of multiplying two matrices is

$$AB = \sum_{k=1}^{n} (\text{column } k \text{ of } A)(\text{row } k \text{ of } B)$$

that is, for example, with two columns,

$$AB = \begin{bmatrix} a_1 & a_2 \end{bmatrix}\begin{bmatrix} b_1^{\ast} \\ b_2^{\ast} \end{bmatrix} = a_1 b_1^{\ast} + a_2 b_2^{\ast}$$

where $a_k$ is column $k$ of $A$ and $b_k^{\ast}$ is row $k$ of $B$.
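A NumPy sketch (illustrative matrices) checking that the sum of column-times-row outer products reproduces $AB$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 6))

# AB as a sum of rank-one pieces: column k of A times row k of B.
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
assert np.array_equal(outer_sum, A @ B)
```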
Blocks
We can also multiply block by block: cut $A$ and $B$ into compatible blocks and treat the blocks like numbers. For a $2 \times 2$ block partition, the top-left block of the product is $C_{11} = A_{11}B_{11} + A_{12}B_{21}$, and similarly for the other blocks.
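A NumPy sketch (block sizes chosen arbitrarily) verifying the block rule for the top-left block:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.integers(-3, 4, size=(4, 6))
B = rng.integers(-3, 4, size=(6, 5))

# Split A into 2x2 blocks (rows cut at 2, columns at 3) and B compatibly (rows cut at 3).
A11, A12 = A[:2, :3], A[:2, 3:]
B11, B21 = B[:3, :2], B[3:, :2]

C = A @ B
assert np.array_equal(C[:2, :2], A11 @ B11 + A12 @ B21)
```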
Inverse (square matrices)
Not all matrices have inverses. For a square matrix $A$ that has an inverse,

$$A^{-1}A = I = AA^{-1}$$

Note that for a non-square matrix the left inverse is not the same as the right inverse. A matrix that has an inverse is called invertible or non-singular.
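A quick NumPy check of the two-sided identity on an invertible matrix (entries made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 7.0]])
A_inv = np.linalg.inv(A)

I = np.eye(2)
assert np.allclose(A_inv @ A, I) and np.allclose(A @ A_inv, I)
print(A_inv)   # [[ 7. -3.], [-2.  1.]]
```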
Let’s talk about the singular case first.
There are various ways to explain why a matrix has no inverse. For now, the professor thinks the best is to say: if there is a non-zero vector $x$ such that

$$Ax = 0$$

then the matrix $A$ is singular. The straightforward reasoning is that if an $A^{-1}$ existed, we would have a contradiction:

$$x = A^{-1}Ax = A^{-1}0 = 0$$

but that's not true, since we have assumed that $x$ is not zero.
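A sketch with an illustrative singular matrix (made-up entries whose columns are parallel) and a non-zero vector in its null space:

```python
import numpy as np

# The second column is 3 times the first, so A is singular: no inverse exists.
A = np.array([[1.0, 3.0],
              [2.0, 6.0]])

# A non-zero x with Ax = 0 certifies that A has no inverse.
x = np.array([3.0, -1.0])
assert np.allclose(A @ x, 0)
print(np.linalg.det(A))   # 0.0, confirming singularity
```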
Gauss-Jordan (solve 2 equations at once)
Finding the inverse of a matrix can be viewed as solving systems of equations. For a $2 \times 2$ matrix $A$, the equation $AA^{-1} = I$ is 2 systems of equations,

$$A x_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad A x_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

where $x_1$ and $x_2$ are the columns of $A^{-1}$. To solve both systems at once, we use the Gauss-Jordan method. First we augment the matrix we want to invert with $I$,

$$\left[\, A \mid I \,\right]$$

then we do row elimination to get the upper triangular form; this idea is from Gauss. Then, with Jordan's idea, we keep eliminating upward from the upper triangular form to clear the entries above the pivots (and divide each row by its pivot), which turns the left-hand side $A$ into $I$; the right-hand side $I$ will then become $A^{-1}$:

$$\left[\, A \mid I \,\right] \longrightarrow \left[\, I \mid A^{-1} \,\right]$$
Why does this work? We have seen before that row elimination is equivalent to multiplying by an elimination matrix on the left. Collect all the row operations of Gauss-Jordan into a single matrix $E$; applying them to the augmented matrix gives

$$E \left[\, A \mid I \,\right] = \left[\, EA \mid E \,\right] = \left[\, I \mid E \,\right]$$

And since obviously $EA = I$, we have $E = A^{-1}$, so the block sitting on the right at the end is exactly $A^{-1}$.
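A minimal Python/NumPy sketch of Gauss-Jordan elimination on the augmented matrix $[A \mid I]$, done here in a single pass rather than the two phases described above, and assuming for simplicity that no row swaps are needed; the right half ends up as $A^{-1}$:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing [A | I] to [I | A^{-1}].

    Simplified sketch: assumes A is square, invertible, and needs no row swaps.
    """
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])   # build [A | I]
    for i in range(n):
        aug[i] /= aug[i, i]                   # scale the pivot row so the pivot is 1
        for j in range(n):
            if j != i:
                aug[j] -= aug[j, i] * aug[i]  # clear the pivot column in every other row
    return aug[:, n:]                          # right half is now A^{-1}

A = np.array([[1.0, 3.0],
              [2.0, 7.0]])
A_inv = gauss_jordan_inverse(A)
assert np.allclose(A @ A_inv, np.eye(2))
print(A_inv)   # [[ 7. -3.], [-2.  1.]]
```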