# Lecture 3 Notes: MIT Linear Algebra (18.06) [Multiplication and Inverse Matrices]

Important Points and Concepts

– When are 2 matrices multipliable? And what would the result look like?

– What are the ways we can multiply 2 matrices?

– What is an inverse matrix? And how do we find it?

### Multiplication

When are 2 matrices multipliable? And what would the result look like?

Let A be an m x n matrix and B an n x m matrix. Can we multiply A and B?

An easy way to tell if 2 matrices are multipliable is to check whether the number of columns of matrix A equals the number of rows of matrix B. If they are equal, we can multiply them. The output will have the same number of rows as A and the same number of columns as B.

(m x n)(n x m) = m x m

In this case, AB will result in an m x m matrix, let’s call it C.

Let’s do another quick example: A(m x n)B(n x p) = C(???)

Matrix A is an m x n matrix and B is an n x p matrix. First, can we multiply them? And if so, what would the output look like in terms of rows and columns?

We can see that A has n columns and B has n rows, which means that we can multiply these 2 matrices. C would have the number of rows of A (m) and the number of columns of B (p). Therefore, C would be an m x p matrix.
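The shape rule above can be sketched in a few lines of plain Python (the helper name `product_shape` is made up for illustration):

```python
def product_shape(a_shape, b_shape):
    """Shape of the product AB, or None when the product is undefined."""
    m, n = a_shape          # A is m x n
    rows_b, p = b_shape     # B is rows_b x p
    if n != rows_b:         # columns of A must equal rows of B
        return None
    return (m, p)           # AB inherits A's rows and B's columns

print(product_shape((2, 3), (3, 2)))  # (2, 2)
print(product_shape((2, 3), (2, 3)))  # None
```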

What are the ways we can multiply 2 matrices?

Ways to multiply

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{matrix} \hphantom{-}\Bigg] \hphantom{-} = \hphantom{-} ???$

These matrices are multipliable; since they are 2 x 3 and 3 x 2, the result is a 2 x 2 matrix.

Let’s call the first matrix P, the second matrix Q and the resulting matrix Z.


Dot product (standard way)

To find the top left number of Z, we take the first row of P [1, 2, 3] and the first column of Q [7, 9, 11]. We perform an operation called the dot product, which looks like this: (1, 2, 3) * (7, 9, 11).

The operation gives (1 * 7) + (2 * 9) + (3 * 11) = 58, so the top left number of Z is 58.

For the top right value, we perform the same operation with the same row [1, 2, 3] but a different column. Instead we use the second column of Q [8, 10, 12], since we are trying to find the (1, 2) entry of Z. Notice how we change which row and column we multiply based on the row and column of Z we are filling in.

The operation gives (1 * 8) + (2 * 10) + (3 * 12) = 64, so the top right number of Z is 64.

I won’t solve it all the way, but you get the idea. For the bottom left of Z (2, 1), we’d be taking the dot product of the second row of P and the first column of Q.
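The dot-product rule translates directly into code. A plain-Python sketch (no libraries assumed; `matmul` is a made-up helper name):

```python
def matmul(P, Q):
    """Entry (i, j) of the product is the dot product of row i of P with column j of Q."""
    n = len(Q)  # inner dimension: columns of P == rows of Q
    return [[sum(P[i][k] * Q[k][j] for k in range(n))
             for j in range(len(Q[0]))]
            for i in range(len(P))]

P = [[1, 2, 3], [4, 5, 6]]
Q = [[7, 8], [9, 10], [11, 12]]
print(matmul(P, Q))  # [[58, 64], [139, 154]]
```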

Column Way (columns of Z are combinations of columns of P)

How do we multiply a matrix by a column?

Let’s find the first column of Z: P times the first column of Q.

And that’s it, we repeat this process for each column of Q to produce each column of Z.
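A sketch of the column picture in plain Python (helper names are made up): each column of Z is P times the corresponding column of Q.

```python
def mat_times_col(P, col):
    """P times a column vector; the result is a combination of P's columns."""
    return [sum(P[i][k] * col[k] for k in range(len(col)))
            for i in range(len(P))]

P = [[1, 2, 3], [4, 5, 6]]
Q = [[7, 8], [9, 10], [11, 12]]
cols_of_Q = [[row[j] for row in Q] for j in range(len(Q[0]))]
cols_of_Z = [mat_times_col(P, c) for c in cols_of_Q]
Z = [list(r) for r in zip(*cols_of_Z)]  # re-assemble the columns into rows
print(Z)  # [[58, 64], [139, 154]]
```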

Row way (rows of  Z are combinations of rows of Q)

How do we multiply a row by a matrix?

Let’s find the first row of Z: the first row of P times Q.

Very similar to the column way. We repeat this process for each row of P.
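The mirror-image sketch for the row picture (again plain Python, made-up helper names): each row of Z is a row of P times Q.

```python
def row_times_mat(row, Q):
    """A row vector times Q; the result is a combination of Q's rows."""
    return [sum(row[k] * Q[k][j] for k in range(len(Q)))
            for j in range(len(Q[0]))]

P = [[1, 2, 3], [4, 5, 6]]
Q = [[7, 8], [9, 10], [11, 12]]
Z = [row_times_mat(r, Q) for r in P]  # one row of Z per row of P
print(Z)  # [[58, 64], [139, 154]]
```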

Columns x Rows

A column of P times a row of Q is (2 x 1)(1 x 2), which gives a full 2 x 2 matrix. Z is the sum of these column-times-row products, one for each of the three column/row pairs.

A quick example:

$\Bigg[ \hphantom{-} \begin{matrix} 2 \\ 3 \\ 4 \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} 1 & 6 \end{matrix} \hphantom{-}\Bigg] \hphantom{-} = \hphantom{-} \Bigg[ \hphantom{-} \begin{matrix} 2 & 12 \\ 3 & 18 \\ 4 & 24 \end{matrix} \hphantom{-}\Bigg]$

The first row of the result is [2 * 1, 2 * 6] = [2, 12]. The second row is [3 * 1, 3 * 6] = [3, 18], and the third row is [4, 24]. The pattern is easy to see.
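The column-times-row pattern, plus the sum that rebuilds the full product, sketched in plain Python (helper names are made up):

```python
def outer(col, row):
    """A column (n x 1) times a row (1 x p) gives an n x p matrix."""
    return [[c * r for r in row] for c in col]

print(outer([2, 3, 4], [1, 6]))  # [[2, 12], [3, 18], [4, 24]]

# The full product is the SUM of (column k of P) times (row k of Q):
P = [[1, 2, 3], [4, 5, 6]]
Q = [[7, 8], [9, 10], [11, 12]]
Z = [[0, 0], [0, 0]]
for k in range(3):
    piece = outer([P[0][k], P[1][k]], Q[k])
    Z = [[Z[i][j] + piece[i][j] for j in range(2)] for i in range(2)]
print(Z)  # [[58, 64], [139, 154]]
```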

Block

Let’s say we want to multiply matrices A and B, both 20 x 20. The main concept is that we can break matrices up into blocks: break A into four 10 x 10 blocks and B into four 10 x 10 blocks. The blocks then multiply just like the entries of a 2 x 2 matrix: the top left block of AB is (top left of A)(top left of B) + (top right of A)(bottom left of B).
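A small sanity check of the block rule, using 4 x 4 matrices split into 2 x 2 blocks (plain Python; `matmul`, `madd`, and `block` are made-up helpers):

```python
import random

def matmul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, i, j):
    """The 2 x 2 block of a 4 x 4 matrix at block position (i, j)."""
    return [row[2 * j:2 * j + 2] for row in M[2 * i:2 * i + 2]]

random.seed(0)
A = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]
B = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]

# Top-left block of AB = A11*B11 + A12*B21, like 2 x 2 multiplication by entries
C11 = madd(matmul(block(A, 0, 0), block(B, 0, 0)),
           matmul(block(A, 0, 1), block(B, 1, 0)))
print(C11 == block(matmul(A, B), 0, 0))  # True
```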

### Inverses

What is an inverse matrix? And how do we find it?

Let A be a square matrix.

$A^{-1} A = I \\ AA^{-1} = I$

The inverse of A times A is the identity matrix, I. For square matrices (and only for square matrices), the inverse works from both sides.

A matrix has no inverse if

– The determinant is zero (for a 2 x 2 matrix: the product down the main diagonal minus the product down the other diagonal, ad - bc, is zero)

– Both columns lie on the same line (for example if the second column in a 2 x 2 matrix is a multiple of the first)

– You can find a nonzero vector x with Ax = 0
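The 2 x 2 determinant test, checked in plain Python (the singular example `[[1, 2], [3, 6]]` is my own; its second column is twice the first, and x = [2, -1] solves Ax = 0):

```python
def det_2x2(A):
    """ad - bc for A = [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    return a * d - b * c

print(det_2x2([[1, 3], [2, 7]]))  # 1 -> invertible
print(det_2x2([[1, 2], [3, 6]]))  # 0 -> singular: column 2 = 2 * column 1

# For the singular matrix, the nonzero vector x = [2, -1] gives Ax = 0:
A, x = [[1, 2], [3, 6]], [2, -1]
print([A[0][0] * x[0] + A[0][1] * x[1],
       A[1][0] * x[0] + A[1][1] * x[1]])  # [0, 0]
```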

Gauss-Jordan (solve 2 equations at once)

Solve for the inverse.

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 3 \\ 2 & 7 \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} a & b \\ c & d \end{matrix} \hphantom{-}\Bigg] \hphantom{-} = \hphantom{-} \Bigg[ \hphantom{-} \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \hphantom{-}\Bigg]$

First, create an augmented matrix that includes the identity.

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 3 & 1 & 0 \\ 2 & 7 & 0 & 1 \end{matrix} \hphantom{-}\Bigg]$

Recreate the identity matrix on the left side of this matrix using elimination and the right side will become the inverse.

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 3 & 1 & 0 \\ 2 & 7 & 0 & 1 \end{matrix} \hphantom{-}\Bigg] \rightarrow \Bigg[ \hphantom{-} \begin{matrix} 1 & 3 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{matrix} \hphantom{-}\Bigg] \rightarrow \Bigg[ \hphantom{-} \begin{matrix} 1 & 0 & 7 & -3 \\ 0 & 1 & -2 & 1 \end{matrix} \hphantom{-}\Bigg]$

The process is elimination the usual way, top down. Once we reach the bottom, we perform elimination bottom up to reproduce the identity matrix.

And as you can see, the right side becomes the inverse of our original matrix.

$EA = I \rightarrow E = A^{-1}$
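The Gauss-Jordan procedure can be sketched in plain Python (a minimal version that assumes every pivot is nonzero, so no row exchanges are needed; `inverse` is a made-up name):

```python
def inverse(A):
    """Gauss-Jordan: eliminate [A | I] down and then back up; the right half becomes A^-1."""
    n = len(A)
    # augmented matrix [A | I], using floats
    M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = M[col][col]               # assumed nonzero (no row exchanges here)
        M[col] = [x / pivot for x in M[col]]
        for row in range(n):
            if row != col:                # clear the column above AND below the pivot
                factor = M[row][col]
                M[row] = [x - factor * p for x, p in zip(M[row], M[col])]
    return [r[n:] for r in M]             # right half of [I | A^-1]

print(inverse([[1, 3], [2, 7]]))  # [[7.0, -3.0], [-2.0, 1.0]]
```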

# Lecture 2 Notes: MIT Linear Algebra (18.06) [Elimination with Matrices]

### Elimination with Matrices

Important point: A matrix times a column is a column, a matrix times a row is a row

Given the following equations, solve for x, y, and z using elimination and back substitution:

$\hphantom{3}x + 2y + z = 2 \\ 3x + 8y + z = 12 \\ \hphantom{3x +} 4y + z = 2$

We start by forming our matrix, leaving out the right side of the equations.

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 \\ 3 & 8 & 1 \\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg]$

### Elimination (success/failure)

Using elimination, our goal is to turn our original matrix into one that looks like this where n represents some arbitrary number:

$\Bigg[ \hphantom{-} \begin{matrix} n & n & n \\ 0 & n & n \\ 0 & 0 & n \end{matrix} \hphantom{-}\Bigg]$

To get to this destination we:
– go row by row
– keep the first row
– determine what number, multiplied by the row above and subtracted from the current row, gets us closer to our destination matrix

Step by step, here are what the matrices will look like

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 \\ 3 & 8 & 1 \\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg] \rightarrow \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg] \rightarrow \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 0 & 5 \end{matrix} \hphantom{-}\Bigg]$

We keep the first row, so we take a look at the second row. We ask ourselves: what must I do to the first row ([1, 2, 1]) so that, when it is subtracted from the second row ([3, 8, 1]), the first number becomes zero? Our aim is a row that looks like [0, n, n] (again, n being any arbitrary number). The answer is that we must multiply the first row by 3, which gives [3, 6, 3]. We then subtract this from the second row to get [0, 2, -2]. We now work with this new matrix when going to the third row.

We repeat this step for our new matrix.

We have to get a row that looks like [0, 0, n]. As we can see, we’re already halfway there since we were given the first 0 to begin with. Same process for the other 0: what must I do to the row above ([0, 2, -2]) so that, when it is subtracted from the third row ([0, 4, 1]), the first two numbers become zero? Similar to the process above, we must multiply the second row by 2, which gives [0, 4, -4]. We then subtract this from the third row to get [0, 0, 5]. Now, we have our final matrix.

Pivot numbers

The diagonal line of numbers starting from the top-left (1, 2, 5) are called pivot numbers. An important note is that if any of these numbers were zero to start with, we would have to switch rows around and try again.

Right hand side

Notice we haven’t touched the right side of the equations yet, but that’s an easy step to do afterwards. This is also the way software solves systems of equations.

Let’s take our matrices we solved through and let’s augment them by including the numbers from the right hand side of the equations. These matrices are known as augmented matrices.

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 & 2\\ 3 & 8 & 1 & 12\\ 0 & 4 & 1 & 2 \end{matrix} \hphantom{-}\Bigg] \rightarrow \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 & 2\\ 0 & 2 & -2 & 6\\ 0 & 4 & 1 & 2 \end{matrix} \hphantom{-}\Bigg] \rightarrow \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1 & 2\\ 0 & 2 & -2 & 6\\ 0 & 0 & 5 & -10 \end{matrix} \hphantom{-}\Bigg]$

You get the new fourth column by applying the same row operations: take the multiplier for that row, multiply it by the right-hand-side entry of the row above, and subtract the result from the current row’s entry.

For example, the multiplier for the second row in the un-augmented matrices was 3. So we multiply the 2 in the first row’s right-hand side by 3 to get 6, then subtract that 6 from the 12 in the second row, leaving 6 as our new second-row value.

### Back Substitution

Our new equations are:

$\hphantom{3}x + 2y + z = 2 \\ \hphantom{3x +} 2y - 2z = 6 \\ \hphantom{3x + 2y +} 5z = -10$

Very simple from this point: start with the last equation and move up, solving for one variable at a time. The solution turns out to be: z = -2, y = 1, x = 2.
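The whole pipeline of forward elimination plus back substitution can be sketched in plain Python (a minimal version that assumes no zero pivots appear, so no row exchanges are needed; `solve` is a made-up name):

```python
def solve(A, b):
    """Forward elimination to upper-triangular form, then back substitution."""
    n = len(A)
    # augmented matrix [A | b], using floats
    M = [list(map(float, row)) + [float(rhs)] for row, rhs in zip(A, b)]
    for col in range(n - 1):
        for row in range(col + 1, n):
            mult = M[row][col] / M[col][col]   # pivot assumed nonzero
            M[row] = [x - mult * y for x, y in zip(M[row], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):             # bottom equation first, then move up
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[1, 2, 1], [3, 8, 1], [0, 4, 1]]
b = [2, 12, 2]
print(solve(A, b))  # [2.0, 1.0, -2.0]
```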

### Elimination Matrices

Let’s use the first and second matrices of our un-augmented matrices as an example. What matrix do I multiply the first matrix by to get the second matrix? In other words, how do I subtract 3 times row 1 from row 2?

$\Bigg[ \hphantom{-} \begin{matrix} ? & ? & ?\\ ? & ? & ?\\ ? & ? & ? \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1\\ 3 & 8 & 1\\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg] = \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1\\ 0 & 2 & -2\\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg]$

We’ll start from the identity matrix, which when multiplied by any matrix gives back that same matrix, and fix it so that 3 times row 1 is subtracted from row 2. Here is the solution.

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 0 & 0\\ -3 & 1 & 0\\ 0 & 0 & 1 \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1\\ 3 & 8 & 1\\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg] = \Bigg[ \hphantom{-} \begin{matrix} 1 & 2 & 1\\ 0 & 2 & -2\\ 0 & 4 & 1 \end{matrix} \hphantom{-}\Bigg]$
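This elimination matrix can be checked numerically; a plain-Python sketch (the name `E21` follows the lecture's convention for the matrix that fixes the (2, 1) entry):

```python
def matmul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 1], [3, 8, 1], [0, 4, 1]]
# E21: the identity with a -3 in row 2, column 1,
# so row 2 of the product becomes (row 2 of A) - 3 * (row 1 of A)
E21 = [[1, 0, 0], [-3, 1, 0], [0, 0, 1]]
print(matmul(E21, A))  # [[1, 2, 1], [0, 2, -2], [0, 4, 1]]
```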

### Matrix Multiplication

Important point: Matrix multiplication is noncommutative, so order matters!

How to switch rows of a matrix

$\Bigg[ \hphantom{-} \begin{matrix} 0 & 1\\ 1 & 0 \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} a & b\\ c & d \end{matrix} \hphantom{-}\Bigg] = \Bigg[ \hphantom{-} \begin{matrix} c & d\\ a & b \end{matrix} \hphantom{-}\Bigg]$
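A quick numeric check of the permutation matrix, which also shows why order matters (plain Python; `matmul` is a made-up helper):

```python
def matmul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

swap = [[0, 1], [1, 0]]
M = [[1, 2], [3, 4]]
print(matmul(swap, M))  # [[3, 4], [1, 2]]  rows exchanged
print(matmul(M, swap))  # [[2, 1], [4, 3]]  columns exchanged instead
```

Multiplying by the permutation matrix on the left exchanges rows; on the right it exchanges columns.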

Inverses

How to undo elimination: Simply switch a sign to get back to the identity matrix

$\Bigg[ \hphantom{-} \begin{matrix} 1 & 0 & 0\\ 3 & 1 & 0\\ 0 & 0 & 1 \end{matrix} \hphantom{-}\Bigg] \Bigg[ \hphantom{-} \begin{matrix} 1 & 0 & 0\\ -3 & 1 & 0\\ 0 & 0 & 1 \end{matrix} \hphantom{-}\Bigg] = \Bigg[ \hphantom{-} \begin{matrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{matrix} \hphantom{-}\Bigg]$
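The sign-flip inverse can be verified from both sides in plain Python (`matmul` is a made-up helper):

```python
def matmul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[1, 0, 0], [-3, 1, 0], [0, 0, 1]]     # subtracts 3 * row 1 from row 2
E_inv = [[1, 0, 0], [3, 1, 0], [0, 0, 1]]  # adds it back: just flip the sign
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(E_inv, E) == I)  # True
print(matmul(E, E_inv) == I)  # True
```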