Let the matrix **B** be obtained from the matrix **A** by applying the row operation e. Then:

These results hold true for the corresponding column operations.

Let **A** be an n x n matrix. Then **A** is *lower triangular* if a_ij = 0 whenever i < j. Similarly, **A** is *upper triangular* if a_ij = 0 whenever i > j. The determinant of a triangular matrix is given by the product of the diagonal entries, i.e. |A| = a_11 a_22 ... a_nn.
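As a quick numeric check in Python (the matrix entries are arbitrary), the product of the diagonal entries of a triangular matrix agrees with the determinant computed by the usual cofactor expansion:

```python
# A 3x3 upper-triangular matrix: every entry below the diagonal is zero.
A = [[2, 7, 1],
     [0, 3, 5],
     [0, 0, 4]]

# Determinant by the explicit 3x3 cofactor expansion along the first row.
det = (A[0][0] * (A[1][1]*A[2][2] - A[2][1]*A[1][2])
     - A[0][1] * (A[1][0]*A[2][2] - A[2][0]*A[1][2])
     + A[0][2] * (A[1][0]*A[2][1] - A[2][0]*A[1][1]))

# Product of the diagonal entries.
diag_product = A[0][0] * A[1][1] * A[2][2]

print(det, diag_product)  # 24 24
```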

Let **A** be an n x n matrix with entries (a_ij). Then, the *determinant* of **A** is given by

|A| = Σ_(σ ∈ S_n) sgn(σ) a_1σ(1) a_2σ(2) ⋯ a_nσ(n),

where S_n is the group of permutations of {1, 2, ..., n} and sgn(σ) is +1 for an even permutation and -1 for an odd permutation.
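The permutation formula can be sketched directly in Python using the standard library's `itertools.permutations`; `sign` computes the parity of a permutation by counting inversions:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple: (-1)**(number of inversions)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """Sum over all permutations p in S_n of sign(p) * a_1p(1) * ... * a_np(n)."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```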


If

Let

the system of equations can be solved by this method.

Consider the third order determinant

The value of the determinant is unaltered if its rows and columns are interchanged.

If two adjacent rows or columns are interchanged, the sign of the determinant changes, but its numerical value remains unaltered.

Similarly,

If two rows or columns of a determinant are identical, the value of the determinant is zero.

If all the elements of one row or one column be multiplied by a non-zero constant k, then the value of the determinant is multiplied by k.

If each element of a row or column is expressed as the sum of two numbers, then the determinant can be expressed as a sum of two determinants of the same order.

A determinant is unaltered in value by adding to all the elements of any row or column the same multiple of the corresponding elements of any number of other rows or columns.

In any determinant, if the elements of any row or column are multiplied by the cofactors of the corresponding elements of any other row or column, the sum of the products would be equal to zero.
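These properties can be verified numerically. A sketch in Python, with matrices as lists of rows and an explicit 3x3 determinant (the example matrix is arbitrary):

```python
def det3(M):
    # Explicit 3x3 determinant (expansion along the first row).
    return (M[0][0]*(M[1][1]*M[2][2] - M[2][1]*M[1][2])
          - M[0][1]*(M[1][0]*M[2][2] - M[2][0]*M[1][2])
          + M[0][2]*(M[1][0]*M[2][1] - M[2][0]*M[1][1]))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

# Transposing leaves the determinant unchanged.
T = [[A[j][i] for j in range(3)] for i in range(3)]
assert det3(T) == det3(A)

# Swapping two adjacent rows changes only the sign.
assert det3([A[1], A[0], A[2]]) == -det3(A)

# Two identical rows give determinant zero.
assert det3([A[0], A[0], A[2]]) == 0

# Multiplying one row by k multiplies the determinant by k.
assert det3([[5*x for x in A[0]], A[1], A[2]]) == 5 * det3(A)

# Adding a multiple of another row leaves the determinant unchanged.
added = [[A[0][i] + 2*A[1][i] for i in range(3)], A[1], A[2]]
assert det3(added) == det3(A)

print(det3(A))  # -3
```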


Systems of linear equations can be solved with the help of matrices.

Consider the linear equations

Let

Matrix X is the solution of the given simultaneous equations.
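A minimal sketch in Python for a 2x2 system (the coefficients are arbitrary examples), using the inverse-matrix method X = A⁻¹B:

```python
# Solve A X = B for the 2x2 system:  2x + y = 5,  x + 3y = 10
a, b, c, d = 2, 1, 1, 3          # A = [[a, b], [c, d]]
b1, b2 = 5, 10                   # B = [b1, b2]

det_A = a*d - b*c                # must be non-zero for a unique solution
# Inverse of a 2x2 matrix: (1/|A|) * [[d, -b], [-c, a]], applied to B.
x = ( d*b1 - b*b2) / det_A
y = (-c*b1 + a*b2) / det_A
print(x, y)  # 1.0 3.0
```

Substituting back: 2(1) + 3 = 5 and 1 + 3(3) = 10, as required.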

1. If A and B are two matrices of the same order,

A+B=B+A (Commutative law of addition)

2. If A,B, and C are matrices of the same order,

(A+B)+C=A+(B+C) (Associative law of Addition)

3. If A and B are matrices of the same order and k is a scalar,

k(A+B)=kA+kB

4. A+0=A

5. A +(-A)=0

6. A+B=A+C implies B=C (cancellation law of addition).

7. If A, B, and C are matrices of order mxn, nxp, pxq respectively,

A(BC) = (AB)C (Associative law of multiplication).

8. If A, B, and C are matrices of order mxn, nxp, nxp respectively, then

A(B+C) = AB + AC (Distributive law)

9. AI=IA=A

10. A0=0

11. If n is a positive integer,

12. A (adj A) = (adj A) A = (determinant A)I

Adj(AB) = (adjB)(adjA)

determinant (adj A) = (determinant A)ⁿ⁻¹, where n is the order of A

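Several of these laws can be checked numerically. A Python sketch with small 2x2 matrices (values chosen arbitrarily):

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

# Associative law: A(BC) = (AB)C
assert matmul(A, matmul(B, C)) == matmul(matmul(A, B), C)

# Distributive law: A(B + C) = AB + AC
BplusC = [[B[i][j] + C[i][j] for j in range(2)] for i in range(2)]
AB, AC = matmul(A, B), matmul(A, C)
assert matmul(A, BplusC) == [[AB[i][j] + AC[i][j] for j in range(2)]
                             for i in range(2)]

# A (adj A) = (adj A) A = |A| I, with adj [[a,b],[c,d]] = [[d,-b],[-c,a]]
a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
adjA = [[d, -b], [-c, a]]
det_A = a*d - b*c
assert matmul(A, adjA) == [[det_A, 0], [0, det_A]]
print(det_A)  # -2
```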

If A is a square matrix, the adjoint of A is defined to be the transpose of the cofactor matrix of A, and is denoted by adj A.

**Inverse of a Matrix**

The inverse or reciprocal of a non-singular matrix A is denoted by A⁻¹. It can be shown that

A⁻¹ = (1/|A|) (adj A).

If A and B are two matrices such that

AB = BA = I, then B is the inverse of A, i.e. B = A⁻¹.
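A sketch in Python of computing a 2x2 inverse from the adjoint and verifying that the product with the original matrix gives the identity (the matrix is an arbitrary non-singular example):

```python
# Inverse of a non-singular 2x2 matrix via A^{-1} = (1/|A|) adj A.
A = [[2, 1], [1, 1]]
a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
det_A = a*d - b*c                    # 1, non-zero, so A is non-singular
inv = [[ d/det_A, -b/det_A],
       [-c/det_A,  a/det_A]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(A, inv))  # [[1.0, 0.0], [0.0, 1.0]]
```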

**Symmetric and Skew-symmetric Matrices**

A square matrix is said to be symmetric if the (i,j)th element of the matrix is equal to the (j,i)th element, i.e.

a_ij = a_ji for all values of i and j.

A square matrix is said to be skew-symmetric if the (i,j)th element is equal to the negative of the (j,i)th element, i.e.

a_ij = -a_ji for all values of i and j.

Examples:-

Symmetric matrices

Skew-symmetric Matrix
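The defining conditions can be checked entrywise in Python (the example matrices are arbitrary):

```python
def is_symmetric(M):
    # a_ij == a_ji for every pair (i, j)
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(M):
    # a_ij == -a_ji; this forces every diagonal entry to be zero
    n = len(M)
    return all(M[i][j] == -M[j][i] for i in range(n) for j in range(n))

S = [[1, 2, 3],
     [2, 5, 6],
     [3, 6, 9]]
K = [[ 0,  2, -1],
     [-2,  0,  4],
     [ 1, -4,  0]]
print(is_symmetric(S), is_skew_symmetric(K))  # True True
```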

**Conjugate of a Matrix**

The matrix obtained from any given matrix A by replacing its elements by the corresponding conjugate complex numbers is called the conjugate of A and is denoted by Ā.

If

then
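In Python, conjugating a matrix simply conjugates each entry (the matrix here is an arbitrary example):

```python
A = [[1+2j, 3-1j],
     [4j,   5+0j]]

# The conjugate of A: every entry replaced by its complex conjugate.
A_bar = [[z.conjugate() for z in row] for row in A]
print(A_bar[0][0], A_bar[1][0])  # (1-2j) -4j
```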

**Hermitian and skew-Hermitian Matrices**

A square matrix A is said to be Hermitian if the (i,j)th element of A is equal to the conjugate complex of the (j,i)th element of A.

A square matrix A is said to be skew-Hermitian if the (i,j)th element of A is equal to the negative of the conjugate complex of the (j,i)th element.

Example :- Hermitian Matrix

Example:- Skew-Hermitian Matrix
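Both conditions can be checked entrywise in Python (example matrices chosen arbitrarily; note the diagonal of a Hermitian matrix must be real, and that of a skew-Hermitian matrix purely imaginary):

```python
# Hermitian: a_ij equals the complex conjugate of a_ji.
def is_hermitian(M):
    n = len(M)
    return all(M[i][j] == M[j][i].conjugate() for i in range(n) for j in range(n))

# Skew-Hermitian: a_ij equals the negative of the complex conjugate of a_ji.
def is_skew_hermitian(M):
    n = len(M)
    return all(M[i][j] == -M[j][i].conjugate() for i in range(n) for j in range(n))

H = [[2+0j, 3-1j],
     [3+1j, 5+0j]]       # Hermitian: real diagonal
K = [[1j,    2+1j],
     [-2+1j, -3j ]]      # skew-Hermitian: purely imaginary diagonal
print(is_hermitian(H), is_skew_hermitian(K))  # True True
```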

One application: you can now spot immediately why an identity matrix multiplied by a matrix on the left equals the matrix itself (row selection), and why a matrix multiplied by an identity matrix on the right equals the matrix itself (column selection).

**Scalar Product**

A vector or a matrix can be multiplied by a scalar k, entry by entry.

For Example
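A numeric sketch in Python (the values are arbitrary):

```python
k = 3
A = [[1, 2], [3, 4]]
# Every entry of the matrix is multiplied by the scalar k.
kA = [[k * x for x in row] for row in A]
print(kA)  # [[3, 6], [9, 12]]
```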

**Vector Product**

Two vectors containing the same number of entries can be multiplied; this is often called the "dot product".

or equivalently, using the matrix rule of product:

Notice that the second vector is placed vertically (as a column) in the matrix product expression.

We can think of the product as follows: each entry of the first vector (a, b and c) is **scalar multiplied** by the corresponding entry of the second vector, and the three products ad, be and cf are then added up to give the final result. The same works in reverse. (This concept will be applied below at *.)
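A quick sketch of the dot product in Python (the entries are arbitrary):

```python
u = [1, 2, 3]                            # [a, b, c]
v = [4, 5, 6]                            # [d, e, f]
# ad + be + cf: entrywise products, then summed.
dot = sum(x * y for x, y in zip(u, v))
print(dot)  # 1*4 + 2*5 + 3*6 = 32
```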

**Extension to Matrix Vector Product**

We can add another vector [a[sub]2[/sub] b[sub]2[/sub] c[sub]2[/sub]] under the vector [a b c] and let it do the **same** multiplication with [d e f] and the same summation; the result is placed under the previous one, just as [a[sub]2[/sub] b[sub]2[/sub] c[sub]2[/sub]] was placed under [a b c]:

Recalling the product-sum analysis above, we notice that both a and a[sub]2[/sub] are multiplied by the scalar d, both b and b[sub]2[/sub] by e, and both c and c[sub]2[/sub] by f, and the corresponding products are then added. If we define

then the matrix product can be expressed as

where col stands for column; A is now a matrix and no longer a vector.

We can add many more rows

[a[sub]3[/sub] b[sub]3[/sub] c[sub]3[/sub]] ... [a[sub]m[/sub] b[sub]m[/sub] c[sub]m[/sub]] to the matrix, but

still holds.

or

This says that a matrix multiplied by a vector on the right is equivalent to a combination of its columns (themselves vectors) weighted by the entries of the right vector. It is another way to perceive the matrix product.

Similarly, we have this formula for row combination (*):

**Extension to Matrix Product**

What if a matrix is multiplied by a matrix? We can either separate the right matrix (B) into columns and get a row of column combinations, or separate the left matrix (A) into rows and get a column of row combinations.
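Both views give the same answer, which can be checked in Python. The sketch below computes a matrix-vector product once by row-by-row dot products and once as a combination of the columns (values arbitrary):

```python
A = [[1, 2, 3],
     [4, 5, 6]]
x = [7, 8, 9]

# The usual rule: each output entry is a row of A dotted with x.
usual = [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]

# The same product as a combination of the columns of A,
# weighted by the entries of x: x[0]*col0 + x[1]*col1 + x[2]*col2.
cols = [[A[i][j] for i in range(2)] for j in range(3)]
combo = [sum(x[j] * cols[j][i] for j in range(3)) for i in range(2)]

print(usual, combo)  # [50, 122] [50, 122]
```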

Matrices can be added or subtracted only if they are of the same order.

Let

then,

Similarly,
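A sketch in Python of entrywise addition and subtraction (values arbitrary); both are defined only because the matrices have the same order:

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Entrywise sum and difference of two matrices of the same order.
add = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
sub = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
print(add, sub)  # [[6, 8], [10, 12]] [[-4, -4], [-4, -4]]
```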

**Multiplication of matrices**

Two matrices A and B can be multiplied if and only if the number of columns of matrix A is equal to the number of rows of matrix B.

If

, then
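A sketch in Python: a 2 x 3 matrix times a 3 x 2 matrix (values arbitrary), where the (i,j) entry of AB is the dot product of row i of A with column j of B:

```python
A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7,  8],
     [9, 10],
     [11, 12]]           # 3 x 2: columns of A (3) match rows of B (3)

# (i, j) entry of AB: row i of A dotted with column j of B.
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
print(AB)  # [[58, 64], [139, 154]]
```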

**Transpose of a matrix**

For any given matrix A, the matrix whose rows are columns of matrix A and whose columns are rows of matrix A is called the transpose of matrix A. It is represented by

or A'. If A is an m x n matrix, then A' is an n x m matrix.

If

then

If a square matrix and its transpose are equal, then the matrix is symmetric.

**Properties of Transpose**

1. If A and B are two matrices of the same order, then

2. If A and B are compatible for multiplication, then

These results can be extended to n matrices.
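Both properties can be checked numerically; note the reversed order in (AB)' = B'A'. A Python sketch with arbitrary 2x2 matrices:

```python
def T(M):
    """Transpose: rows become columns and columns become rows."""
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# (A + B)' = A' + B'
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
TA, TB = T(A), T(B)
assert T(S) == [[TA[i][j] + TB[i][j] for j in range(2)] for i in range(2)]

# (AB)' = B'A'  (the order of the factors is reversed)
assert T(matmul(A, B)) == matmul(T(B), T(A))
print(T(matmul(A, B)))  # [[19, 43], [22, 50]]
```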

**Determinant of a matrix**

Consider the matrix

The determinant of the matrix is given by

|A|=a11(a22*a33-a32*a23)-a12(a21*a33-a31*a23)+a13(a21*a32-a31*a22)

Let

be the matrix obtained by deleting the ith row and jth column of matrix A. The determinant of this matrix is called a minor of the matrix A.

The scalar obtained by multiplying this minor by (-1)^(i+j) is called the cofactor of the element a_ij.
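A sketch in Python of minors and cofactors for a 3x3 matrix (indices are 0-based in the code, so the (1,1) minor of the text is `minor(A, 0, 0)` here):

```python
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

def minor(M, i, j):
    """Determinant of the submatrix with row i and column j deleted (3x3 input)."""
    sub = [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]
    return sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0]   # 2x2 determinant

def cofactor(M, i, j):
    """Cofactor: the minor with the alternating sign (-1)**(i+j) attached."""
    return (-1)**(i + j) * minor(M, i, j)

print(minor(A, 0, 0), cofactor(A, 0, 1))  # 2 2
```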

Two matrices A and B of the same order are equal when their corresponding elements are equal. For example, saying that

is equal to

implies a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, and i=9.

**Singular and nonsingular matrices**

A square matrix A is a singular matrix if |A| = 0. If |A| ≠ 0, then the matrix is nonsingular.

|A| represents the value of the determinant of the matrix.
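A quick numeric sketch in Python (the matrices are arbitrary; the first has proportional rows, so its determinant vanishes):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

singular    = [[2, 4], [1, 2]]   # second row is half the first: |A| = 0
nonsingular = [[2, 4], [1, 3]]   # |A| = 2, non-zero
print(det2(singular), det2(nonsingular))  # 0 2
```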

**Scalar multiplication of a matrix by a number**

If

then

where k is a scalar or a number.

**Negative of a matrix**

The negative of a matrix is obtained by multiplying all the elements of the matrix by -1.

The negative of

is
