Here goes the Gram-Schmidt process:

Say you have 3 linearly independent vectors at hand.

first, get 3 equivalent orthogonal vectors. Equivalent here means any of these 3 can be expressed as a linear combination of those 3, and vice versa.

the first vector is a trivial one taken from the original 3; this vector can surely be expressed as a linear combination, and the coefficients would be 2 zeros and 1 one.

then find a second vector. This vector should satisfy 2 conditions: it is a linear combination of the original 3, and it is orthogonal to the first one.

To make things easier, only one more original vector is added to the combination, which means one coefficient of the linear combination is 0.

The big step is to find the proper coefficients for the second one.

Define <α[sub]1[/sub], α[sub]2[/sub], α[sub]3[/sub]> as the original linearly independent, but not orthogonal, vectors.

the first new vector built is β[sub]1[/sub] =α[sub]1[/sub]

=α[sub]1[/sub] +0α[sub]2[/sub] +0α[sub]3[/sub]

the second one is β[sub]2[/sub] = α[sub]1[/sub] + kα[sub]2[/sub]. The reason there is only 1 undetermined coefficient is that, given one orthogonal vector, any vector parallel to it is also orthogonal to the given one, and 1 coefficient is simpler than 2.

( β[sub]1[/sub] , β[sub]2[/sub])

= (α[sub]1[/sub] , α[sub]1[/sub]+kα[sub]2[/sub])

= (α[sub]1[/sub], α[sub]1[/sub]) + k(α[sub]1[/sub], α[sub]2[/sub])

=0

just solve for k. Note that (α[sub]1[/sub], α[sub]1[/sub]) is just a simple number like 2, 46.5, etc.

now add the 3rd.

β[sub]3[/sub] =β[sub]1[/sub] +m β[sub]2[/sub] + n α[sub]3[/sub]

(A beginner, or someone with a poor memory like me, may think of a combination of the αs instead. We will find out why not quite soon.)

This time the 3rd vector should be orthogonal to both of the previous ones, so that every pair is orthogonal: 1-2, 1-3, and 2-3.

(β[sub]3[/sub] , β[sub]1[/sub])=0 and (β[sub]3[/sub] , β[sub]2[/sub])=0

the first equation:

(β[sub]3[/sub] , β[sub]1[/sub] )

= (β[sub]1[/sub], β[sub]1[/sub]) + m (β[sub]2[/sub], β[sub]1[/sub]) + n (α[sub]3[/sub], β[sub]1[/sub])

= (β[sub]1[/sub], β[sub]1[/sub]) + **0** + n (α[sub]3[/sub], β[sub]1[/sub])

Here the 0 identity simplifies the equation and makes the solution easy to determine; that's why we choose as many βs as possible, and only one new α at a time.

n is easily solved from this equation; then m follows from the second equation, where (β[sub]1[/sub], β[sub]2[/sub]) = 0 kills another term.
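The whole recipe above can be sketched numerically. This is a minimal sketch of the coefficient-solving approach described in this post; the example vectors are hypothetical choices of mine, and it assumes every inner product used as a denominator is nonzero.

```python
# Sketch of the coefficient-solving recipe above (example vectors are
# hypothetical; assumes each denominator inner product is nonzero).

def dot(u, v):
    """Standard inner product (u, v)."""
    return sum(x * y for x, y in zip(u, v))

def combine(*terms):
    """Linear combination: combine((c1, v1), (c2, v2), ...) = c1*v1 + c2*v2 + ..."""
    return [sum(c * v[i] for c, v in terms) for i in range(len(terms[0][1]))]

a1 = [1.0, 1.0, 0.0]
a2 = [1.0, 0.0, 1.0]
a3 = [0.0, 1.0, 1.0]

# beta1 = alpha1
b1 = a1

# beta2 = alpha1 + k*alpha2 with (beta1, beta2) = 0:
# (a1, a1) + k*(a1, a2) = 0  =>  k = -(a1, a1) / (a1, a2)
k = -dot(a1, a1) / dot(a1, a2)
b2 = combine((1.0, a1), (k, a2))

# beta3 = beta1 + m*beta2 + n*alpha3 with (beta3, beta1) = (beta3, beta2) = 0.
# First equation ((beta2, beta1) vanishes): (b1, b1) + n*(a3, b1) = 0  =>  n
n = -dot(b1, b1) / dot(a3, b1)
# Second equation ((beta1, beta2) vanishes): m*(b2, b2) + n*(a3, b2) = 0  =>  m
m = -n * dot(a3, b2) / dot(b2, b2)
b3 = combine((1.0, b1), (m, b2), (n, a3))

print(dot(b1, b2), dot(b1, b3), dot(b2, b3))  # all three ≈ 0
```

With these example vectors, k = -2, n = -2, m = -1/3, and the three pairwise inner products come out (numerically) zero.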

I don't think I know how to apply Gram-Schmidt for this one. What I know is how to apply Gram-Schmidt to bases, e.g.:

f1=(1,0,0)

f2=(1,1,1)

However, when it comes to power functions, I'm stuck.

This is another part of the vector space and subset question.

Thanks for helping.

then we should check whether they are orthogonal, one pair at a time, and whether they are unit vectors.

However, I find (1, t[sup]2[/sup]) = (1[sup]3[/sup] - (-1)[sup]3[/sup])/3 = 2/3 ≠ 0

to find symbols, see top blue bar

(f, g) = S -1 to 1 f(t)g(t) dt (where S is the integral sign; I don't know how to use the tags yet)
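Here is how Gram-Schmidt plays out for power functions with this inner product. A minimal sketch (it uses the usual projection form of Gram-Schmidt rather than the coefficient-solving form earlier in the thread): each polynomial is stored as a coefficient list, and the integral over [-1, 1] is evaluated exactly term by term.

```python
from fractions import Fraction

# A polynomial is a coefficient list: [c0, c1, c2, ...] means c0 + c1*t + c2*t^2 + ...

def inner(f, g):
    """(f, g) = integral from -1 to 1 of f(t)g(t) dt, computed exactly.
    The integral of t^m over [-1, 1] is 2/(m+1) for even m, and 0 for odd m."""
    total = Fraction(0)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            m = i + j
            if m % 2 == 0:
                total += Fraction(a) * Fraction(b) * Fraction(2, m + 1)
    return total

def subtract_projection(f, g):
    """Return f - ((f, g)/(g, g)) * g, padding coefficient lists as needed."""
    c = inner(f, g) / inner(g, g)
    out = [Fraction(x) for x in f]
    for j, b in enumerate(g):
        while len(out) <= j:
            out.append(Fraction(0))
        out[j] -= c * Fraction(b)
    return out

# Orthogonalize 1, t, t^2:
g0 = [Fraction(1)]                    # 1
g1 = subtract_projection([0, 1], g0)  # t - (t,1)/(1,1) * 1 = t
g2 = subtract_projection([0, 0, 1], g0)
g2 = subtract_projection(g2, g1)      # result: t^2 - 1/3

print(g1, g2)
```

This reproduces the observation above: (1, t[sup]2[/sup]) = 2/3, so the projection subtracts (2/3)/2 = 1/3 from t[sup]2[/sup], giving the orthogonal polynomial t[sup]2[/sup] - 1/3.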

**Explain why the power functions 1,t,t^2, t^3, ..., t^n form a basis of Pn. **

Some of my friends have tried solving this question using some really complicated methods (Cauchy-Schwarz inequality etc.) that are 2-3 pages long! I don't agree with them. I think, since 1, t, t^2, t^3, ..., t^n are polynomials, and P represents the vector space of all real polynomials, by logic the functions form a basis of Pn, right?
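For what it's worth, the argument doesn't need anything as heavy as Cauchy-Schwarz. A short sketch (assuming Pn means real polynomials of degree at most n): spanning holds by definition, and independence follows by evaluating derivatives at 0.

```latex
% Sketch: why 1, t, t^2, ..., t^n form a basis of P_n.
% Spanning: every p in P_n is, by definition, c_0 + c_1 t + ... + c_n t^n.
% Independence: suppose some combination is the zero function,
%   p(t) = c_0 + c_1 t + \cdots + c_n t^n \equiv 0.
% Differentiating k times and setting t = 0 gives
\[
  c_k = \frac{p^{(k)}(0)}{k!} = 0 \qquad (k = 0, 1, \dots, n),
\]
% so the only vanishing combination is the trivial one.
```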

Btw, does anyone know an easy way to row reduce a complex matrix to find eigenvectors, i.e.:

F4 = 1/2 ×

1   1   1   1
1   i  -1  -i
1  -1   1  -1
1  -i  -1   i

(1/2 times the 4 × 4 matrix. I hope you can read it.)

Eigenvalues = 1, -1, i, 1

After computing, e.g., F4 - I, I have to do row reduction to find the eigenvectors, and the problem is that it's tedious.
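One way to dodge the row reduction entirely (a sketch, not necessarily what your course expects): this F4 satisfies F4^4 = I, so for any eigenvalue λ with λ^4 = 1, the vector w = v + λ^(-1) F v + λ^(-2) F² v + λ^(-3) F³ v satisfies F w = λ w whenever w ≠ 0. The starting vectors below are my own hypothetical choices; any v whose projection is nonzero works.

```python
# Eigenvectors of F4 without row reduction (sketch).
# Works because F4^4 = I: for lambda with lambda^4 = 1,
# w = sum_k lambda^(-k) * F^k v  satisfies  F w = lambda w  (when w != 0).

# The matrix from the post: entry (j, k) is (1/2) * i^(j*k).
F = [[0.5 * 1j ** (j * k) for k in range(4)] for j in range(4)]

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(lam, v):
    """w = v + lam^-1 F v + lam^-2 F^2 v + lam^-3 F^3 v."""
    w = [0j] * 4
    x = [complex(t) for t in v]
    for k in range(4):
        w = [wi + xi / lam ** k for wi, xi in zip(w, x)]
        x = matvec(F, x)
    return w

# Hypothetical starting vectors (any v with nonzero projection would do):
w1 = project(1, [1, 0, 0, 0])    # eigenvector for lambda = 1
wi = project(1j, [0, 1, 0, 0])   # eigenvector for lambda = i

print(w1)  # ≈ [3, 1, 1, 1]
print(wi)  # ≈ [0, 2, 0, -2]
```

The same projection with λ = -1 (and a starting vector whose projection is nonzero) gives the remaining eigenvector, so no elimination on complex rows is needed.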
