Math Is Fun Forum

  Discussion about math, puzzles, games and fun.

#1 2007-04-11 02:08:54

ben
Member
Registered: 2006-07-12
Posts: 106

Vector spaces.

A vector is a mathematical object with both magnitude and direction. This much should be familiar to you all. Consider the Cartesian plane, with coordinates x and y. Then any vector here can be specified by v = ax + by, where a and b are just numbers that tell us the projection of v on the x and y axes respectively.

Now it is natural in this example to think of v as being an arrow, or "directed line segment" to give it its proper name. But as the dimension of the containing space increases, this becomes an unimaginable notion. And in any case, for most purposes this is a far too restricted idea, as I will show you later.

So, I'm going to give an abstract formulation that will include the arrows in a plane, but will also accommodate more exotic vectors. So, here's a

Definition. A vector space V(F) is a set V of abstract objects called vectors, together with an associated scalar field F such that:

V(F) forms an abelian group;
for all v in V(F), all a in F, there is another vector w = av in V(F);
for any a, b in F, and v, w in V(F) the following axioms hold:
a(bv) = (ab)v;
1v = v;
a(v + w) = av + aw;
(a + b)v = av + bv.
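
To see the definition in action, here's a toy check in Python (entirely my own illustration; the choice of Q² with Q the rationals is arbitrary, picked because its arithmetic is exact) verifying the four scalar axioms on sample vectors:

[code]
# Toy check (illustrative only): the four scalar axioms on Q^2,
# with Q, the rationals, playing the part of the field F.
from fractions import Fraction as F

def add(v, w):   # vector addition, componentwise
    return (v[0] + w[0], v[1] + w[1])

def smul(a, v):  # scalar multiplication: a scalar from F times a vector
    return (a * v[0], a * v[1])

v, w = (F(1, 2), F(-3)), (F(5), F(2, 7))   # two vectors in Q^2
a, b = F(3, 4), F(-2, 5)                   # two scalars in Q

assert smul(a, smul(b, v)) == smul(a * b, v)               # a(bv) = (ab)v
assert smul(F(1), v) == v                                  # 1v = v
assert smul(a, add(v, w)) == add(smul(a, v), smul(a, w))   # a(v+w) = av+aw
assert add(smul(a, v), smul(b, v)) == smul(a + b, v)       # (a+b)v = av+bv
print("all four axioms hold for these samples")
[/code]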

Now I've used a coupla terms you may not be familiar with. Don't worry about the abelian group bit - probably you'd be appalled by the notion of a non-abelian group; just assume that here the usual rules of arithmetic apply.

About the field, again don't worry too much - just think of it as being a set of numbers, again with the usual rules of manipulation (there is a technical definition, of course). But I do want to say a bit about fields.

The field F associated to the set V (one says the vector space V(F) is defined over F) may be real or complex. All the theorems in vector space theory are formulated under the assumption that F is complex, for the following very good reason:

Suppose I write some complex number as z = a + bi, where i is defined by i² = -1. a is called the real part, and bi the imaginary part. When b = 0, z = a + 0i = a, and z is real. It follows that the reals are a subset of the complexes; therefore anything that is true for a complex field is of necessity true of a real field. However, there are notions attached to the manipulation of complex numbers (like conjugation, for example) which don't apply to real numbers, or rather are merely redundant there, not wrong. Formulating theorems under the above assumption saves having two sets of theorems, one for real and one for complex spaces.
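
If you want to see the "redundant, not wrong" point concretely, here's a tiny illustration (mine, using Python's built-in complex type):

[code]
# Illustrative only: conjugation is a genuinely complex notion; applied
# to a real number it is redundant rather than wrong.
z = complex(3, 4)          # z = 3 + 4i
r = complex(5, 0)          # a real number regarded as complex (b = 0)
print(z.conjugate())       # (3-4j): flips the sign of the imaginary part
print(r.conjugate() == r)  # True: conjugation does nothing to a real
[/code]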

You will note, in the definition of a vector space above, that no mention is made of multiplication of vectors - how do you multiply arrows? It turns out there is a sort of way, well two really, but that will have to wait for another day. Meantime, any questions, I'll be happy to answer.


#2 2007-04-11 05:27:08

Ricky
Moderator
Registered: 2005-12-04
Posts: 3,791

Re: Vector spaces.

Polynomials are typically a good way to think of vector spaces.  You can add or subtract two polynomials and you end up with a polynomial of degree no greater than before.  But if you try to multiply them you will (typically) get a polynomial of greater degree.  If you try to divide, you may not end up with a polynomial at all.  Also, you can multiply and divide by scalars.
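
A quick sketch of this in Python (the coefficient-list representation is just one illustrative choice of mine):

[code]
# Sketch (my own representation): a polynomial as a list of coefficients,
# constant term first, so [1, 0, 2] means 1 + 2x^2.
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad the shorter list with zeros
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1, 0, 2]            # 1 + 2x^2, degree 2
q = [3, 1, -1]           # 3 + x - x^2, degree 2
print(poly_add(p, q))    # [4, 1, 1]: degree still <= 2, stays in the space
print(poly_mul(p, q))    # [3, 1, 5, 2, -2]: degree 4, escapes the space
[/code]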


"In the real world, this would be a problem.  But in mathematics, we can just define a place where this problem doesn't exist.  So we'll go ahead and do that now..."


#3 2007-04-11 06:50:41

JaneFairfax
Member
Registered: 2007-02-23
Posts: 6,868

Re: Vector spaces.

Given a field F, you can consider Fⁿ as a dimension-n vector space over F. Then you can certainly multiply your vector-space elements in the obvious way (i.e. multiplying the corresponding co-ordinates). I’m not sure what this is going to lead to, though.

Or maybe n×n matrices whose entries are elements in F? You can also multiply matrices (and this might be more interesting).
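
A small sketch of both products (illustrative only, using numpy):

[code]
# Illustrative only: the two candidate multiplications, using numpy.
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
print(v * w)        # co-ordinatewise product: another element of R^3

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(A @ B)        # matrix product: stays in the space of 2x2 matrices,
print(B @ A)        # but A @ B != B @ A, which makes it more interesting
[/code]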

Last edited by JaneFairfax (2007-04-11 06:57:07)


#4 2007-04-11 23:13:12

ben
Member
Registered: 2006-07-12
Posts: 106

Re: Vector spaces.

Ricky: Yes, I was going to touch on polynomial space in a bit, once I had the basics established. However, I think I may have to address this first:

Jane: I don't at all like the way you expressed this, so I guess I'll have to take a bit of a detour.

Consider R, the real numbers. We grew up with them and their familiar properties, and some of us became quite adept at manipulating them in the usual ways. But we never stopped to think of what we really meant by R, and what the properties really represent. Abstract algebra, which is what we are doing here, tries to tease apart those particular properties that R shares with other mathematical entities.

It turns out that R is a ring, is a field, is a group, is a vector space, is a topological space..., but there are groups that are not fields, rings that are not fields, and so on. So it pays to be careful when talking about R, by which I mean one should specify whether one is thinking of R as a field or as a vector space.

So, what you claim is not true, in the sense that the field R is a field, not a vector space. What is true, however, is that R, Q, C etc have the properties of both fields and vector spaces. So for example, if I say that -5 is a vector in R, I mean there is an object in R that has magnitude +5 in the -ve direction. Multiplication should be thought of as applying a directionless scalar, say 2, which is an object in R viewed as a scalar field.

I hope this is clear. Sorry to ramble on, but it's an important lesson - R, C and Q are dangerous beasts to use as examples of, well anything, really, because they are at best atypical, and at worst very confusing.


#5 2007-04-11 23:36:26

JaneFairfax
Member
Registered: 2007-02-23
Posts: 6,868

Re: Vector spaces.

Sorry, I misunderstood you. I thought you wanted to find more vector spaces where “multiplication” of vectors is allowed.

But then I didn’t see anything disastrously wrong with something being a group, a field, a totally ordered set, a vector space and a metric space all rolled into one.

Last edited by JaneFairfax (2007-04-11 23:53:10)


#6 2007-04-16 04:04:13

ben
Member
Registered: 2006-07-12
Posts: 106

Re: Vector spaces.

Sorry for the delay in proceeding, folks, I had a slight formatting problem for which I have found a partial fix - hope you can live with it. But first this:

When chatting to Jane, I mentioned that the real line R is a vector space, and that we can regard some element -5 as a vector with magnitude +5 in the -ve direction. But of course there is an infinity of vectors in R with magnitude +5 in the -ve direction. I should have pointed out that, in any vector space, vectors with the same magnitude and direction are regarded as equivalent, regardless of their end points (they in fact form an equivalence class, which we could talk about, but it's not really relevant here).

We agreed that a vector in the {x, y} plane can be written v = ax + by, where a and b are scalars. We can extend this to as many Cartesian dimensions as we choose. Let's write

v = ∑ᵢ aⁱxᵢ, i = 1, 2, ..., n

(note that the superscripts here are not powers, they are simply labels).

Now, you may be thinking I've taken leave of my senses: how can we have more than 3 Cartesian coordinates? Ah, wait and see. But first this. You may find the equation v = ∑ᵢ aⁱxᵢ a bit daunting, but it's not really. Suppose n = 3. All it says is that there is an object called v whose direction and magnitude can be expressed by adding up all the units (the aⁱ) that v projects on the coordinates xᵢ. Or, if you prefer, v has projection aⁱ along the i-th coordinate.
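
If it helps, here's the n = 3 case written out as a little sketch (my own choice to model the xᵢ as unit vectors along the three axes):

[code]
# Illustrative only: v = a^1 x_1 + a^2 x_2 + a^3 x_3 written out,
# with the x_i taken as unit vectors along the three Cartesian axes.
import numpy as np

x = [np.array([1.0, 0.0, 0.0]),    # x_1
     np.array([0.0, 1.0, 0.0]),    # x_2
     np.array([0.0, 0.0, 1.0])]    # x_3
a = [2.0, -1.0, 5.0]               # the components a^1, a^2, a^3 (labels, not powers)

v = sum(a[i] * x[i] for i in range(3))
print(v)   # [ 2. -1.  5.]: a^i is just v's projection on the ith axis
[/code]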

Now it is a definition of Cartesian coordinates that they are to be perpendicular, right? Then, to return to more familiar territory, the x-axis has zero projection on the y-axis, likewise all the other pairs. This suggests that I can write these axes in vector form, so take the x-axis as an example: x = ax + 0y + 0z. This is the definition of perpendicularity (is this a word?) we will use. So, hands up: what's "a"? Yeah, well, this alerts us to a problem, which I'll state briefly before quitting for today.

You all said "I know, a = 1", right? But an axis, by definition, extends to infinity, or at least it does if we so choose. So, think on this: an element of a Cartesian coordinate system can be expressed in vector form, but not with any real sense of meaning. The reason is obvious, of course: Cartesian (or any other) coordinates are not "real", in the sense that they are just artificial constructions; they don't really exist. But we have done enough to see a way around this.

More later, if you want.


#7 2007-04-17 06:40:38

ben
Member
Registered: 2006-07-12
Posts: 106

Re: Vector spaces.

So, it seems there are no problems your end. Good, where was I? Ah yes, but first this: I said that a vector space comprises the set V of objects v, w together with a scalar field F and is correctly written as V(F). Everybody, but everybody abuses the notation and writes V for the vector space; we shall do this here, OK with you?

Good. We have a vector v in some vector space V which we wrote as v = ∑ᵢ aⁱxᵢ, where the aⁱ are scalars and the xᵢ are coordinates.

We also agreed that, when using Cartesian coordinates (or some abstract n-dimensional extension of them) we require them to be mutually perpendicular. Specifically, we want the "projection" of each xᵢ on each xⱼ (i ≠ j) to be zero. We'll see what we really mean by this in a minute.

So now, I'm afraid we need a coupla definitions.

The inner product (or scalar product, or dot product) of v, w ∈ V is often written v · w = a (a scalar). So what is meant by v · w? Let's see this in longhand:

let v = ∑ᵢ aⁱxᵢ and w = ∑ⱼ bʲxⱼ. Then

v · w = (∑ᵢ aⁱxᵢ) · (∑ⱼ bʲxⱼ) = ∑ᵢ ∑ⱼ aⁱbʲ (xᵢ · xⱼ).

Now this looks highly scary, right? So let me introduce you to a guy who will be a good friend in what follows, the Kronecker delta. This is defined as δᵢⱼ = 1 if i = j, otherwise δᵢⱼ = 0.

So we find that

v · w = ∑ᵢ ∑ⱼ aⁱbʲ δᵢⱼ.

So, to summarise, v · w = ∑ᵢ aⁱbⁱ. Now can you see why it's sometimes called the scalar product? Any volunteers?
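
Here's the whole calculation as a short sketch (illustrative only; the sample components are arbitrary), showing the delta collapsing the double sum to a single scalar:

[code]
# Illustrative only: the Kronecker delta collapsing the double sum into
# the familiar dot product (orthonormal Cartesian axes assumed).
def delta(i, j):
    return 1 if i == j else 0

a = [2.0, -1.0, 5.0]   # components of v
b = [1.0, 4.0, 3.0]    # components of w

double_sum = sum(a[i] * b[j] * delta(i, j)
                 for i in range(3) for j in range(3))
single_sum = sum(a[i] * b[i] for i in range(3))
print(double_sum, single_sum)   # both 13.0: a single scalar, as promised
[/code]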

Phew, I'm whacked, typesetting here is such hard work!

Any questions yet?


#8 2007-04-17 07:20:18

Zhylliolom
Real Member
Registered: 2005-09-05
Posts: 412

Re: Vector spaces.

Just a concern; when you say that v · w = ∑ᵢ ∑ⱼ aⁱbʲ δᵢⱼ = ∑ᵢ aⁱbⁱ, you are assuming an orthonormal basis in cartesian coordinates (where the metric tensor is just the identity, gᵢⱼ = δᵢⱼ). Perhaps you should make this assumption clearer. This is certainly fine in an introduction, as the inner product is given as ⟨v, w⟩ = ∑ᵢ aᵢbⁱ usually when first introduced (at that stage usually you are not worried about vector spaces which use other inner products). Overall I just feel that the inner product section is a bit shaky. For example, you never tell the reader that xᵢ · xⱼ = δᵢʲ even though you substitute it into your formula out of nowhere.
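
A sketch of the point (illustrative; the second metric is an arbitrary symmetric example): the general inner product is ∑ᵢ ∑ⱼ aⁱgᵢⱼbʲ, and the plain dot product is the special case where g is the identity:

[code]
# Illustrative only: with a general metric g the inner product is
# sum_ij a^i g_ij b^j; the earlier formula is the special case g = I.
import numpy as np

a = np.array([2.0, -1.0, 5.0])
b = np.array([1.0, 4.0, 3.0])

g_identity = np.eye(3)                # orthonormal basis: g_ij = delta_ij
g_other = np.array([[2.0, 1.0, 0.0],
                    [1.0, 2.0, 0.0],  # some other symmetric metric
                    [0.0, 0.0, 1.0]])

print(a @ g_identity @ b)   # 13.0, the plain dot product
print(a @ g_other @ b)      # 18.0: the inner product depends on the metric
[/code]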


#9 2007-04-17 08:23:11

ben
Member
Registered: 2006-07-12
Posts: 106

Re: Vector spaces.

Zhylliolom wrote:

Just a concern; when you say that v · w = ∑ᵢ ∑ⱼ aⁱbʲ δᵢⱼ = ∑ᵢ aⁱbⁱ, you are assuming an orthonormal basis in cartesian coordinates

You're right, I am, I was kinda glossing over that at this stage. Maybe it was confusing? You are jumping a little ahead when you equate good ol' v · w with (v,w), but yes, I was getting to that (it's purely a notational convention). But I really don't think, in the present context, you should have both raised and lowered indices on the Kronecker delta; I don't think that δᵢʲ makes any sense (but see below).

This is certainly fine in an introduction, as the inner product is given as ⟨v, w⟩ = ∑ᵢ aᵢbⁱ

Ah, well, again you are jumping ahead of me! Your notation ∑ᵢ aᵢbⁱ usually refers to the product of the components of a dual vector with its corresponding vector, as I'm sure you know. I was coming to the dual space in due course.

Overall I just feel that the inner product section is a bit shaky. For example, you never tell the reader that xᵢ · xⱼ = δᵢʲ even though you substitute it into your formula out of nowhere.

Well, I never did claim that equality: both my indices were lowered, δᵢⱼ, which I think is the correct way to express it (I can explain in tensor language if you insist). But, yeah, OK, let's tidy it up, one or the other of us, but if you do want to have a pop (feel free!) make sure we are both using the same notation.


#10 2007-04-18 02:06:10

ben
Member
Registered: 2006-07-12
Posts: 106

Re: Vector spaces.

So, lemme see if I can make the inner product easier.

First recall the vector equation I wrote down: v = ∑ᵢ aⁱxᵢ. I hope nobody needs reminding this is just shorthand for v = a¹x₁ + a²x₂ + ... + aⁿxₙ. The scalars aⁱ are called the components of v. Note that the xᵢ contribute no numbers of their own to the sum; they merely tell us in what direction we want to evaluate our components (v is a vector, after all!).

So we can think of our vector as projecting aⁱ units on the i-th axis. But we can ask the same question of two vectors: what is the "shadow" that, say, v casts on w? Obviously this is at a maximum when v and w are parallel, and at a minimum when they are perpendicular (remember this, it will become important soon enough).

OK, so in the equation I wrote for the inner product, v · w = ∑ᵢ ∑ⱼ aⁱbʲ δᵢⱼ = ∑ᵢ aⁱbⁱ, what's going on? Specifically, why did I switch indices on b, and what happened to the coordinates, the x's?

Well, it makes no sense to ask, in effect, what is v's projection in the x direction on w's projection on the y axis. So we only use like pairs of coordinates and components, in which case we don't need to write down the coordinates. In fact it's better if we don't, since we want to emphasize the fact that the inner product is by definition a scalar. This is what the Kronecker delta did for us. It is the exact mathematical equivalent of setting all the diagonal entries of a square matrix to 1, and all off-diagonal entries to 0!
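
A quick numerical check of that identity-matrix analogy (sample values arbitrary, mine):

[code]
# Illustrative only: the delta written out as a matrix really is the
# identity, and the double sum is the matrix expression a^T I b.
import numpy as np

a = np.array([2.0, -1.0, 5.0])
b = np.array([1.0, 4.0, 3.0])
I = np.eye(3)        # 1 on the diagonal, 0 off it: delta_ij as a matrix

print(a @ I @ b)     # 13.0
print(a @ b)         # 13.0: the delta just pairs off like components
[/code]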

One last thing. Zhylliolom wrote the inner product as something a bit like this: ⟨v, w⟩. I prefer (v, w), a symbol reserved in general for an ordered pair. There is a very good reason for this preference of mine, which we will come to in due course. But as always, I've rambled on too long!

Last edited by ben (2007-04-18 10:13:29)
