 

#1 2007-04-12 00:08:54
Vector spaces.

A vector is a mathematical object with both magnitude and direction. This much should be familiar to you all. Consider the Cartesian plane, with coordinates x and y. Then any vector here can be specified by v = ax + by, where a and b are just numbers that tell us the projection of v on the x and y axes respectively.

#2 2007-04-12 03:27:08
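As a quick sketch of the magnitude-and-direction picture of a plane vector v = ax + by (Python here; the function names are my own, not anything from the thread):

```python
import math

# A plane vector v = a*x + b*y is determined by the pair (a, b):
# a and b are its projections on the x and y axes.
def magnitude(v):
    """Length of the plane vector (a, b)."""
    a, b = v
    return math.hypot(a, b)

def direction(v):
    """Angle (in radians) that (a, b) makes with the x-axis."""
    a, b = v
    return math.atan2(b, a)

v = (3.0, 4.0)                # v = 3x + 4y
print(magnitude(v))           # 5.0
print(direction((0.0, 1.0)))  # pi/2: the y direction is perpendicular to x
```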
Re: Vector spaces.

Polynomials are typically a good way to think of vector spaces. You can add or subtract two polynomials and you end up with a polynomial of degree no greater than you started with. But if you try to multiply them you will (typically) get a polynomial of greater degree. If you try to divide, you may not end up with a polynomial at all. You can, however, multiply and divide by scalars.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

#3 2007-04-12 04:50:41
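The polynomial behaviour described above can be sketched in Python (the coefficient-list representation and the function names are my own choice): addition and scalar multiplication stay inside the space, multiplication raises the degree.

```python
# Polynomials as coefficient lists: [c0, c1, c2] means c0 + c1*x + c2*x^2.
def poly_add(p, q):
    """Add two polynomials; the result's degree is no greater than the inputs'."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply two polynomials; degrees add, so we leave the original space."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def scalar_mul(c, p):
    """Scalar multiples keep the degree, so they stay in the space."""
    return [c * a for a in p]

p = [1, 2]               # 1 + 2x   (degree 1)
q = [3, -1]              # 3 - x    (degree 1)
print(poly_add(p, q))    # [4, 1]      -> still degree 1
print(poly_mul(p, q))    # [3, 5, -2]  -> degree 2: multiplication escapes
print(scalar_mul(2, p))  # [2, 4]
```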
Re: Vector spaces.

Given a field F, you can consider F^n as a dimension-n vector space over F. Then you can certainly multiply your vector-space elements in the obvious way (i.e. multiplying the corresponding coordinates). I’m not sure what this is going to lead to, though.

Last edited by JaneFairfax (2007-04-12 04:57:07)

#4 2007-04-12 21:13:12
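The componentwise product on F^n can be sketched in Python (list-based vectors; the names are mine). One thing the sketch does show: this product is closed in F^n, but nonzero vectors can multiply to zero, which hints at why it is less useful than it first looks.

```python
# Vectors in F^n as lists; the "obvious" product multiplies matching coordinates.
def vec_add(v, w):
    return [a + b for a, b in zip(v, w)]

def comp_mul(v, w):
    return [a * b for a, b in zip(v, w)]

print(vec_add([1, 2, 3], [4, 5, 6]))   # [5, 7, 9]
print(comp_mul([1, 2, 3], [4, 5, 6]))  # [4, 10, 18] -- still in F^3

# Caution: nonzero vectors can have a zero product (zero divisors):
print(comp_mul([1, 0], [0, 1]))        # [0, 0]
```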
Re: Vector spaces.

Ricky: Yes, I was going to touch on polynomial space in a bit; I wanted to get the groundwork established first. However, I think I may have to address this first.

#5 2007-04-12 21:36:26
Re: Vector spaces.

Sorry, I misunderstood you. I thought you wanted to find more vector spaces where “multiplication” of vectors is allowed.

Last edited by JaneFairfax (2007-04-12 21:53:10)

#6 2007-04-17 02:04:13
Re: Vector spaces.

Sorry for the delay in proceeding, folks, I had a slight formatting problem for which I have found a partial fix; hope you can live with it. But first this: suppose we have Cartesian coordinates x_1, x_2, ..., x_n.

Now, you may be thinking I've taken leave of my senses: how can we have more than 3 Cartesian coordinates? Ah, wait and see. But first this:

v = a_1 x_1 + a_2 x_2 + ... + a_n x_n = ∑_i a_i x_i

You may find the equation a bit daunting, but it's not really. Suppose n = 3. All it says is that there is an object called v whose direction and magnitude can be expressed by adding up all the units (the a_i) that v projects on the coordinates x_i. Or, if you prefer, v has projection a_i along the i-th coordinate.

Now it is a definition of Cartesian coordinates that they are to be perpendicular, right? Then, to return to more familiar territory, the x-axis has zero projection on the y-axis, and likewise all the other pairs. This suggests that I can write these axes in vector form, so take the x-axis as an example: x = ax + 0y + 0z. This is the definition of perpendicularity (is this a word?) we will use. So, hands up: what's "a"?

Yeah, well, this alerts us to a problem, which I'll state briefly before quitting for today. You all said "I know, a = 1", right? But an axis, by definition, extends to infinity, or at least it does if we so choose. So, think on this: an element of a Cartesian coordinate system can be expressed in vector form, but not with any real sense of meaning. The reason is obvious, of course: Cartesian (or any other) coordinates are not "real", in the sense that they are just artificial constructions; they don't really exist. But we have done enough to see a way around this. More later, if you want.

#7 2007-04-18 04:40:38
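A Python sketch of v = ∑_i a_i x_i, with the axes taken as standard basis vectors (the helper names `basis` and `combine` are mine, for illustration only):

```python
# Standard basis vectors x_i in n dimensions: x_1 = (1, 0, ..., 0), etc.
def basis(n, i):
    """The i-th (1-based) Cartesian axis direction as a vector."""
    return [1 if k == i - 1 else 0 for k in range(n)]

def combine(coeffs):
    """Build v = sum_i a_i x_i from its projections a_i."""
    n = len(coeffs)
    v = [0] * n
    for i, a in enumerate(coeffs, start=1):
        x_i = basis(n, i)
        v = [c + a * e for c, e in zip(v, x_i)]
    return v

print(basis(3, 2))          # [0, 1, 0]: the "y-axis" direction in 3 dimensions
print(combine([2, -1, 5]))  # [2, -1, 5]: the a_i really are the coordinates of v
```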
Re: Vector spaces.

So, it seems there are no problems your end. Good, where was I? Ah yes, but first this: I said that a vector space comprises the set V of objects v, w together with a scalar field F, and is correctly written as V(F). Everybody, but everybody, abuses the notation and writes V for the vector space; we shall do this here, OK with you?

We also agreed that, when using Cartesian coordinates (or some abstract n-dimensional extension of them), we require them to be mutually perpendicular. Specifically we want the "projection" of each x_i on each x_j (i ≠ j) to be zero. We'll see what we really mean by this in a minute.

So now, I'm afraid we need a coupla definitions. The inner product (or scalar product, or dot product) of v, w ∈ V is often written v · w = a (a scalar). So what is meant by v · w? Let's see this in longhand; let v = ∑_i a_i x_i and w = ∑_j b_j x_j. Then

v · w = ∑_i ∑_j a_i b_j (x_i · x_j).

Now this looks highly scary, right? So let me introduce you to a guy who will be a good friend in what follows, the Kronecker delta. This is defined as δ_ij = 1 if i = j, otherwise δ_ij = 0. So we find that x_i · x_j = δ_ij, and therefore

v · w = ∑_i ∑_j a_i b_j δ_ij = ∑_i a_i b_i.

So, to summarise, v · w = a_1 b_1 + a_2 b_2 + ... + a_n b_n, a scalar. Now can you see why it's sometimes called the scalar product? Any volunteers?

Phew, I'm whacked, typesetting here is such hard work! Any questions yet?

#8 2007-04-18 05:20:18
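The collapse of the double sum under the Kronecker delta can be checked numerically; here is a Python sketch (function names are mine):

```python
def delta(i, j):
    """Kronecker delta: 1 if i == j, else 0."""
    return 1 if i == j else 0

def inner_full(v, w):
    """v . w = sum_i sum_j a_i b_j (x_i . x_j), using x_i . x_j = delta(i, j)."""
    return sum(v[i] * w[j] * delta(i, j)
               for i in range(len(v)) for j in range(len(w)))

def inner_short(v, w):
    """The collapsed form: sum_i a_i b_i."""
    return sum(a * b for a, b in zip(v, w))

v = [1, 2, 3]
w = [4, 5, 6]
print(inner_full(v, w))   # 32
print(inner_short(v, w))  # 32 -- the delta kills every i != j term
```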
Re: Vector spaces.

Just a concern; when you say that x_i · x_j = δ^i_j you are assuming an orthonormal basis in Cartesian coordinates (where the metric tensor is just the identity, δ^i_j). Perhaps you should make this assumption clearer. This is certainly fine in an introduction, as the inner product is usually given as <v, w> = a_i b^i when first introduced (at that stage you are usually not worried about vector spaces which use other inner products). Overall I just feel that the inner product section is a bit shaky. For example, you never tell the reader that x_i · x_j = δ^i_j even though you substitute it into your formula out of nowhere.

#9 2007-04-18 06:23:11
Re: Vector spaces.
You're right, I am; I was kinda glossing over that at this stage. Maybe it was confusing? You are jumping a little ahead when you equate good ol' v · w with (v, w), but yes, I was getting to that (it's purely a notational convention). But I really don't think, in the present context, you should have both raised and lowered indices on the Kronecker delta; I don't think that makes any sense (but see below).
Ah, well, again you are jumping ahead of me! Your a_i b^i notation usually refers to the product of the components of a dual vector with its corresponding vector, as I'm sure you know. I was coming to the dual space in due course.
Well, I never did claim that equality; both my indices were lowered, which I think is the correct way to express it (I can explain in tensor language if you insist). But, yeah, OK, let's tidy it up, one or the other of us; but if you do want to have a pop (feel free!), make sure we are both using the same notation.

#10 2007-04-19 00:06:10
Re: Vector spaces.

So, lemme see if I can make the inner product easier. We can think of our vector v = ∑_i a_i x_i as projecting a_i units on the i-th axis. But we can ask the same question of two vectors: what is the "shadow" that, say, v casts on w? Obviously this is at a maximum when v and w are parallel, and at a minimum when they are perpendicular (remember this, it will become important soon enough).

OK, so in the equation I wrote for the inner product, what's going on? Specifically, why did I switch indices on b, and what happened to the coordinates, the x's? Well, it makes no sense to ask, in effect, what is v's projection in the x direction on w's projection on the y-axis. So we only use like pairs of coordinates and components, in which case we don't need to write down the coordinates. In fact it's better if we don't, since we want to emphasize the fact that the inner product is by definition scalar. This is what the Kronecker delta did for us. It is the exact mathematical equivalent of setting all the diagonal entries of a square matrix to 1, and all off-diagonal entries to 0!

One last thing. Zhylliolom wrote the inner product as something a bit like this: <v, w>. I prefer (v, w), a symbol reserved in general for an ordered pair. There is a very good reason for this preference of mine, which we will come to in due course. But as always, I've rambled on too long!

Last edited by ben (2007-04-19 08:13:29)
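The "shadow" picture, and the delta-as-identity-matrix remark, can both be sketched in Python (assuming the usual projection formula v · w / |w|; the names are mine):

```python
import math

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

def shadow(v, w):
    """Length of the projection ("shadow") of v on the direction of w."""
    return dot(v, w) / norm(w)

v = [3.0, 0.0]
print(shadow(v, [2.0, 0.0]))  # 3.0: parallel, the shadow is all of v
print(shadow(v, [0.0, 2.0]))  # 0.0: perpendicular, no shadow at all

# The Kronecker delta written out as a matrix really is the identity:
n = 3
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print(identity)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```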