If a set of observations is distributed as N(μ, σ²), what percentage of the observations will differ from the mean by:
a) less than 1 standard deviation?
b) less than 2 standard deviations?
c) less than 3 standard deviations?
How can I prove this?
Pretend that you took an enormous number of measurements of your variable, which is distributed according to N, and plotted the results. You would get a graph *of N* itself - exactly the same shape. Call f the normalized (unit-area) version of this graph.
Then it should make sense that the probability that a given measurement gives a result between x and x+dx is f(x)dx. So the total probability of getting a result between *any* two points is just the integral of f(x) from the lower point to the upper point (the area under your normal distribution between those two points).
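Written out concretely (using the μ and σ from the N(μ, σ²) notation above, with nothing beyond the usual normal density):

$$
P(a < X < b) = \int_a^b f(x)\,dx, \qquad f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)},
$$

so "within k standard deviations of the mean" is

$$
P(|X-\mu| < k\sigma) = \int_{\mu-k\sigma}^{\mu+k\sigma} f(x)\,dx = \operatorname{erf}\!\left(\frac{k}{\sqrt{2}}\right).
$$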
The trouble is that you can't symbolically integrate the normal distribution in terms of elementary functions (this integral is what defines the error function above). A couple of ways you can proceed:
a) make an approximation (say, a Taylor expansion) and work with that;
b) use a computer and find your answer numerically (see the sketch after this list);
c) just look up the values in a table and believe them, because someone else has already done this work for you!
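To illustrate option b), here is a minimal Python sketch (the function names are just illustrative; it uses only the standard library). It integrates the standard normal density from -k to k with the trapezoid rule and compares the result with the closed form erf(k/√2):

```python
import math

def normal_pdf(z):
    """Standard normal density: exp(-z^2/2) / sqrt(2*pi)."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def prob_within(k, steps=100_000):
    """P(|X - mu| < k*sigma): trapezoid-rule integral of the density from -k to k."""
    dz = 2.0 * k / steps
    total = 0.5 * (normal_pdf(-k) + normal_pdf(k))  # endpoint terms
    for i in range(1, steps):
        total += normal_pdf(-k + i * dz)            # interior points
    return total * dz

for k in (1, 2, 3):
    numeric = prob_within(k)
    exact = math.erf(k / math.sqrt(2))  # closed form via the error function
    print(f"within {k} sigma: {numeric:.4%}  (erf gives {exact:.4%})")
```

Running it gives roughly 68.3%, 95.4%, and 99.7% - exactly the numbers you'd find in the tables from option c).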