Sunday, October 12, 2014

Confidence Level and Confidence Interval

Being confident makes one more reassured. Briefly, the explanations below cover two-sided confidence levels/intervals, to keep the idea simple. Saying "two sided" gives the initial impression that there are two limits, and indeed there are: an upper and a lower limit, with the confidence interval lying in between.

Example: Let's look at the population of a specific mobile phone model. Suppose we are now interested in the 'weight' property. We found that weight follows a normal distribution with a mean of 120 grams and a standard deviation of 1.4 grams.

Weight ~ Normal (Mu, Sigma) = Normal (120, 1.4)

This means that the majority of mobiles tested will weigh very close to 120 grams. Yes, there will be fluctuations above and below the mean value, but they will still be relatively close to it.

Suppose a question: do you expect weights like 121, 119.5, 122.1, 118.9?
Answer: Yes, I surely expect such values.

Another question: do you expect weights like 158, 67, 140.8, 82.5?
Answer: No! These seem impossible.

For any normal distribution, the possible values extend to + and - infinity. But, as we have seen, it makes no sense to consider values far away from the mean, as they will mostly not occur (in statistics we say they have extremely low probability).

Here it comes: a confidence level means we consider only those values (within the confidence interval) that will mostly be seen. The most popular confidence level is 95%, which means we focus on 95% of the possible data/values. Values far from the mean will mostly not be seen, so we sacrifice them (5% of the data).

The 5% is usually called alpha, the percentage of data sacrificed for being so far from the mean. With a two-sided confidence interval, alpha is divided into two halves (0.025 each): upper and lower, which logically represent the far values above and below the mean.

For the example above, simple calculations lead to the 95% confidence interval [117.26, 122.74], giving the [lower, upper] limits respectively. This result simply means that 95% of mobiles will weigh between 117.26 and 122.74 grams.
Note: the confidence interval depends on the population variance (or standard deviation). A larger variance means a wider confidence interval, and vice versa.
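The interval above can be reproduced in a few lines of Python; this is just a sketch using the standard library's NormalDist, with the same numbers as the example (Mu = 120, Sigma = 1.4):

```python
from statistics import NormalDist

mu, sigma = 120.0, 1.4   # population mean and standard deviation (grams)
alpha = 0.05             # 1 - confidence level (95%)

# Two-sided: cut alpha/2 = 0.025 from each tail of the standard normal.
z = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
lower, upper = mu - z * sigma, mu + z * sigma
print(round(lower, 2), round(upper, 2))   # 117.26 122.74
```

A smaller sigma would shrink the interval around 120, matching the note above.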
___

Tuesday, September 30, 2014

Conclusions of Hypothesis Testing

A general hypothesis is defined as follows (e.g. a hypothesis on the population mean):

H0: Mu = Mu0
H1: Mu !=  Mu0

OK, whether we have a two-sided or a one-sided hypothesis, after performing the checks and statistical tests, our conclusion should be one of the following:
  • Rejecting the null hypothesis (H0).
  • Failing to reject the null hypothesis (H0).
The following statements for conclusions are not accurate:
  • Accepting the null hypothesis (H0).
  • Accepting the alternative hypothesis (H1).



But why?

When we fail to reject H0, it does not mean we accept H0 as a fact, because we still could not prove it as a fact. What happened is that we failed to prove it false. It goes like this: we suspected that new factors may have affected the population mean, so we gathered all possible evidence and ran the checks, but all the checks failed to prove our suspicion.

Likewise, rejecting H0 does not mean accepting H1 as a fact. What happens in this case is that we prove, statistically, that H0 is false, but not necessarily that H1 is true. Simply: our evidence and checks proved that the mean has changed, but we still have no guarantee that it changed into the H1 region, or that the change was not due to different reasons/factors.
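To make the two possible conclusions concrete, here is a minimal one-sample z-test sketch in Python (all the numbers, mu0, sigma, n and x_bar, are made up for illustration):

```python
from statistics import NormalDist

# Hypothetical setup: H0: Mu = 500 vs H1: Mu != 500, with known sigma.
mu0, sigma, n = 500.0, 2.0, 36
x_bar = 499.2   # made-up observed sample mean

z = (x_bar - mu0) / (sigma / n ** 0.5)        # test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

if p_value < 0.05:
    print("reject H0")            # the mean appears to have changed
else:
    print("fail to reject H0")    # note: NOT "accept H0"
```

With these numbers, z = -2.4 and the p-value is about 0.016, so the test rejects H0 at the 5% level.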
___

Saturday, September 27, 2014

Null and Alternative Hypotheses

Fine... Constructing a statistical hypothesis mainly means defining what are called the null and the alternative hypotheses. In academic life, students are usually given the hypothesis to test. But in research or a real experiment, constructing the hypotheses correctly is a vital step toward inference or a statistical decision.

Let's focus on hypotheses for the population mean, for simplicity...

The null hypothesis is mainly our initial information or primary belief about the population. Let's consider the production of 500 ml bottles. The 500 ml is the mean capacity of the bottles. Since this is the information we know from previous knowledge, it will be our null hypothesis.

We write:
H0: Mu = 500

Note: the null hypothesis always has an equality sign!



A hypothesis test is usually done when we worry that some factors have affected the population, or that the population has changed for any possible reason.
OK, so the null hypothesis is the information we knew before the suspected change. Here comes the alternative hypothesis. The alternative hypothesis is usually the region into which we suspect or worry that the population has changed.

For the bottle capacity example: a change in the 500 ml bottle capacity (for any reason) is a bad issue to encounter. For production, it's bad for our bottles to be either less than or greater than 500 ml. Lower capacity means unmet regulations; higher capacity means extra content added.

This is called a two-sided hypothesis because a change in either direction is undesired. Thus, we define our alternative hypothesis as:


H1: Mu != 500 (!= here means not equal)

The other type of hypothesis is one-sided, when we are interested only in (> or <) in the alternative hypothesis. In such cases, we suspect/worry only that some factors changed our population in one direction (up only, or down only).
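The choice between the two types matters in practice: the same test statistic can give different verdicts. A small sketch (the statistic value 1.8 is arbitrary, just for illustration):

```python
from statistics import NormalDist

z = 1.8  # an arbitrary test statistic value, for illustration

p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))  # H1: Mu != 500
p_one_sided = 1 - NormalDist().cdf(z)             # H1: Mu > 500

# At alpha = 0.05 the one-sided test rejects H0, but the two-sided does not:
print(round(p_two_sided, 4), round(p_one_sided, 4))  # 0.0719 0.0359
```

This is why the alternative should be chosen from what we actually suspect about the population, before looking at the data.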
____

Wednesday, September 24, 2014

Understanding the distribution of sample mean (x_bar)

Cool, say now we have a huge population with characteristics (Mu, Sigma^2). When doing a study by sampling, we take a random sample (of size n items), perform the study on the sample, and then conclude results back for the population.

From the Central Limit Theorem, we know that the sample mean will be (approximately, for a large enough n) normally distributed, regardless of what the population distribution is, such that:

x_bar ~ N (Mu, Sigma^2/n)
or say:
Expected (x_bar) = Mu
Variance (x_bar) = Sigma^2/n


Well, let's see a simple illustrative example: Suppose we have a population with mean Mu = 100.
Now, we take a sample and compute the sample mean, x_bar. We will mostly get x_bar near 100, but not exactly 100. OK, let's take another 9 separate samples... suppose these results:

First sample --> x_bar = 99.8
Second sample --> x_bar = 100.1
..
..
..
10th sample --> x_bar = 100.3

What we see is that the sample mean is usually close to the real population mean; that is the meaning of the expected value of x_bar being Mu.

Regarding the variance of the sample mean (x_bar), it will always decrease as the sample size increases (Variance(x_bar) = Sigma^2/n), which is natural behavior. We may think of it this way: the larger the sample size we use, the more precise our estimate of the population mean tends to be.
When the sample size goes to infinity (theoretically), the variance of x_bar will be zero. The reason is that the sample will be exactly the same as the population (all items). Thus, the sample mean will give the real, exact value of the population mean. There will be no variability in the sample mean because it fully represents the population mean.
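We can check the Sigma^2/n behavior with a small simulation (the population parameters Mu = 100 and Sigma = 5 are made up for this sketch):

```python
import random
from statistics import mean, pvariance

random.seed(0)
mu, sigma = 100.0, 5.0   # hypothetical population: Sigma^2 = 25

def xbar_variance(n, reps=20000):
    """Empirical variance of x_bar over many repeated samples of size n."""
    means = [mean(random.gauss(mu, sigma) for _ in range(n))
             for _ in range(reps)]
    return pvariance(means)

for n in (4, 16, 64):
    print(n, round(xbar_variance(n), 3))  # close to 25/n: 6.25, 1.5625, 0.39
```

Each printed variance lands near Sigma^2/n, and quadrupling n cuts the variance of x_bar by about four.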
___


Tuesday, September 23, 2014

The Fact and the Hypothesis

A good fact to state is that we can't easily know the exact true values/parameters of a population. Mostly, population parameters also change slightly over time and/or are affected by different surrounding factors.


Example: a production line for 500 ml bottles is assumed to produce a population of bottles such that the mean capacity is exactly 500 ml.

Nice, but what happens in reality?

In reality, several factors will mostly affect the production: human factors, machine factors, environment temperature... etc. Also, each new bottle will contribute to the population mean value. This means a continuous slight change, either up or down, of the mean capacity.

Here comes the hypothesis!

As you see, the ground-truth value of the population mean is difficult to determine exactly. However, we have general assumptions/expectations.
OK, constructing a hypothesis should always be driven by our initial knowledge and expectations about the population.
Testing the hypothesis means applying statistical checking methods to judge these beliefs. The test results should push our beliefs toward either:
  • Failure to say the parameter (e.g. mean value) has changed. Example: the 500 ml bottle capacity should be considered stable/unchanged at 500 ml. This conclusion is known as (failing to reject the null hypothesis).
  • Rejecting this initial assumption (we will later call it the null hypothesis). This means some factors affected the production and the mean value has changed (up or down). Then, further monitoring/improvements should be decided to solve the issue. This conclusion is known as (rejecting the null hypothesis).
___

Monday, September 22, 2014

Standard Normal Distribution, what does Z mean?

You mostly know it: the Standard Normal Distribution is the special case of the Normal Distribution in which:

Mean: Mu=0.0
Variance: Sigma^2=1.0

Cool: the members of the family of normal distributions mainly vary in their mean value and/or their variance.



The standard one plays the role of the reference distribution. We can convert any normal random variable to its corresponding interpretation in the standard form. Hence, we can simplify different computations using only the standard normal distribution.

OK, let's assume X is a normal random variable with mean Mu and variance Sigma^2. We can convert it to a standard normally distributed random variable as follows:

Z=(X-Mu)/Sigma

Here, Z follows the standard normal distribution. OK, but what does this mean?
  • Any point of X (with Mu, Sigma) can be dealt with exactly as the converted point of Z, with mean Mu = 0.0 and Sigma = 1.0.
  • The numerator is the distance or difference between X and the mean Mu. The division by Sigma asks: how many Sigmas is that difference? In total: Z tells us how many Sigmas X lies from Mu. This information is sufficient for further computations using only the standard normal distribution.
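A quick numeric check of this conversion (the values Mu = 120, Sigma = 1.4 and x = 122.8 are just illustrative):

```python
from statistics import NormalDist

mu, sigma = 120.0, 1.4   # illustrative normal parameters
x = 122.8                # a particular observation

z = (x - mu) / sigma     # how many Sigmas x lies from Mu
print(round(z, 2))       # 2.0

# Probabilities computed in X and in its standardized Z agree:
p_x = NormalDist(mu, sigma).cdf(x)
p_z = NormalDist().cdf(z)
print(round(p_x, 4), round(p_z, 4))  # 0.9772 0.9772
```

So any probability question about X can be answered from one table (or one function) for the standard normal.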
___

Sunday, September 21, 2014

Normal Probability Distribution

Also called the Gaussian distribution. OK, many things in this world tend, and should, to be normally distributed.
Any distribution is a representation of how the information or data is distributed. We mainly look at its central tendency (mean) and variability (variance). That's why the normal distribution is usually written as:

X ~ N (Mu, Sigma^2)



For example: the weight of most young adults will normally be centered around some value. Yes, you're right, there is diversity: some are slim and some are obese.

We may expect the average weight of people (example: ages 20 to 30) to be between 70 and 74 kg. OK, let's take it as 72 kg (this is the mean value).

Let x represent the weight of a random person. Thus,

Expected Value [x] = mean [x] = Mu = 72 kg

If we have a sample, we can compute the variance (Sigma^2) to indicate variability. But here we may think of it as follows:

Variance = Sigma^2 = Expected Value [(x-Mu)^2]
Standard Deviation = Sigma = square root [variance]

Got it? The variance is just the expected (average) squared difference between the values of x and their mean Mu.

Assume that the weights vary (on average) +4 or -4 kg from the mean value. Thus, we have

Sigma approx= 4
Variance approx= 16

We may conclude the probability distribution of young people's weight:

weight = x ~ N (72, 16)

Note: this is just an illustrative example where real information may be different depending on location or other factors.

Facts for any normally distributed data:
  • Within 1 sigma of the mean value (to the left and right), there lies about 68% of the data.
  • Within 2 sigmas of the mean value (to the left and right), there lies about 95% of the data.
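Both percentages can be verified directly from the standard normal CDF, here using Python's standard library:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: Mu = 0, Sigma = 1

for k in (1, 2):
    within = Z.cdf(k) - Z.cdf(-k)   # probability mass within k sigmas
    print(k, round(within, 4))       # 1 -> 0.6827, 2 -> 0.9545
```

These values hold for any normal distribution, since standardizing (Z = (X - Mu)/Sigma) maps "k sigmas from the mean" to the same interval [-k, k].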
___