Lesson 6 - Probability Distributions


Random Variable  - A variable whose values are determined by chance.

Probability distribution -  Consists of the values a random variable can assume and their corresponding probabilities


Probability Distributions - 2 requirements

(1.)  The sum of the probabilities of all the events in the sample space must equal 1.

(2.)  The probability of each event in the sample space must be between 
0 and 1 [inclusive].


Mean, Variance, and Standard Deviation:

From Chapter 3:

Mean:    μ = (Σ X)/N

Here this formula can’t be used.  For example, how would you get the mean number of spots that would show up when you rolled a particular die?  If you rolled it 100 times (N = 100), added up the spots, and divided by 100, you’d get an approximate answer, but to get an exact answer N would have to be infinite!


So how do we find the mean of a probability distribution?

To find the mean of any probability distribution, you must multiply each possible outcome in the sample space by its corresponding probability of occurring, then sum the products!

Formula for the Mean:

The mean of a probability distribution is given by:

μ = X₁·P(X₁) + X₂·P(X₂) + X₃·P(X₃) + ... + Xₙ·P(Xₙ)
or μ = Σ X·P(X)

where X₁, etc., are the outcomes and P(X₁), etc., are their corresponding probabilities.

Now that we have the mean of a probability distribution, how can we determine the spread of the distribution?  From Chapter 3 we were given formulas for the variance and standard deviation:
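The die example above can be checked with a short Python sketch (Python is not part of these notes; the numbers follow the fair-die example). Each outcome is multiplied by its probability and the products are summed:

```python
from fractions import Fraction

# Mean of a probability distribution: multiply each outcome by its
# probability, then sum the products (mu = sum of X * P(X)).
# Example: number of spots on one roll of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
probabilities = [Fraction(1, 6)] * 6

mu = sum(x * p for x, p in zip(outcomes, probabilities))
print(mu)  # 7/2, i.e. 3.5
```

Note that 3.5 is the exact answer that rolling the die N times could only approximate.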
σ² = Σ(X − μ)²/N

σ = √[Σ(X − μ)²/N]

But again, to get an exact answer, N would have to be infinite.  Therefore we introduce the following formulas:


Formula for the Variance:

σ² = Σ[(X − μ)²·P(X)]

But this formula can be tedious, subtracting the mean from each entry, so an algebraically equivalent formula can be derived that is much simpler to calculate:

σ² = Σ[X²·P(X)] − μ²

Formula for the Standard Deviation:
σ = √(Σ[X²·P(X)] − μ²)
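A quick Python check (continuing the fair-die example; not part of the original notes) confirms that the definitional formula and the algebraic shortcut give the same variance:

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
mu = sum(x * p for x, p in zip(outcomes, probs))

# Definitional formula: sigma^2 = sum of (X - mu)^2 * P(X)
var_def = sum((x - mu) ** 2 * p for x, p in zip(outcomes, probs))

# Shortcut formula: sigma^2 = sum of X^2 * P(X) - mu^2
var_short = sum(x ** 2 * p for x, p in zip(outcomes, probs)) - mu ** 2

print(var_def, var_short)  # both 35/12
```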


The expected value of a discrete random variable of a probability distribution is the theoretical average of the variable (aka the mean!)

 E(X) = μ = Σ X·P(X)

The expected value is really a measure of the average gain or loss in a game of chance.

If the expected value of a game of chance is zero, that game is said to be fair.
If the expected value is negative, the game favors the house.
If the expected value is positive, the game favors you (as if this ever happens).
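As a sketch of this idea, here is a hypothetical raffle (the numbers are made up for illustration and are not from the notes): 1000 tickets are sold at $1 each, and one ticket wins a $500 prize.

```python
from fractions import Fraction

# Hypothetical raffle: 1000 tickets at $1 each, one $500 prize.
# Net gain X: +$499 if you win (prize minus ticket cost), -$1 if you lose.
gains = [Fraction(499), Fraction(-1)]
probs = [Fraction(1, 1000), Fraction(999, 1000)]

expected = sum(x * p for x, p in zip(gains, probs))
print(expected)  # -1/2: on average you lose 50 cents per ticket
```

The expected value is negative, so (as usual) the game favors the house.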


The Binomial Distribution

Many probability problems have only two possible outcomes, or can be reduced to two.

Flip a coin?     (heads or tails)
Husker playoff game?   (win or lose)
Test question?    (right or wrong)

Situations like these are referred to as binomial experiments, and will satisfy the following four requirements:

1.  Each trial can have only two outcomes, or outcomes that can be reduced to two outcomes.  These can be thought of as either success or failure.

2.  There must be a fixed number of trials.

3.  The outcomes of each trial must be independent.

4.  The probability of success must remain the same for each trial.

A binomial experiment and its results yield the binomial distribution.  This leads us to the formula:

  P(X) = {n!/[(n − X)!·X!]} · p^X · q^(n−X)

 P(X)  Probability of X successes
 n The number of trials
 p The mathematical probability of success
 q The mathematical probability of failure   (or, 1 - p)

How does this formula work?

If you look at the first part of the formula (the part in the { }), you may notice that this is exactly the same formula we used when doing combinations!  This gives us the number of ways to get X successes from n trials.  Now, however, we have to figure in the mathematical probability (p) for each of these X successes (remember requirement 4) and the mathematical probability (q) for each of the remaining n − X failures.  This yields the formula above!
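The formula translates directly into a short Python sketch (not part of the original notes); `math.comb` supplies the combinations part in the { }:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X) = C(n, x) * p**x * q**(n - x), with q = 1 - p."""
    q = 1 - p
    return comb(n, x) * p ** x * q ** (n - x)

# Example: probability of exactly 2 heads in 3 flips of a fair coin.
print(binomial_pmf(2, 3, 0.5))  # 0.375
```

There are C(3, 2) = 3 ways to place the two heads, each with probability (0.5)²(0.5)¹ = 0.125, giving 0.375.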


Computing these probabilities can be pretty boring, but thankfully tables have been compiled for selected values of n and p. (See Table 2, Page 597)

Mean, Variance, and Standard Deviation for the Binomial Distribution

The mean, variance, and standard deviation for the binomial distribution can be computed quite easily from the number of trials (n), the mathematical probability of success (p) and the mathematical probability of failure (q):

Mean:  μ = np

Variance:  σ² = npq

Standard Deviation:  σ = √(npq)
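These shortcuts can be cross-checked against the general formula μ = Σ X·P(X) from earlier in the lesson (a Python sketch, using the coin-flip numbers n = 3, p = 0.5 for illustration):

```python
from math import comb, sqrt

n, p = 3, 0.5
q = 1 - p

mu = n * p          # mean:               mu = np
var = n * p * q     # variance:           sigma^2 = npq
sd = sqrt(var)      # standard deviation: sigma = sqrt(npq)
print(mu, var)      # 1.5 0.75

# Cross-check the mean against the general formula mu = sum of X * P(X):
pmf = [comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]
print(sum(x * P for x, P in zip(range(n + 1), pmf)))  # 1.5
```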

Other Types of Distributions

Multinomial Distribution

If each trial in an experiment has more than two outcomes, then we must use the multinomial distribution.  This distribution is discussed in Schaum's Outlines: Statistics, Chapter 7 (see syllabus).

The same general requirements apply here as did in the Binomial distribution (except for the restriction of two outcomes only):

1.  There must be a fixed number of trials

2.  The outcomes of each trial must be independent

3.  The probability of success must remain the same for each trial

Note:  The Multinomial Distribution is a general distribution (three or more outcomes possible); therefore the Binomial Distribution is really just a special case of the multinomial distribution when only two outcomes are possible.
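The multinomial formula itself is left to the Schaum's chapter, but the standard form is P = [n!/(x₁!·x₂!·...·xₖ!)]·p₁^x₁·p₂^x₂·...·pₖ^xₖ, and a sketch of it in Python (the die example is mine, for illustration) shows how the binomial { } term generalizes:

```python
from math import factorial, prod
from fractions import Fraction

def multinomial_pmf(counts, probs):
    """n!/(x1! * ... * xk!) * p1**x1 * ... * pk**xk"""
    n = sum(counts)
    coeff = factorial(n)
    for x in counts:
        coeff //= factorial(x)
    return coeff * prod(p ** x for p, x in zip(probs, counts))

# Example: roll a fair die 6 times; probability each face shows exactly once.
p = multinomial_pmf([1] * 6, [Fraction(1, 6)] * 6)
print(p)  # 5/324
```

With only two outcome categories, `counts = [x, n - x]` and `probs = [p, q]`, this reduces to the binomial formula above.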

The Poisson Distribution

This probability distribution is useful when n is large, p is small, and the independent events occur over a period of time.  It can also be used when a density of items is distributed over a given area or volume.  The formula is given by:

 P(X; λ) = (e^(−λ)·λ^X)/X!

Again, the computation of this equation will drive nearly anyone over the brink if done enough times, so our author has thoughtfully included a set of tables for selected values of X and λ.
This distribution is discussed in Schaum's Outlines: Statistics, Chapter 7 (see syllabus).
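For values not in the tables, the Poisson formula is easy to evaluate directly (a Python sketch; the phone-call example and rate are mine, for illustration):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X; lambda) = e**(-lambda) * lambda**X / X!"""
    return exp(-lam) * lam ** x / factorial(x)

# Example: if calls arrive at an average rate of lambda = 2 per minute,
# the probability of exactly 3 calls in a given minute:
print(round(poisson_pmf(3, 2), 4))  # 0.1804
```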


Read Chapter 5, sections 5.1 through 5.4. Please go to Lesson 7.