Introduction
The concept of probability plays an important role in all problems of science and everyday life that involve an element of uncertainty. Probabilities are defined as relative frequencies or, more exactly, as limits of relative frequencies; the relative frequency is simply the proportion of times an event occurs in the long run.
When an experiment is conducted, such as tossing coins, rolling a die, or sampling to estimate the proportion of defective units, several outcomes or events occur with certain probabilities. These outcomes may be regarded as a variable that takes different values, each value associated with a probability.
The value of this variable depends on chance or probability; such a variable is called a random variable. Random variables that take a finite number of values, or more specifically do not take all values in any particular range, are called discrete random variables.
For example, when 20 coins are tossed, the number of heads obtained is a discrete random variable taking the values 0, 1, …, 20. This is a finite set of values, and within this range the variable does not take values such as 2.8 or 5.7, or any number other than a whole number.
In contrast to a discrete variable, a variable is continuous if it can assume all values on a continuous scale. Measurements of time, length and temperature are made on a continuous scale and may be regarded as examples of continuous variables.
A basic difference between the two types of variables is that for a discrete variable the probability of taking any particular value is defined, whereas for a continuous variable probability is defined only for an interval or range.
The frequency distribution of a discrete random variable is graphically represented as a histogram, with the areas of the rectangles proportional to the class frequencies. For a continuous variable, the frequency distribution is represented as a smooth curve.
Frequency distributions are broadly classified under two heads: observed and theoretical.
Observed frequency distributions are based on observation and experimentation. As distinguished from this type of distribution, which is based on actual observation, it is possible to deduce mathematically what the frequency distributions of certain populations should be.
Distributions expected on the basis of previous experience or theoretical considerations are known as theoretical distributions or probability distributions. A probability distribution is a mutually exclusive and exhaustive compilation of all the random events that can occur for a particular process, together with the probability of each event's occurring.
It is a mathematical model that represents the distributions of the universe, obtained either from a theoretical population or from the actual world; the distribution shows the results we would obtain if we took many probability samples and computed the statistics for each sample. A table listing all possible values that a random variable can take on, together with the associated probabilities, is called a probability distribution.
The probability distribution of X, where X is the number of spots showing when a six-sided symmetric die is rolled, is given below:

x:        1    2    3    4    5    6
P(X = x): 1/6  1/6  1/6  1/6  1/6  1/6

The probability distribution lists the different probabilities assigned to the values taken by the random variable X.
Knowledge of the expected behaviour of a phenomenon, i.e. the expected frequency distribution, is of great help in a large number of practical problems. Theoretical distributions serve as benchmarks against which to compare observed distributions, and they act as substitutes for actual distributions when the latter are costly to obtain or cannot be obtained at all.
We now introduce a few discrete and continuous probability distributions that have proved particularly useful as models for real-life phenomena. In every case the distribution will be specified by presenting the probability function of the random variable.
Discrete Probability Distributions
Uniform Distribution
A uniform distribution is one for which the probability of occurrence is the same for all values of X. It is sometimes called a rectangular distribution. For example, if a fair die is thrown, the probability of obtaining any one of the six possible outcomes is 1/6. Since all outcomes are equally probable, the distribution is uniform.
Definition: If the random variable X assumes the values $x_1, x_2, \ldots, x_k$ with equal probabilities, then the discrete uniform distribution is given by
$f(x; k) = \frac{1}{k}, \quad x = x_1, x_2, \ldots, x_k.$
Suppose that a plant is selected at random from a plot of 10 plants to record the height. Each plant has the same probability 1/10 of being selected. If it is assumed that the plants have been numbered in some way from 1 to 10, the distribution is uniform with f(x;10) = 1/10 for x=1,…,10.
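As a quick illustration, the snippet below simulates the plant-selection example. It is a minimal sketch using only Python's standard library (not part of the original text), checking the relative frequency of one plant against the theoretical 1/10.

```python
# Minimal sketch of the discrete uniform model above: each of the
# k = 10 plants is equally likely to be selected, so f(x; 10) = 1/10.
import random

k = 10
print("Theoretical probability:", 1 / k)

# Empirical check: relative frequency of plant number 3 in many draws
draws = [random.randint(1, k) for _ in range(100_000)]
print("Observed relative frequency:", draws.count(3) / len(draws))
```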
Binomial Distribution
The binomial distribution is a probability distribution expressing the probability of one set of dichotomous alternatives, i.e. success or failure. More precisely, the binomial distribution refers to a sequence of events possessing the following properties:
1. There is a fixed number n of trials.
2. Each trial results in one of two mutually exclusive outcomes, 'success' or 'failure'.
3. The probability of success p remains constant from trial to trial.
4. The trials are independent.
Consider such a sequence of n independent trials. If we are interested in the probability of x successes in the n trials, we get a binomial distribution, where x takes the values 0, 1, …, n.
Definition: A random variable X is said to follow a binomial distribution with parameters n and p if its probability function is given by
$P(X = x) = \binom{n}{x} p^x q^{n-x}, \quad x = 0, 1, \ldots, n,$
where $q = 1 - p$ is the probability of failure.
The probabilities of 0, 1, …, n successes are given by the successive terms of the binomial expansion $(q+p)^n$. The probable frequencies of the various outcomes in N sets of n trials are $N(q+p)^n$. The frequencies obtained from this expression are known as expected or theoretical frequencies, whereas the frequencies actually obtained from experiments are called observed frequencies. Generally there is some difference between the observed and expected frequencies, but the difference becomes smaller and smaller as N increases.
Constants of Binomial Distribution
The principal constants of the binomial distribution are its mean $np$, variance $npq$ and standard deviation $\sqrt{npq}$.
Properties of Binomial Distribution
Importance of Binomial Distribution
The binomial probability distribution is a discrete probability distribution that is useful in describing an enormous variety of real-life events. For example, an experimenter may want to know the probability of obtaining diseased trees in a random sample of 10 trees if 10 percent of the trees are diseased; the answer can be obtained from the binomial probability distribution. The binomial distribution can also be used to find the distribution of the number of seeds that germinate out of a lot of seeds sown.
The incidence of disease in a forest is such that 20% of the trees have the chance of being infected. What is the probability that, out of six trees selected, 4 or more will show symptoms of the disease?
Solution: The probability of a tree being infected is $p = 0.2$ and the probability of not being infected is $q = 0.8$. Hence the probability of 4 or more trees being infected out of 6 is
$P(X \ge 4) = \binom{6}{4}(0.2)^4(0.8)^2 + \binom{6}{5}(0.2)^5(0.8) + \binom{6}{6}(0.2)^6 = 0.01536 + 0.001536 + 0.000064 \approx 0.017.$
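As an independent check on this arithmetic, a short computation with scipy.stats (an assumed dependency, not used in the original text) reproduces the same probability:

```python
# Diseased-trees example: n = 6, p = 0.2; P(X >= 4) by direct summation
# and via the binomial survival function.
from scipy.stats import binom

n, p = 6, 0.2
direct = sum(binom.pmf(x, n, p) for x in range(4, n + 1))
print(round(direct, 5))              # 0.01696
print(round(binom.sf(3, n, p), 5))   # P(X > 3) = P(X >= 4), same value
```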
Fitting a Binomial Distribution
When a binomial distribution is to be fitted to observed data, the following procedure is adopted: determine n and the number of sets N; estimate p by equating the sample mean to $np$, so that $\hat{p} = \bar{x}/n$; and obtain the expected frequencies from the successive terms of $N(q+p)^n$.
The generalization of the binomial distribution is the multinomial distribution. Whereas in the binomial distribution there are only two possible outcomes on each experimental trial, in the multinomial distribution there are more than two possible outcomes on each trial.
The assumptions underlying the multinomial distribution are analogous to those of the binomial distribution: there is a fixed number n of independent trials; each trial results in one of k mutually exclusive outcomes; and the probabilities $p_1, p_2, \ldots, p_k$ of the outcomes remain constant from trial to trial, with $p_1 + p_2 + \cdots + p_k = 1$.
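A minimal sketch of these assumptions, using scipy.stats.multinomial with illustrative numbers (a fair die roll grouped into three equally likely outcome classes; the counts are invented for illustration):

```python
# Multinomial sketch: n = 10 independent rolls, each classified into one of
# k = 3 equally likely outcome classes (1-2, 3-4, 5-6 on a fair die).
from scipy.stats import multinomial

n, probs = 10, [1/3, 1/3, 1/3]
# Probability of observing exactly 3, 4 and 3 outcomes in the three classes
print(multinomial.pmf([3, 4, 3], n=n, p=probs))
```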
Poisson Distribution
The Poisson distribution is a discrete probability distribution that is very widely used in statistical work. It is the limiting form of the binomial distribution as n becomes infinitely large and p approaches zero in such a way that $np = \lambda$ remains constant. A Poisson distribution may be expected in cases where the chance of any individual event being a success is small; the distribution is used to describe the behaviour of rare events.
Definition: A random variable X is said to follow a Poisson distribution with parameter $\lambda$ if its probability function is given by
$P(X = x) = \frac{e^{-\lambda} \lambda^x}{x!}, \quad x = 0, 1, 2, \ldots$
Constants of Poisson Distribution
The principal constants of the Poisson distribution are its mean $\lambda$ and variance $\lambda$; for a Poisson variate the mean and the variance are equal.
Properties of Poisson Distribution
Importance of Poisson Distribution
In general, the Poisson distribution describes the behaviour of discrete variates where the probability of occurrence of the event is small and the total number of possible cases is sufficiently large.
For example, it is used in quality control statistics to count the number of defects in an item, in biology to count the number of bacteria, in physics to count the number of particles emitted from a radioactive substance, and in insurance problems to count the number of casualties, etc.
The Poisson distribution is also used in problems dealing with the inspection of manufactured products, where the probability that any piece is defective is very small and the lots are very large. It is also used to find the probability of mutations in a DNA segment.
The only parameter needed to generate this distribution is $\lambda$, the average number of occurrences per interval. Moreover, in biology, situations often occur where knowing the probability of no events, P(0), in an interval is useful. When x = 0, the Poisson equation simplifies to
$P(0) = e^{-\lambda}.$
For example, we might want to know the fraction of uninfected cells for a known average multiplicity of infection (MOI) $\lambda$. Or we may need to know the average mutation rate per base pair when our sequencing detects nearly all wild-type sequence, i.e. P(0). In each case, if we can determine either $\lambda$ or P(0), we can solve for the other.
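The sketch below works both directions of this relationship; the MOI values and the observed uninfected fraction are illustrative numbers, not data from the text.

```python
# P(0) = e^{-lambda}: fraction of uninfected cells at a given MOI, and
# recovering lambda from an observed P(0).
import math

for moi in (0.1, 1.0, 3.0):                    # illustrative MOI values
    print(f"MOI = {moi}: uninfected fraction = {math.exp(-moi):.4f}")

p0_observed = 0.37                             # hypothetical measurement
print(f"implied lambda = {-math.log(p0_observed):.3f}")
```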
The Standard Deviation (SD): the uncertainty (expressed as ±1 SD) in the measurement of a number of random events equals the square root of the total number of events, i.e. $\mathrm{SD} = \sqrt{N}$.
Radioactive decay and its detection are used to illustrate this feature of the Poisson distribution for two reasons. Most biologists have some experience with radioactivity measurements; more importantly, radioactive decay is a genuinely random process, one of the few in nature believed to be truly random. For this latter reason, we can make confident predictions about its behaviour.
Suppose there is a radioactive sample that registers about 1000 cpm. The measurements are to be reported along with an uncertainty expressed as a standard deviation (SD). We could count the sample 10 times for one minute each and then calculate the mean and SD of the 10 determinations.
However, the important property of processes described by the Poisson distribution is that the SD equals the square root of the total counts registered. To illustrate, suppose the sample is counted for different time intervals: the reported cpm is total counts/time, the SD (in cpm) is SD(counts)/time, and the relative error is SD (in cpm)/reported cpm, expressed as a percentage.
A 100-fold increase in counting time increases the SD of the total counts, but only by 10-fold. At the same time, the relative error decreases by 10-fold.
The general point here is that the experimenter can report the 1000 cpm value to any degree of precision desired simply by choosing the appropriate time interval for measurement.
There is no advantage whatever in using multiple counting periods. Thus, counting error is distinguished from experimental error in that the latter can only be estimated with multiple measurements.
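The idealized arithmetic behind these statements can be reproduced directly; the sketch below assumes an exactly 1000 cpm source and illustrative counting times.

```python
# Counting statistics for a ~1000 cpm source: SD(total counts) = sqrt(counts),
# so SD in cpm = sqrt(counts)/time and relative error = 1/sqrt(counts).
import math

rate_cpm = 1000
for minutes in (1, 10, 100):                   # illustrative counting times
    counts = rate_cpm * minutes
    sd_cpm = math.sqrt(counts) / minutes
    rel_err_pct = 100 * sd_cpm / rate_cpm
    print(f"{minutes:>3} min: SD = {sd_cpm:6.2f} cpm, "
          f"relative error = {rel_err_pct:.3f}%")
```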
Fitting a Poisson Distribution
The process of fitting a Poisson distribution involves obtaining the value of $\lambda$, i.e. the average occurrence, and calculating the expected frequency of 0 successes. The other frequencies can then be calculated very easily from the recurrence
$N \cdot P(x+1) = \frac{\lambda}{x+1} \, N \cdot P(x).$
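A sketch of this fitting procedure, with hypothetical observed frequencies (the data are invented purely for illustration):

```python
# Fitting a Poisson distribution: estimate lambda by the sample mean, set
# N*P(0) = N*exp(-lambda), then use N*P(x+1) = lambda/(x+1) * N*P(x).
import math

observed = [142, 156, 69, 27, 5, 1]    # hypothetical frequencies of x = 0..5
N = sum(observed)
lam = sum(x * f for x, f in enumerate(observed)) / N

expected = [N * math.exp(-lam)]
for x in range(len(observed) - 1):
    expected.append(expected[-1] * lam / (x + 1))
print("lambda =", round(lam, 3))
print("expected:", [round(e, 1) for e in expected])
```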
Negative Binomial Distribution
The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following conditions hold:
1. An experiment is performed under the same conditions until a fixed number of successes, say r, is achieved.
2. In each trial there are only two possible outcomes of the experiment, 'success' or 'failure'.
3. The probability of success, denoted by p, remains constant from trial to trial.
4. The trials are independent, i.e. the outcome of any trial or sequence of trials does not affect the outcomes of subsequent trials.
The only difference between the binomial model and the negative binomial model lies in the first condition.
Consider a sequence of Bernoulli trials with p as the probability of success. In the sequence, success and failure occur randomly, and in each trial the probability of success is p. Let us investigate how many trials are needed to reach the r-th success. Here r is fixed; let the number of failures preceding the r-th success be x (= 0, 1, …). The total number of trials performed to reach the r-th success is then x + r, and the probability that the r-th success occurs at the (x + r)-th trial is
$P(X = x) = \binom{x+r-1}{r-1} p^r q^x, \quad x = 0, 1, 2, \ldots$
Suppose that 30% of the items taken from the end of a production line are defective. If items taken from the line are checked until 6 defective items are found, what is the probability that exactly 12 items are examined?
Solution: Let the occurrence of a defective item be a success. Then the probability that there will be 6 failures preceding the 6th success is
$P = \binom{11}{5} (0.3)^6 (0.7)^6 \approx 0.0396.$
Note: If we take r = 1, i.e. the first success, then $P(X = x) = pq^x$, x = 0, 1, 2, …, which is the probability distribution of the number of failures preceding the first success. This distribution is called the geometric distribution.
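Both the worked example and the geometric special case can be checked with scipy.stats (an assumed dependency); note that scipy's geom counts trials up to and including the first success, while nbinom counts failures.

```python
# Negative binomial check: P(6 failures before the 6th success) with p = 0.3,
# i.e. exactly 12 items examined.
from scipy.stats import nbinom, geom

p, r = 0.3, 6
print(round(nbinom.pmf(6, r, p), 4))   # C(11,5) * 0.3^6 * 0.7^6 = 0.0396

# Geometric special case (r = 1): 2 failures before the first success
print(nbinom.pmf(2, 1, p))             # p * q^2 = 0.147
print(geom.pmf(3, p))                  # same event, counted as 3 trials
```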
Hypergeometric Distribution
The hypergeometric distribution occupies a place of great significance in statistical theory. It applies to sampling without replacement from a finite population whose elements can be classified into two categories: one which possesses a certain characteristic and another which does not.
The categories could be male/female, employed/unemployed, etc. When n random selections are made without replacement from the population, successive draws are dependent and the probability of success changes on each draw. The hypergeometric distribution is thus characterised by a finite population divided into two categories, sampling without replacement, and a probability of success that changes from draw to draw.
The hypergeometric distribution, which gives the probability of r successes in a random sample of n elements drawn without replacement, is
$P(X = r) = \frac{\binom{X}{r} \binom{N-X}{n-r}}{\binom{N}{n}}, \quad r = 0, 1, \ldots, [n, X],$
where N is the population size and X the number of successes in the population. The symbol [n, X] means the smaller of n and X. This distribution may be used to estimate the number of wild animals in forests or the number of fish in a lake. The hypergeometric distribution bears a very interesting relationship to the binomial distribution: when N increases without limit, the hypergeometric distribution approaches the binomial distribution. Hence binomial probabilities may be used as approximations to hypergeometric probabilities when n/N is small.
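The approximation claim is easy to check numerically; the population and sample sizes below are illustrative.

```python
# Hypergeometric vs. binomial when n/N is small: N = 1000, X = 200 successes
# in the population, sample of n = 10 without replacement.
from scipy.stats import hypergeom, binom

N, X, n = 1000, 200, 10
for r in range(4):
    ph = hypergeom.pmf(r, N, X, n)   # exact, without replacement
    pb = binom.pmf(r, n, X / N)      # approximation with p = X/N = 0.2
    print(f"r = {r}: hypergeometric = {ph:.4f}, binomial = {pb:.4f}")
```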
Continuous Probability Distribution
Normal Distribution
The normal distribution is the most important distribution in statistics. It is the probability distribution of a continuous random variable, and it is often used to model discrete random variables as well as other continuous random variables.
The basic form of the normal distribution is that of a bell: it has a single mode and is symmetric about its central value. The flexibility of the normal distribution comes from the fact that the curve may be centered over any number on the real line and may be made flat or peaked to correspond to the amount of dispersion in the values of the random variable. This versatility is depicted in the figure below.
[Figure: Shape of the normal distribution]
Many quantitative characteristics have distributions similar in form to the normal distribution's bell shape. For example, the height and weight of people, the IQ of people, the height of trees, the length of leaves, etc. are the types of measurements that produce a random variable that can be successfully approximated by a normal random variable. The values of such random variables are produced by a measuring process, and measurements tend to cluster symmetrically about a central value.
Definition: A random variable X, with mean $\mu$ and variance $\sigma^2$, is said to have a normal distribution if its probability density function is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
on the domain $-\infty < x < \infty$. Here $\mu$ and $\sigma^2$ are the parameters of the distribution, and e is a mathematical constant approximately equal to 2.7183.
Standard Normal Distribution: If X is a normal random variable with mean $\mu$ and standard deviation $\sigma$, then
$Z = \frac{X - \mu}{\sigma}$
is a standard normal variate with mean zero and standard deviation 1. The probability density function of the standard normal variable Z is
$f(z) = \frac{1}{\sqrt{2\pi}} \, e^{-z^2/2}, \quad -\infty < z < \infty.$
Area under the Normal Curve: For a normal variable X, P(a < X < b) is the area under y = f(x) from X = a to X = b, as shown in Fig. 3.2.
[Figure 3.2: Area representing P(a < X < b) for a normal random variable]
The probability that X is between a and b (b > a) can be determined by computing the probability that Z is between $\frac{a-\mu}{\sigma}$ and $\frac{b-\mu}{\sigma}$. It is possible to determine the area in Fig. 3.2 by using tables of areas under the normal curve rather than by performing any mathematical computations.
Probabilities associated with a normal random variable X can be determined from Table 1 given at the end.
As indicated in Fig. 3.3, for any normal distribution 68.27% of the values lie within one standard deviation of the mean, 95.45% within two standard deviations, and 99.73% within three standard deviations. Using the facts that the normal distribution is symmetric about its mean (zero for Z) and that the total area under the curve is 1 (half to the left of zero and half to the right), probabilities of the standard normal variable of the form P(0 < Z < a) are provided in Table 1 at the end. Using this table, the probability that Z lies in any interval on the real line may be determined.
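These standard areas can be reproduced with scipy.stats.norm rather than Table 1; a brief check:

```python
# The 68-95-99.7 rule and a sample Table 1 entry, via the standard normal CDF.
from scipy.stats import norm

for k in (1, 2, 3):
    print(f"P(|Z| < {k}) = {norm.cdf(k) - norm.cdf(-k):.4f}")
    # 0.6827, 0.9545, 0.9973

# P(0 < Z < 1.96), the form of entry tabulated in Table 1
print(round(norm.cdf(1.96) - 0.5, 4))       # 0.4750
```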
Properties of Normal Distribution
Importance of Normal Distribution
Sampling Distributions
The word population or universe in Statistics is used to refer to any collection of individuals or of their attributes or of results of operations which can be numerically specified. Thus, we may speak of the populations of weights, heights of trees, prices of wheat, etc.
A population with a finite number of individuals or members is called a finite population; for instance, the population of ages of twenty boys in a class is a finite population. A population with an infinite number of members is known as an infinite population; the population of pressures at various points in the atmosphere is an example.
A part or small section selected from the population is called a sample, and the process of such selection is called sampling. Sampling is resorted to when it is impossible to enumerate the whole population, when enumeration is too costly in terms of time and money, or when the uncertainty inherent in sampling is more than compensated for by the possibility of errors in complete enumeration. To serve a useful purpose, sampling should be unbiased and representative.
The aim of the theory of sampling is to get as much information as possible, ideally the whole of the information about the population from which the sample has been drawn. In particular, given the form of the parent population we would like to estimate the parameters of the population or specify the limits within which the population parameters are expected to lie with a specified degree of confidence.
It is, however, to be clearly understood that the logic of the theory of sampling is the logic of induction, that is we pass from particular (i.e., sample) to general (i.e., population) and hence all results will have to be expressed in terms of probability.
The fundamental assumption underlying most of the theory of sampling is random sampling which consists in selecting the individuals from the population in such a way that each individual of the population has the same chance of being selected.
Population and Sample Statistics
Definition: In a finite population of N values $X_1, X_2, \ldots, X_N$ of a population characteristic X, the population mean $\mu$ is defined as
$\mu = \frac{1}{N}\sum_{i=1}^{N} X_i$
and the population standard deviation $\sigma$ is defined as
$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (X_i - \mu)^2}.$
Definition: If a sample of n values $x_1, x_2, \ldots, x_n$ is taken from a population, the sample mean $\bar{x}$ is defined as
$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$
Sampling Distribution of Sample Mean
When different random samples are drawn and the sample mean or standard deviation is computed, the computed statistics will not, in general, be the same for all samples. Consider an artificial example where the population has four units 1, 2, 3, 4 possessing the values 2, 3, 4, 6 for the study variable. If samples of size 2 are drawn without replacement, there are 6 possible samples. The possible samples with their sample means are given below:

Different possible samples of size 2 without replacement
Sample (units):  (1,2)  (1,3)  (1,4)  (2,3)  (2,4)  (3,4)
Sample values:   (2,3)  (2,4)  (2,6)  (3,4)  (3,6)  (4,6)
Sample mean:      2.5    3.0    4.0    3.5    4.5    5.0
Though the sample means differ from sample to sample, the average of the sample means is 3.75, which equals the population mean. The variance of the sample means is 0.73.
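The enumeration above is small enough to verify directly; a sketch using the standard library:

```python
# All samples of size 2 drawn without replacement from the values {2, 3, 4, 6}.
from itertools import combinations
from statistics import mean, pvariance

values = [2, 3, 4, 6]
sample_means = [mean(s) for s in combinations(values, 2)]
print(sample_means)                       # [2.5, 3, 4, 3.5, 4.5, 5]
print(mean(sample_means))                 # 3.75, equal to the population mean
print(round(pvariance(sample_means), 2))  # 0.73
```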
Theorem: If a random variable X is normally distributed with mean $\mu$ and standard deviation $\sigma$, and a simple random sample of size n is drawn, then the sample average is normally distributed (for all sample sizes n) with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$.
Central limit theorem: Let $x_1, x_2, \ldots, x_n$ be a simple random sample of size n drawn from an infinite population with finite mean $\mu$ and standard deviation $\sigma$. Then the sample mean $\bar{x}$ has a limiting distribution that is normal, with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$.
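A simulation sketch of the theorem, drawing from a deliberately non-normal (exponential) population; all sizes are illustrative choices.

```python
# Central limit theorem: means of n = 50 exponential(1) observations are
# approximately normal with mean 1 and standard deviation 1/sqrt(50).
import numpy as np

rng = np.random.default_rng(42)
mu = sigma = 1.0                    # exponential(1): mean = sd = 1
n, reps = 50, 20_000

means = rng.exponential(mu, size=(reps, n)).mean(axis=1)
print(means.mean())                       # close to mu = 1
print(means.std(), sigma / np.sqrt(n))    # both close to 0.1414
```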
Chi-square Distribution
Definition: A random variable X is said to have a chi-square ($\chi^2$) distribution with n degrees of freedom if its probability density function (p.d.f.) is
$f(x) = \frac{1}{2^{n/2}\,\Gamma(n/2)}\, x^{n/2-1} e^{-x/2}, \quad x > 0.$
If samples of size n are drawn repeatedly from a normal population with variance $\sigma^2$, and the sample variance $s^2$ is computed for each sample, we obtain values of a statistic $\chi^2$. The distribution of the random variable
$\chi^2 = \frac{(n-1)s^2}{\sigma^2}$
is referred to as the $\chi^2$ distribution with n − 1 degrees of freedom.
Let $\alpha$ be a probability and let X have a $\chi^2$ distribution with $\nu$ degrees of freedom; then $\chi^2_\alpha(\nu)$ is the value such that $P\left(X > \chi^2_\alpha(\nu)\right) = \alpha$.
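The percentage points defined here can be computed with scipy.stats.chi2; for one degree of freedom the results match Table 2's row for n = 1.

```python
# Upper-tail chi-square points: the value exceeded with probability P, df = 1.
from scipy.stats import chi2

for P in (0.99, 0.95, 0.50, 0.30, 0.20, 0.10, 0.05, 0.01):
    print(f"P = {P}: chi2 = {chi2.ppf(1 - P, 1):.4f}")
# 0.0002, 0.0039, 0.4549, 1.0742, 1.6424, 2.7055, 3.8415, 6.6349
```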
Properties of $\chi^2$ Variate
t Distribution
Definition: If Z is a standard normal variate and $\chi^2$ is an independent chi-square variate with $\nu$ degrees of freedom, then $T = Z/\sqrt{\chi^2/\nu}$ has a t distribution with $\nu$ degrees of freedom. The distribution of T is completely determined by the number $\nu$. The values of $t_\alpha(\nu)$ are given in Table 3. The graph of the probability density function of T is symmetric with respect to the vertical axis t = 0. For the t-distribution,
Mean = 0, Variance = $\nu/(\nu-2)$ for $\nu > 2$.
When T has a t distribution with $\nu$ degrees of freedom, $t_\alpha(\nu)$ is the upper 100$\alpha$ percent point of the t distribution with $\nu$ degrees of freedom.
For example, let T have a t-distribution with 7 degrees of freedom; the required percentage points can then be read from Table 3. Similarly, let T have a t distribution with a variance of 5/4 ($\nu$ = 10); its percentage points are again obtained from Table 3.
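The t-distribution facts used above can be verified with scipy.stats.t; the 0.05 level below is an illustrative choice.

```python
# t distribution: variance v/(v-2), and upper percentage points from ppf.
from scipy.stats import t

print(t.var(10))                 # 10/8 = 1.25 = 5/4, as stated above
print(round(t.ppf(0.95, 7), 3))  # upper 5% point with 7 df: 1.895
```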
F Distribution
One of the most important distributions in applied statistics is the F distribution, defined as the ratio of two independent chi-square variates, each divided by its respective degrees of freedom:
$F = \frac{\chi_1^2/\nu_1}{\chi_2^2/\nu_2},$
where $\chi_1^2$ is a value of a chi-square distribution with $\nu_1$ degrees of freedom and $\chi_2^2$ is a value of a chi-square distribution with $\nu_2$ degrees of freedom. The p.d.f. of the F distribution has the mathematical form
$h(f) = \frac{\Gamma\!\left(\frac{\nu_1+\nu_2}{2}\right)\left(\nu_1/\nu_2\right)^{\nu_1/2}}{\Gamma\!\left(\frac{\nu_1}{2}\right)\Gamma\!\left(\frac{\nu_2}{2}\right)} \cdot \frac{f^{\nu_1/2-1}}{\left(1+\nu_1 f/\nu_2\right)^{(\nu_1+\nu_2)/2}}, \quad f > 0,$
and the distribution of all possible f values is called the F distribution. The number of degrees of freedom associated with the sample variance in the numerator is stated first, followed by the number of degrees of freedom associated with the sample variance in the denominator. Thus the curve of the F distribution depends not only on the two parameters $\nu_1$ and $\nu_2$ but also on the order in which we state them; once these two values are given, we can identify the curve.
Let $f_\alpha$ be the f value above which we find an area equal to $\alpha$. Table 4 gives values of $f_\alpha$ only for $\alpha$ = 0.05 and for various combinations of the degrees of freedom $\nu_1$ and $\nu_2$. Hence the f value with 6 and 10 degrees of freedom, leaving an area of 0.05 to the right, is $f_{0.05}$ = 3.22.
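The table lookup can be confirmed with scipy.stats.f, which also illustrates that the order of the degrees of freedom matters:

```python
# f_0.05 with (6, 10) degrees of freedom is the 95th percentile of F(6, 10).
from scipy.stats import f

print(round(f.ppf(0.95, 6, 10), 2))   # 3.22, matching Table 4
print(round(f.ppf(0.95, 10, 6), 2))   # 4.06: reversing the order changes it
```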
The F-distribution is applied primarily in the analysis of variance, where we wish to test the equality of several means simultaneously. F-distribution is also used to make inferences concerning the variance of two normal populations.
Table 1
The Normal Probability Integral or Area under the Normal Curve
Z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 .0000 .0040 .0080 .0120 .0159 .0199 .0239 .0279 .0319 .0359
0.1 .0398 .0438 .0478 .0517 .0557 .0596 .0636 .0675 .0714 .0753
0.2 .0793 .0832 .0871 .0910 .0948 .0987 .1026 .1064 .1103 .1141
0.3 .1179 .1217 .1255 .1293 .1331 .1368 .1406 .1443 .1480 .1517
0.4 .1554 .1591 .1628 .1664 .1700 .1736 .1772 .1808 .1844 .1879
0.5 .1915 .1950 .1985 .2019 .2054 .2088 .2123 .2157 .2190 .2224
0.6 .2257 .2291 .2324 .2357 .2389 .2422 .2454 .2486 .2518 .2549
0.7 .2580 .2611 .2642 .2673 .2704 .2734 .2764 .2794 .2823 .2852
0.8 .2881 .2910 .2939 .2967 .2995 .3023 .3051 .3078 .3106 .3133
0.9 .3159 .3186 .3212 .3238 .3264 .3289 .3315 .3340 .3365 .3389
1.0 .3413 .3438 .3461 .3485 .3508 .3531 .3554 .3577 .3599 .3621
1.1 .3643 .3665 .3686 .3708 .3729 .3749 .3770 .3790 .3810 .3830
1.2 .3849 .3869 .3888 .3907 .3925 .3944 .3962 .3980 .3997 .4015
1.3 .4032 .4049 .4066 .4082 .4099 .4115 .4131 .4147 .4162 .4177
1.4 .4192 .4207 .4222 .4236 .4251 .4265 .4279 .4292 .4306 .4319
1.5 .4332 .4345 .4357 .4370 .4382 .4394 .4406 .4418 .4430 .4441
1.6 .4452 .4463 .4474 .4485 .4495 .4505 .4515 .4525 .4535 .4545
1.7 .4554 .4564 .4573 .4582 .4591 .4599 .4608 .4616 .4625 .4633
1.8 .4641 .4649 .4656 .4664 .4671 .4678 .4686 .4693 .4699 .4706
1.9 .4713 .4719 .4726 .4732 .4738 .4744 .4750 .4756 .4762 .4767
2.0 .4772 .4778 .4783 .4788 .4793 .4798 .4803 .4808 .4812 .4817
2.1 .4821 .4826 .4830 .4834 .4838 .4842 .4846 .4850 .4854 .4857
2.2 .4861 .4865 .4868 .4871 .4875 .4878 .4881 .4884 .4887 .4890
2.3 .4893 .4896 .4898 .4901 .4904 .4906 .4909 .4911 .4913 .4916
2.4 .4918 .4920 .4922 .4925 .4927 .4929 .4931 .4932 .4934 .4936
2.5 .4938 .4940 .4941 .4943 .4945 .4946 .4948 .4949 .4951 .4952
2.6 .4953 .4955 .4956 .4957 .4959 .4960 .4961 .4962 .4963 .4964
2.7 .4965 .4966 .4967 .4968 .4969 .4970 .4971 .4972 .4973 .4974
2.8 .4974 .4975 .4976 .4977 .4977 .4978 .4979 .4980 .4980 .4981
2.9 .4981 .4982 .4983 .4983 .4984 .4984 .4985 .4985 .4986 .4986
3.0 .4986 .4987 .4987 .4988 .4988 .4989 .4989 .4989 .4990 .4990
3.1 .4990 .4991 .4991 .4991 .4992 .4992 .4992 .4992 .4993 .4993
Table 2
Values of $\chi^2$ with probability P of being exceeded in random sampling; n = degrees of freedom

n    P=0.99   P=0.95   P=0.50   P=0.30   P=0.20   P=0.10   P=0.05   P=0.01
1    0.0002   0.004    0.46     1.07     1.64     2.71     3.84     6.64
Table 3
Values of |t| with probability P of being exceeded in random sampling; v = degrees of freedom