File: analysing_one_sample.rst

Analysing one sample
--------------------

First, we create some random variables. As an example, we take a sample from
the Student t distribution. (A seed could be passed to `rvs` via the
`random_state` keyword to make each run reproducible; since we do not set one
here, the outputs below are marked ``# random``.)

    >>> import numpy as np
    >>> import scipy.stats as stats
    >>> x = stats.t.rvs(10, size=1000)

Here, we set the required shape parameter of the t distribution, which
in statistics corresponds to the degrees of freedom, to 10. Using size=1000 means
that our sample consists of 1000 independently drawn (pseudo) random numbers.
Since we did not specify the keyword arguments `loc` and `scale`, those are
set to their default values zero and one.
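
For illustration, the effect of `loc` and `scale`, a shift and a rescaling of
the standardized distribution, shows up directly in the theoretical moments;
a minimal sketch:

    >>> # loc shifts the mean; the variance is scale**2 times that of the
    >>> # standardized distribution (here 4 * 1.25 = 5)
    >>> m2, v2 = stats.t.stats(10, loc=3, scale=2, moments='mv')
    >>> print('mean = %4.2f, variance = %4.2f' % (m2, v2))
    mean = 3.00, variance = 5.00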

Descriptive statistics
^^^^^^^^^^^^^^^^^^^^^^

`x` is a numpy array, and we have direct access to all array methods, e.g.,

    >>> print(x.min())   # equivalent to np.min(x)
    -3.78975572422  # random
    >>> print(x.max())   # equivalent to np.max(x)
    5.26327732981  # random
    >>> print(x.mean())  # equivalent to np.mean(x)
    0.0140610663985  # random
    >>> print(x.var())   # equivalent to np.var(x)
    1.28899386208  # random

How do the sample properties compare to their theoretical counterparts?

    >>> m, v, s, k = stats.t.stats(10, moments='mvsk')
    >>> n, (smin, smax), sm, sv, ss, sk = stats.describe(x)

    >>> sstr = '%-14s mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'
    >>> print(sstr % ('distribution:', m, v, s, k))
    distribution:  mean = 0.0000, variance = 1.2500, skew = 0.0000, kurtosis = 1.0000  # random
    >>> print(sstr % ('sample:', sm, sv, ss, sk))
    sample:        mean = 0.0141, variance = 1.2903, skew = 0.2165, kurtosis = 1.0556  # random

Note: `stats.describe` uses the unbiased estimator for the variance (ddof=1),
while `np.var` by default computes the biased estimator (ddof=0).
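
We can reconcile the two by passing the appropriate delta degrees of freedom
to `np.var`; a quick check (`sv` is the sample variance returned by
`stats.describe` above):

    >>> print(np.allclose(x.var(ddof=1), sv))
    True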


For our sample, the sample statistics differ by a small amount from
their theoretical counterparts.


T-test and KS-test
^^^^^^^^^^^^^^^^^^

We can use the t-test to test whether the mean of our sample differs
in a statistically significant way from the theoretical expectation.

    >>> print('t-statistic = %6.3f pvalue = %6.4f' %  stats.ttest_1samp(x, m))
    t-statistic =  0.391 pvalue = 0.6955  # random

The p-value is 0.7; this means that at a significance level of, for
example, 10%, we cannot reject the hypothesis that the sample mean
is equal to zero, the expectation of the standard t-distribution.


As an exercise, we can also calculate the t-test directly, without using
the provided function; this should give us the same answer, and so it does:

    >>> tt = (sm-m)/np.sqrt(sv/float(n))  # t-statistic for mean
    >>> pval = stats.t.sf(np.abs(tt), n-1)*2  # two-sided pvalue = Prob(abs(t)>tt)
    >>> print('t-statistic = %6.3f pvalue = %6.4f' % (tt, pval))
    t-statistic =  0.391 pvalue = 0.6955  # random

The Kolmogorov-Smirnov test can be used to test the hypothesis that
the sample comes from the standard t-distribution:

    >>> print('KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 't', (10,)))
    KS-statistic D =  0.016 pvalue = 0.9571  # random
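
Equivalently, the cdf of a frozen distribution can be passed instead of the
distribution name and the tuple of shape parameters; this computes the same
test:

    >>> d, p = stats.kstest(x, stats.t(10).cdf)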

Again, the p-value is high enough that we cannot reject the
hypothesis that the random sample really is distributed according to the
t-distribution. In real applications, we don't know what the
underlying distribution is. If we perform the Kolmogorov-Smirnov
test of our sample against the standard normal distribution, then we
also cannot reject the hypothesis that our sample was generated by the
normal distribution given that, in this example, the p-value is almost 40%.

    >>> print('KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 'norm'))
    KS-statistic D =  0.028 pvalue = 0.3918  # random

However, the standard normal distribution has a variance of 1, while our
sample has a variance of 1.29. If we standardize our sample and test it
against the normal distribution, then the p-value is again large enough
that we cannot reject the hypothesis that the sample came from the
normal distribution.

    >>> d, pval = stats.kstest((x-x.mean())/x.std(), 'norm')
    >>> print('KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval))
    KS-statistic D =  0.032 pvalue = 0.2397  # random

Note: The Kolmogorov-Smirnov test assumes that we test against a
distribution with given parameters. Since, in the last case, we
estimated mean and variance, this assumption is violated, and the
distribution of the test statistic, on which the p-value is based, is
not correct.
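
One way to account for the estimated parameters is a Monte Carlo version of
the test. Recent SciPy versions provide this as `stats.goodness_of_fit`,
which fits the free parameters and simulates the null distribution of the
statistic; a minimal sketch:

    >>> # fits loc and scale of the normal distribution, then compares the
    >>> # observed KS statistic against a simulated null distribution
    >>> res = stats.goodness_of_fit(stats.norm, x, statistic='ks')

The resulting `res.pvalue` accounts for the fitting step and will typically
be smaller than the p-value of the plain KS test above.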

Tails of the distribution
^^^^^^^^^^^^^^^^^^^^^^^^^

Finally, we can check the upper tail of the distribution. We can use
the percent point function ppf, which is the inverse of the cdf
function, to obtain the critical values, or, more directly, we can use
the inverse of the survival function, isf:

    >>> crit01, crit05, crit10 = stats.t.ppf([1-0.01, 1-0.05, 1-0.10], 10)
    >>> print('critical values from ppf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f' % (crit01, crit05, crit10))
    critical values from ppf at 1%, 5% and 10%   2.7638   1.8125   1.3722
    >>> print('critical values from isf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f' % tuple(stats.t.isf([0.01,0.05,0.10],10)))
    critical values from isf at 1%, 5% and 10%   2.7638   1.8125   1.3722
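
As a sanity check, evaluating the survival function at these critical values
recovers the tail probabilities:

    >>> print('%8.4f %8.4f %8.4f' % tuple(stats.t.sf([crit01, crit05, crit10], 10)))
      0.0100   0.0500   0.1000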

    >>> freq01 = np.sum(x>crit01) / float(n) * 100
    >>> freq05 = np.sum(x>crit05) / float(n) * 100
    >>> freq10 = np.sum(x>crit10) / float(n) * 100
    >>> print('sample %%-frequency at 1%%, 5%% and 10%% tail %8.4f %8.4f %8.4f' % (freq01, freq05, freq10))
    sample %-frequency at 1%, 5% and 10% tail   1.4000   5.8000  10.5000  # random

In all three cases, our sample has more weight in the upper tail than the
underlying distribution.
We can briefly check a larger sample to see if we get a closer match. In this
case, the empirical frequency is quite close to the theoretical probability,
but if we repeat this several times, the fluctuations are still pretty large.

    >>> freq05l = np.sum(stats.t.rvs(10, size=10000) > crit05) / 10000.0 * 100
    >>> print('larger sample %%-frequency at 5%% tail %8.4f' % freq05l)
    larger sample %-frequency at 5% tail   4.8000  # random
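
To get a feeling for the size of these fluctuations, we can, for example,
draw several independent samples and collect the tail percentages; the
entries of `freqs` will scatter around the theoretical 5%:

    >>> freqs = [np.sum(stats.t.rvs(10, size=10000) > crit05) / 100.0
    ...          for _ in range(5)]  # 5 tail percentages, each from n=10000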

We can also compare it with the tail of the normal distribution, which
has less weight in the tails:

    >>> print('tail prob. of normal at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f' %
    ...       tuple(stats.norm.sf([crit01, crit05, crit10])*100))
    tail prob. of normal at 1%, 5% and 10%   0.2857   3.4957   8.5003

The chisquare test can be used to test whether, for a finite number of bins,
the observed frequencies differ significantly from the probabilities of the
hypothesized distribution.

    >>> quantiles = [0.0, 0.01, 0.05, 0.1, 1-0.10, 1-0.05, 1-0.01, 1.0]
    >>> crit = stats.t.ppf(quantiles, 10)
    >>> crit
    array([       -inf, -2.76376946, -1.81246112, -1.37218364,  1.37218364,
            1.81246112,  2.76376946,         inf])
    >>> n_sample = x.size
    >>> freqcount = np.histogram(x, bins=crit)[0]
    >>> tprob = np.diff(quantiles)
    >>> nprob = np.diff(stats.norm.cdf(crit))
    >>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
    >>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
    >>> print('chisquare for t:      chi2 = %6.2f pvalue = %6.4f' % (tch, tpval))
    chisquare for t:      chi2 =  2.30 pvalue = 0.8901  # random
    >>> print('chisquare for normal: chi2 = %6.2f pvalue = %6.4f' % (nch, npval))
    chisquare for normal: chi2 = 64.60 pvalue = 0.0000  # random

We see that the standard normal distribution is clearly rejected, while the
standard t-distribution cannot be rejected. Since the variance of our sample
differs from both standard distributions, we can again redo the test taking
the estimate for scale and location into account.

The `fit` method of the distributions can be used to estimate the parameters
of the distribution, and the test is then repeated using the probabilities of
the estimated distribution:

    >>> tdof, tloc, tscale = stats.t.fit(x)
    >>> nloc, nscale = stats.norm.fit(x)
    >>> tprob = np.diff(stats.t.cdf(crit, tdof, loc=tloc, scale=tscale))
    >>> nprob = np.diff(stats.norm.cdf(crit, loc=nloc, scale=nscale))
    >>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
    >>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
    >>> print('chisquare for t:      chi2 = %6.2f pvalue = %6.4f' % (tch, tpval))
    chisquare for t:      chi2 =  1.58 pvalue = 0.9542  # random
    >>> print('chisquare for normal: chi2 = %6.2f pvalue = %6.4f' % (nch, npval))
    chisquare for normal: chi2 = 11.08 pvalue = 0.0858  # random

Taking account of the estimated parameters, we can still reject the
hypothesis that our sample came from a normal distribution (at the 10%
level), but again, with a p-value of 0.95, we cannot reject the
t-distribution.


Special tests for normal distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since the normal distribution is the most common distribution in statistics,
there are several additional functions available to test whether a sample
could have been drawn from a normal distribution.

First, we can test if skew and kurtosis of our sample differ significantly from
those of a normal distribution:

    >>> print('normal skewtest teststat = %6.3f pvalue = %6.4f' % stats.skewtest(x))
    normal skewtest teststat =  2.785 pvalue = 0.0054  # random
    >>> print('normal kurtosistest teststat = %6.3f pvalue = %6.4f' % stats.kurtosistest(x))
    normal kurtosistest teststat =  4.757 pvalue = 0.0000  # random

These two tests are combined in the normality test:

    >>> print('normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(x))
    normaltest teststat = 30.379 pvalue = 0.0000  # random
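
In SciPy's implementation, the `normaltest` statistic is the sum of the
squared z-statistics of the two individual tests; a quick check:

    >>> zs, _ = stats.skewtest(x)
    >>> zk, _ = stats.kurtosistest(x)
    >>> k2, _ = stats.normaltest(x)
    >>> print(np.allclose(k2, zs**2 + zk**2))
    True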

In all three tests, the p-values are very low and we can reject the hypothesis
that our sample has the skew and kurtosis of the normal distribution.

Since skew and kurtosis of our sample are based on central moments, we get
exactly the same results if we test the standardized sample:

    >>> print('normaltest teststat = %6.3f pvalue = %6.4f' %
    ...       stats.normaltest((x-x.mean())/x.std()))
    normaltest teststat = 30.379 pvalue = 0.0000  # random

Because normality is rejected so strongly, we can check whether the
normaltest gives reasonable results for other cases:

    >>> print('normaltest teststat = %6.3f pvalue = %6.4f' %
    ...       stats.normaltest(stats.t.rvs(10, size=100)))
    normaltest teststat =  4.698 pvalue = 0.0955  # random
    >>> print('normaltest teststat = %6.3f pvalue = %6.4f' %
    ...       stats.normaltest(stats.norm.rvs(size=1000)))
    normaltest teststat =  0.613 pvalue = 0.7361  # random

When testing for normality of a small sample of t-distributed observations
and a large sample of normally distributed observations, in neither case can
we reject the null hypothesis that the sample comes from a normal
distribution. In the first case, this is because the test is not powerful
enough to distinguish a t-distributed from a normally distributed random
variable in a small sample.
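
Other normality tests are also available in `scipy.stats`, for example the
Shapiro-Wilk test (`stats.shapiro`) and the Anderson-Darling test
(`stats.anderson`); a minimal sketch of the Shapiro-Wilk test on our sample:

    >>> w, pval = stats.shapiro(x)

As with `normaltest`, a small `pval` would lead us to reject the hypothesis
that the sample came from a normal distribution.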