Here's a checker program that runs libgretl against the NIST reference
datasets for linear regression.  For details on these datasets, see

http://www.nist.gov/itl/div898/strd/general/main.html

The checker validates libgretl and (if available) the gretl multiple
precision plugin.

There's also a tester for the Pseudo-Random Number Generator (PRNG) in
gretl -- see below.

And there's a sub-dir named nist-nls with a rig for checking gretl's
nonlinear regression code against the NIST reference datasets.

NIST Results with libgretl
==========================

If all is well, there should be exactly one failure: gretl will not be
able to obtain estimates for the Filip model (a 10th-degree polynomial
which exhibits very high collinearity).  Any other failure should be
cause for concern.

The checker program reads the "certified values" and data out of each
NIST file, runs a regression using libgretl, and compares the libgretl
estimates with the certified values, starting at a precision of 9
significant figures.  If there is any disagreement, the comparison is
repeated using 8 significant figures, and so on.  If the figures do
not agree to a precision of at least 6 figures, the test is deemed to
have failed and an error message is printed.
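
For illustration, here is a minimal sketch of such a comparison scheme
in C (the helper names are made up for this example and are not the
actual nistcheck code):

  /* Sketch only: check agreement to n significant figures by
     formatting both values with printf's %g conversion. */
  #include <stdio.h>
  #include <string.h>

  static int agree_sigfigs (double a, double b, int n)
  {
      char sa[64], sb[64];

      snprintf(sa, sizeof sa, "%#.*g", n, a);
      snprintf(sb, sizeof sb, "%#.*g", n, b);
      return strcmp(sa, sb) == 0;
  }

  /* Start at 9 significant figures and relax down to 6; below
     6 figures the test counts as failed. */
  static int check_value (double est, double certified)
  {
      int digits;

      for (digits = 9; digits >= 6; digits--) {
          if (agree_sigfigs(est, certified, digits)) {
              return digits;  /* precision attained */
          }
      }
      return 0;  /* failure */
  }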

You can get more detail on gretl's performance in these tests by
running the 'nistcheck' program in verbose mode (-v flag) or very
verbose mode (-vv).
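For example:

  ./nistcheck -vv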

NIST Results with multiple precision plugin
===========================================

If the GNU Multiple Precision arithmetic library (GMP) was detected
was configured, and gretl's GMP plugin was built, then each NIST test
will be repeated using the plugin.  In this case the Filip model
should be estimated without difficulty.  All tests should give results
that agree with the NIST certified values to at least 12 significant
figures.

Note that the test data sets Wampler1 and Wampler2 give, by
construction, an exact polynomial fit.  This is difficult to
reproduce.  If you run the checker program in verbose mode you will
see that instead of standard errors of precisely zero, values around
10^{-39} are reported.  You can, however, drive these to zero by
raising the precision used in the calculations.  By default, gretl's
multiple precision plugin assigns 256 bits to the representation of
each floating-point number, which is enough for most purposes.  You
can increase this value by setting the environment variable
GRETL_MP_BITS.  On my IBM Thinkpad A21m (Pentium III), I can get zero
standard errors on Wampler1 and Wampler2 by running the checker thus:

GRETL_MP_BITS=4096 ./nistcheck -v
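
As an aside, here is a minimal sketch of how a GMP-based program can
honor such a setting (the function below is illustrative, not gretl's
actual plugin code):

  /* Sketch only: read GRETL_MP_BITS and set the default GMP
     floating-point precision accordingly. */
  #include <stdlib.h>
  #include <gmp.h>

  static void set_mp_precision (void)
  {
      unsigned long bits = 256;  /* assumed default */
      const char *s = getenv("GRETL_MP_BITS");

      if (s != NULL && atol(s) > 0) {
          bits = (unsigned long) atol(s);
      }
      /* mpf_t variables initialized after this call carry at
         least 'bits' bits of mantissa */
      mpf_set_default_prec(bits);
  }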


Check on gretl's PRNG
=====================

Also included here is a slightly modified version of George
Marsaglia's "diehard" tester for Pseudo-Random Number Generators.
(The modifications do not concern the substance of Marsaglia's test
suite; they are purely to do with convenience when running his program
as a check on gretl.)

To run this set of tests, do "make randcheck".  You may have to edit
the Makefile if the "diehard" program is not built correctly (perhaps
modifying the libraries that are required to link what was originally
a FORTRAN program).  If the test programs build OK, you should find in
this directory a large (around 11 MB) file of pseudo-random unsigned
integers generated via gretl's PRNG, and a file named "gretl_rand.txt"
containing a summary of the analysis.  (You should also see the
results zipping by on standard output, probably too fast to read.)
Basically, you are looking for p-values from the various tests
distributed uniformly on [0,1).  If you see blocks of p-values of 0.0
or 1.0, something is seriously wrong.
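
For reference, diehard reads a flat binary file of 32-bit unsigned
integers.  Here is a minimal sketch of producing such a file, with a
placeholder generator standing in for gretl's PRNG (the file name and
count are arbitrary):

  /* Sketch only: write raw 32-bit unsigned integers to a binary
     file in the form diehard expects. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>

  int main (void)
  {
      FILE *fp = fopen("rand.bin", "wb");
      long i;

      if (fp == NULL) {
          return 1;
      }
      srand(1);  /* placeholder PRNG, not gretl's */
      for (i = 0; i < 3000000; i++) {  /* about 11 MB of data */
          uint32_t u = ((uint32_t) rand() << 16) ^ (uint32_t) rand();
          fwrite(&u, sizeof u, 1, fp);
      }
      fclose(fp);
      return 0;
  }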

Doing "make clean" in here will delete the big file of random numbers,
along with the auxiliary programs.

For more details on the tests see marsaglia.txt or diehard.txt, or
visit the web site from which these tests derive at

http://stat.fsu.edu/pub/diehard/

Allin Cottrell
March 2003