File: README.org

#+STARTUP: showall


* Setting up tests for a new language pair
To add these test scripts to your language pair, go into the language
pair directory and run:

#+BEGIN_SRC sh
git clone https://github.com/unhammer/apertium-wiki-tests t
# edit t/config.sh.in and save as t/config.sh
# --depth=files adds the directory and its immediate files to SVN,
# but skips subdirectories such as t/.git
svn add --depth=files t
#+END_SRC
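
What goes into t/config.sh depends on the template in t/config.sh.in;
the variable names below are only placeholders to show the idea, so
check the comments in your checkout for the real ones.

#+BEGIN_SRC sh
# Hypothetical t/config.sh -- the real variable names come from t/config.sh.in
PAIR=swe-nor                      # name of the language pair (placeholder)
WIKI_PREFIX=apertium-swe-nor      # prefix of the wiki test pages (placeholder)
#+END_SRC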

You should keep your tests on wiki pages named after your language pair, e.g.
http://wiki.apertium.org/wiki/apertium-sme-smj/Regression_tests
and
http://wiki.apertium.org/wiki/apertium-sme-smj/Pending_tests
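
The scripts scrape those pages, which on the Apertium wiki are
typically written with a test template; an entry looks roughly like
the line below, but the exact template name and argument order may
differ, so copy the format from an existing pair's test pages.

#+BEGIN_EXAMPLE
* {{test|swe|Jag bor i Oslo.|Jeg bor i Oslo.}}
#+END_EXAMPLE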


* Running the tests

To run the tests from your language pair, assuming it's been set up as
shown above, do

#+BEGIN_SRC sh
  t/update-latest
#+END_SRC

This will overwrite the files named t/latest-pending.results and
t/latest-regression.results. You can view the differences with

#+BEGIN_SRC sh
  svn diff
#+END_SRC
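
If the new results look right, commit them so the pass/fail history
ends up in version control (the paths and message here are just an
example):

#+BEGIN_SRC sh
svn commit -m "update test results" t/latest-pending.results t/latest-regression.results
#+END_SRC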

Test results are kept in SVN; that way we don't have to keep moving
tests back and forth between "Pending" and "Regression" on the wiki
whenever we pass a new test (or fail an old one), and we get a nice
log of our progress.
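
That log is just the regular SVN history of the results files, e.g.
(with a Subversion new enough to support =--diff=):

#+BEGIN_SRC sh
svn log --diff t/latest-regression.results | less
#+END_SRC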

To run just the regression tests or just the pending tests, use
t/regression-tests or t/pending-tests. Pass the -f argument to either
script to see only failed regression tests or passed pending tests, e.g.

#+BEGIN_SRC sh
t/regression-tests -f
#+END_SRC
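
Similarly, to see only the pending tests that now pass:

#+BEGIN_SRC sh
t/pending-tests -f
#+END_SRC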