<?xml version="1.0" encoding="ISO-8859-1"?>
<document>
<head>
<name>Source Code and Self-Documentation</name>
<doc-version>$Date: 2003/05/04 06:40:13 $</doc-version>
<author>Matt Albrecht</author>
</head>
<body>
<P>
The XP movement posits that the system's design documentation is the source
code itself. Keeping external documentation synchronized with the system it
documents requires far more discipline than most programmers have.
</P>
<P>
So why not put the documentation in the code? That's what JavaDoc is for -
to document the structure of the system as it currently exists.
</P>
<P>
But why stop there? A project usually requires many components (such as a bug
tracker) that are disjoint from the source code. Many of these artifacts depend
upon dynamic analysis of the system, rather than upon static analysis such as
JavaDoc and Lint. The perfect candidates for this attention are tests: every
time a formal build is performed, they can generate dynamic analysis of the
system, such as pass/fail statistics.
</P>
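<P>
As a minimal sketch of that idea (JUnit 3.x is assumed, and the class names
here are hypothetical stand-ins rather than part of any existing build), a
formal build could run the test suite programmatically and print pass/fail
statistics alongside the rest of the generated documentation:
</P>
<PRE>
import junit.framework.TestCase;
import junit.framework.TestResult;
import junit.framework.TestSuite;

// Build-time reporter sketch: run a suite and emit pass/fail statistics.
// SampleTest stands in for the project's real tests.
public class BuildReportMain {
    public static class SampleTest extends TestCase {
        public void testSomething() {
            assertTrue( true );
        }
    }

    public static void main( String[] args ) {
        TestSuite suite = new TestSuite( SampleTest.class );
        TestResult result = new TestResult();
        suite.run( result );

        System.out.println( "Tests run: " + result.runCount() );
        System.out.println( "Failures:  " + result.failureCount() );
        System.out.println( "Errors:    " + result.errorCount() );
    }
}
</PRE>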
<!-- this might be better off in its own document -->
<H2>Issue Traceability</H2>
<P>
JUnit has been a boon to Java; it is a testing framework that has been accepted
nearly universally in project lifecycles. When combined with the XP model of
debugging (when a bug is discovered, you write a test that simulates the bug,
then fix the bug), we have a method for asserting that all bugs marked as
"fixed" are indeed fixed, since their JUnit tests pass.
</P>
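<P>
For example (a sketch only: the LineParser class and bug number 142 are
hypothetical), a bug report of "empty lines are dropped" would first be
captured as a failing JUnit test, and the fix is complete only when that test
passes:
</P>
<PRE>
import junit.framework.TestCase;

// Regression test written when the (hypothetical) bug 142 was reported.
// It fails until the bug is fixed, then guards against the bug reappearing.
public class Bug142Test extends TestCase {

    // Stand-in for the real unit under test.
    static class LineParser {
        String[] parse( String text ) {
            return text.split( "\n" );
        }
    }

    public void testEmptyLinesAreKept() {
        String[] lines = new LineParser().parse( "first\n\nthird" );
        assertEquals( "bug 142: the empty line must not be dropped",
                3, lines.length );
    }
}
</PRE>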
<P>
The XP way of thinking states that the code is the system design documentation.
Right now, it views unit tests as documenting how a unit can be used properly
and improperly.
</P>
<P>
Also, bug tracking software has become a cheap and easy addition to any
development cycle, due in part to the Open Source movement. Even though the
various products differ in many ways, the fundamentals are the same (sketched
in code after the list):
<OL>
<LI>bugs are entered into the system</LI>
<LI>new bugs are assigned a unique identifier</LI>
<LI>each bug may be given categorization meta-data</LI>
<LI>each bug allows for user comments to be added</LI>
</OL>
</P>
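<P>
A minimal sketch of those fundamentals as data (the Bug class and its methods
are hypothetical, not tied to any particular tracker):
</P>
<PRE>
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical minimal bug record reflecting the fundamentals above.
public class Bug {
    private final int id;                           // unique identifier
    private final Map metaData = new HashMap();     // categorization meta-data
    private final List comments = new ArrayList();  // user comments

    public Bug( int id ) {
        this.id = id;
    }

    public int getId() {
        return this.id;
    }

    public void categorize( String key, String value ) {
        this.metaData.put( key, value );
    }

    public void addComment( String comment ) {
        this.comments.add( comment );
    }
}
</PRE>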
<section>The Next Good Thing</section>
<P>
What happens when we combine these? We can directly relate a bug to the test
cases which exercise it, so the test results automatically document the bug's
status. If a bug is marked as "fixed" but its corresponding test case fails,
then the system can reopen the bug. If the test case is no longer valid (that
is, the unit it tests disappears), then the bug is no longer valid either.
</P>
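<P>
One cheap way to record that relationship (a sketch under an assumed naming
convention; nothing here is an existing API) is to embed the bug identifier in
the test name, so the build can map each test result back to the bug it covers:
</P>
<PRE>
// Assumed convention: a test verifying bug 142 carries the bug number in its
// name, e.g. testBug142EmptyLinesAreKept().  The build extracts the mapping
// from the test results.
public class BugIdExtractor {
    private static final String PREFIX = "testBug";

    // Returns the bug id encoded in a test name, or -1 when the test is not
    // tied to any bug.
    public int extractBugId( String testName ) {
        if ( !testName.startsWith( PREFIX ) ) {
            return -1;
        }
        int end = PREFIX.length();
        while ( testName.length() > end ) {
            if ( !Character.isDigit( testName.charAt( end ) ) ) {
                break;
            }
            end++;
        }
        if ( end == PREFIX.length() ) {
            return -1;
        }
        return Integer.parseInt( testName.substring( PREFIX.length(), end ) );
    }
}
</PRE>
<P>
Any other mapping (a properties file, test meta-data, a table in the tracker)
works just as well, as long as the build can read it when the test results
come in.
</P>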
<P>
If one is so inclined, the tests can now document the work done on the code.
When a requirement is created, the code to fulfil that requirement does not yet
exist, so it is a bug in the current system: enter it into the bug tracking
software. When the developer completes the work for that requirement, the
developer should also have tests which exercise the work, and those initial
tests should be marked as verifying that the requirement has been fulfilled.
</P>
<P>
So now each bug falls into one of four general states: no tests exist for the
bug; the bug has one or more tests, but not all of them pass; all of the bug's
tests pass; and the bug is validated or closed. We can't let the software
automatically close bugs whose tests pass, since the tests may not be robust
enough, but we can have the software reopen issues when a once-passing bug's
tests suddenly begin to fail.
</P>
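<P>
A sketch of those state rules (the class and state names are hypothetical): the
reporting step derives a bug's test-driven state and decides whether to reopen
it, but never closes it on its own:
</P>
<PRE>
// Hypothetical derivation of the four general states described above, based
// on the test results associated with a single bug.
public class BugStateCalculator {
    public static final String NO_TESTS      = "no tests exist";
    public static final String TESTS_FAILING = "has tests, not all pass";
    public static final String TESTS_PASSING = "all tests pass";
    public static final String VALIDATED     = "validated / closed";

    public String stateFor( int testCount, int failureCount,
            boolean humanValidated ) {
        if ( testCount == 0 ) {
            return NO_TESTS;
        }
        if ( failureCount > 0 ) {
            return TESTS_FAILING;
        }
        return humanValidated ? VALIDATED : TESTS_PASSING;
    }

    // The automated side may reopen a "fixed" bug whose tests regress, but
    // closing a bug remains a human decision.
    public boolean shouldReopen( boolean markedFixed, int failureCount ) {
        if ( !markedFixed ) {
            return false;
        }
        return failureCount > 0;
    }
}
</PRE>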
<P>
The tests-to-bug relationship can be many-to-many, but ideally it should be
one or more tests to one bug.
</P>
<section>Caveats</section>
<P>
These advantages don't come for free. Proper organization of the issues, as
well as of the code, may prove difficult to maintain.
</P>
<P>
The bug tracking interface used to report on a test-run should be configured
to reference only a single release. Also, the source code should be properly
branched between releases if their bugs are different, and each release should
be updated with the correct bug information.
</P>
<P>
Just as you refactor your code to make tests easier to write, you should
maintain your bug list to match what's really going on in the source. If you
find maintaining multiple software versions with bug-tracking IDs too
difficult, it may be time to rethink the current SCM methodology and move to a
more scalable solution.
</P>
<!-- this should probably be in its own document, too -->
<H2>Test Case Step Documentation</H2>
<P>
While nothing can replace a solid test plan for a project, documenting the
individual test cases can be tedious. If the goal is to have automated tests
for each test case, then, as with most software documentation, it becomes too
easy for the automated tests to fall out of sync with the test documents.
</P>
<P>
Let's use the same techniques listed above to document our test cases. Since
the documentation would be generated along with the test execution, it can take
the form of a test report, listing each step executed, the date of execution,
the duration of the run, the test results, and archives of the test artifacts.
</P>
<P>
While the test code would declare the process for the test, the execution of
the test would generate the test case run results.
</P>
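<P>
A minimal sketch of that split (the StepRecorder class and its report format
are assumptions, not an existing API): the test records each step as it
executes, and the recorder turns the run into the report described above.
</P>
<PRE>
import java.util.ArrayList;
import java.util.Date;
import java.util.Iterator;
import java.util.List;

// Hypothetical step recorder: the test declares its steps as it runs, and the
// recorder emits a simple report of the execution.
public class StepRecorder {
    private final List steps = new ArrayList();
    private final long started = System.currentTimeMillis();

    public void step( String description ) {
        this.steps.add( description );
    }

    public void report( String testName, boolean passed ) {
        long duration = System.currentTimeMillis() - this.started;
        System.out.println( "Test case: " + testName );
        System.out.println( "Executed:  " + new Date( this.started ) );
        System.out.println( "Duration:  " + duration + " ms" );
        System.out.println( "Result:    " + ( passed ? "PASS" : "FAIL" ) );
        for ( Iterator it = this.steps.iterator(); it.hasNext(); ) {
            System.out.println( "  step: " + it.next() );
        }
    }
}
</PRE>
<P>
A test would call step() before each significant action and report() when it
finishes, so the recorded steps always match the code that actually ran.
</P>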
</body>
</document>