<?sdop
font_main="11,0,serif"
foot_length="0"
head_length="12"
margin_bottom="40"
margin_left_recto="20"
margin_left_verso="20"
page_line_width="540"
page_full_length="780"
?>
<literallayout class="monospaced">
MAINTENANCE README FOR PCRE
===========================
The files in the "maint" directory of the PCRE source contain data, scripts,
and programs that are used for the maintenance of PCRE, but which do not form
part of the PCRE distribution tarballs. This document describes these files and
also contains some notes for maintainers. Its contents are:
Files in the maint directory
Updating to a new Unicode release
Preparing for a PCRE release
Making a PCRE release
Future ideas (wish list)
Files in the maint directory
============================
---------------- This file is now OBSOLETE and no longer used ----------------
Builducptable A Perl script that creates the contents of the ucptable.h file
from two Unicode data files, which themselves are downloaded
from the Unicode web site. Run this script in the "maint"
directory.
---------------- This file is now OBSOLETE and no longer used ----------------
GenerateUtt.py A Python script to generate part of the pcre_tables.c file
that contains Unicode script names in a long string with
offsets, which is tedious to maintain by hand.
ManyConfigTests A shell script that runs "configure, make, test" a number of
times with different configuration settings.
MultiStage2.py A Python script that generates the file pcre_ucd.c from three
Unicode data tables, which are themselves downloaded from the
Unicode web site. Run this script in the "maint" directory.
The generated file contains the tables for a 2-stage lookup
of Unicode properties.
pcre_chartables.c.non-standard
This is a set of character tables that came from a Windows
system. It has characters greater than 128 that are set as
spaces, amongst other things. I kept it so that it can be
used for testing from time to time.
README This file.
Unicode.tables The files in this directory, DerivedGeneralCategory.txt,
Scripts.txt and UnicodeData.txt, were downloaded from the
Unicode web site. They contain information about Unicode
characters and scripts.
ucptest.c A short C program for testing the Unicode property macros
that do lookups in the pcre_ucd.c data, mainly useful after
rebuilding the Unicode property table. Compile and run this in
the "maint" directory (see comments at its head).
ucptestdata A directory containing two files, testinput1 and testoutput1,
to use in conjunction with the ucptest program.
utf8.c A short, freestanding C program for converting a Unicode code
point into a sequence of bytes in the UTF-8 encoding, and vice
versa. If its argument is a hex number such as 0x1234, it
outputs a list of the equivalent UTF-8 bytes. If its argument
is a sequence of concatenated UTF-8 bytes (e.g. e188b4) it
treats them as a UTF-8 character and outputs the equivalent
code point in hex.
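For example, "./utf8 0x1234" outputs the bytes e1 88 b4, and
"./utf8 e188b4" outputs 0x1234 (illustrative formatting; the program's
actual output may differ in detail). As background, here is a minimal
standalone sketch of the code-point-to-UTF-8 direction; the function
name is invented for illustration and is not necessarily what utf8.c
itself uses:

  #include <stdio.h>

  /* Encode one code point (up to U+10FFFF) as UTF-8; returns the number
  of bytes written (1-4), or 0 if the value is out of range. */

  static int encode_utf8(unsigned int cp, unsigned char *buf)
  {
  if (cp < 0x80) { buf[0] = (unsigned char)cp; return 1; }
  if (cp < 0x800)
    {
    buf[0] = (unsigned char)(0xc0 | (cp >> 6));
    buf[1] = (unsigned char)(0x80 | (cp & 0x3f));
    return 2;
    }
  if (cp < 0x10000)
    {
    buf[0] = (unsigned char)(0xe0 | (cp >> 12));
    buf[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3f));
    buf[2] = (unsigned char)(0x80 | (cp & 0x3f));
    return 3;
    }
  if (cp < 0x110000)
    {
    buf[0] = (unsigned char)(0xf0 | (cp >> 18));
    buf[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3f));
    buf[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3f));
    buf[3] = (unsigned char)(0x80 | (cp & 0x3f));
    return 4;
    }
  return 0;
  }

  int main(void)
  {
  unsigned char buf[4];
  int i, n = encode_utf8(0x1234, buf);
  for (i = 0; i < n; i++) printf("%02x ", buf[i]);  /* prints "e1 88 b4" */
  printf("\n");
  return 0;
  }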
</literallayout>
<literallayout class="monospaced">
Updating to a new Unicode release
=================================
When there is a new release of Unicode, the files in Unicode.tables must be
refreshed from the web site. If the new version of Unicode adds new character
scripts, the source file ucp.h and both the MultiStage2.py and the
GenerateUtt.py scripts must be edited to add the new names. Then MultiStage2.py
can be run to generate a new version of pcre_ucd.c, and GenerateUtt.py can be
run to generate the tricky tables for inclusion in pcre_tables.c.
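For example, run in the "maint" directory, the regeneration is something like
this (the precise invocations and output destinations are documented in the
comments at the head of each script, so check there; the redirection shown
here is an assumption):

  ./MultiStage2.py >../pcre_ucd.c
  ./GenerateUtt.py           (paste its output into ../pcre_tables.c)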
If MultiStage2.py gives the error "ValueError: list.index(x): x not in list",
the cause is usually a missing (or misspelt) name in the list of scripts. I
couldn't find a straightforward list of scripts on the Unicode site, but
there's a useful Wikipedia page that lists them and notes the Unicode version
in which they were introduced:
http://en.wikipedia.org/wiki/Unicode_scripts#Table_of_Unicode_scripts
The ucptest program can be compiled and used to check that the new tables in
pcre_ucd.c work properly, using the data files in ucptestdata to check a number
of test characters. The source file ucptest.c must be updated whenever new
Unicode script names are added.
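For example, after compiling ucptest as described in the comments at its head,
a check along these lines ("testtry" is just a scratch file name):

  ./ucptest <ucptestdata/testinput1 >testtry
  diff testtry ucptestdata/testoutput1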
Note also that both the pcresyntax.3 and pcrepattern.3 man pages contain lists
of Unicode script names.
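As background on what MultiStage2.py generates: a two-stage table splits each
code point into a block number and an offset within the block, and blocks
with identical contents are stored only once, which keeps the tables small.
Here is a self-contained toy version of the lookup (the names, block size,
and record contents are invented for illustration; pcre_ucd.c uses much
larger tables and richer records):

  #include <stdio.h>
  #include <stdint.h>

  #define BLOCK_SIZE 4   /* toy value; the real tables use a larger block */

  typedef struct { const char *script; } ucd_record;

  static const ucd_record records[] = { {"Latin"}, {"Greek"} };

  /* stage1: one entry per block of code points -> which stage2 block */
  static const uint8_t stage1[] = { 0, 1, 0, 1 };  /* blocks 0 and 2 shared */

  /* stage2: BLOCK_SIZE entries per distinct block -> index into records */
  static const uint8_t stage2[] = {
    0, 0, 0, 0,   /* block shared by code points 0-3 and 8-11 */
    0, 1, 1, 0,   /* block for code points 4-7 and 12-15 */
  };

  static const ucd_record *get_ucd(unsigned int cp)
  {
  unsigned int block = stage1[cp / BLOCK_SIZE];
  return &records[stage2[block * BLOCK_SIZE + cp % BLOCK_SIZE]];
  }

  int main(void)
  {
  unsigned int cp;
  for (cp = 0; cp < 16; cp++)
    printf("U+%04X %s\n", cp, get_ucd(cp)->script);
  return 0;
  }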
</literallayout>
<literallayout class="monospaced">
Preparing for a PCRE release
============================
This section contains a checklist that I consult before building a
distribution for a new release.
. Ensure that the version number and version date are correct in configure.ac.
. If new build options have been added, ensure that they are added to the CMake
files as well as to the autoconf files.
. Run ./autogen.sh to ensure everything is up-to-date.
. Compile and test with many different config options, and combinations of
options. The maint/ManyConfigTests script now encapsulates this testing.
. Run perltest.pl on the test data for tests 1, 4, 6, and 11; an example
invocation is sketched after this list. The output should match the PCRE
test output, apart from the version identification at the
start of each test. The other tests are not Perl-compatible (they use various
PCRE-specific features or options).
. Test with valgrind by running "RunTest valgrind". There is also "RunGrepTest
valgrind", though that takes quite a long time.
. It is possible to test with the emulated memmove() function by undefining
HAVE_MEMMOVE and HAVE_BCOPY in config.h, though I do not do this often. You
may see a number of "pcre_memmove defined but not used" warnings for the
modules in which there is no call to memmove(). These can be ignored.
. Documentation: check AUTHORS, COPYING, ChangeLog (check version and date),
INSTALL, LICENCE, NEWS (check version and date), NON-UNIX-USE, and README.
Many of these won't need changing, but over the long term things do change.
. I used to test new releases myself on a number of different operating
systems, using different compilers as well. For example, on Solaris it is
helpful to test using Sun's cc compiler as a change from gcc. Adding
-xarch=v9 to the cc options does a 64-bit test, but it also needs -S 64 for
pcretest to increase the stack size for test 2. Since I retired I can no
longer do this, but instead I rely on putting out release candidates for
folks on the pcre-dev list to test.
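For the perltest.pl comparison mentioned above, the run for test 1 is
something like this (the exact argument convention is described at the head
of perltest.pl, so check there; only the version identification lines should
differ in the comparison):

  perl perltest.pl testdata/testinput1 testtry
  diff testtry testdata/testoutput1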
</literallayout>
<literallayout class="monospaced">
Making a PCRE release
=====================
Run PrepareRelease and commit the files that it changes (it removes trailing
spaces). The first thing this script does is to run CheckMan on the man pages;
if it finds any markup errors, it reports them and then aborts.
Once PrepareRelease has run cleanly, run "make distcheck" to create the tarballs
and the zipball. Double-check with "svn status", then create an SVN tagged
copy:
svn copy svn://vcs.exim.org/pcre/code/trunk \
svn://vcs.exim.org/pcre/code/tags/pcre-8.xx
Don't forget to update Freshmeat when the new release is out, and to tell
webmaster@pcre.org and the mailing list. Also, update the list of version
numbers in Bugzilla (edit products).
</literallayout>
<literallayout class="monospaced">
Future ideas (wish list)
========================
This section records a list of ideas so that they do not get forgotten. They
vary enormously in their usefulness and potential for implementation. Some are
very sensible; some are rather wacky. Some have been on this list for years;
others are relatively new.
. Optimization
There are always ideas for new optimizations so as to speed up pattern
matching. Most of them try to save work by recognizing a non-match without
having to scan all the possibilities. These are some that I've recorded:
* /((A{0,5}){0,5}){0,5}(something complex)/ on a non-matching string is very
slow, though Perl is fast. Can we speed up somehow? Convert to {0,125}?
OTOH, this is pathological - the user could easily fix it.
* Turn ={4} into ==== ? (for speed). I once did an experiment; it seemed to
have little effect, and maybe made things worse.
* "Ends with literal string" - note that a single character doesn't gain much
over the existing "required byte" (reqbyte) feature that just remembers one
byte.
* These probably need to go in pcre_study():
o Remember an initial string rather than just 1 char?
o A required byte from alternatives - not just the last char, but an
earlier one if common to all alternatives.
o Friedl contains other ideas.
* pcre_study() does not set initial byte flags for Unicode property types
such as \p; I don't know how much benefit there would be in, for example,
setting the bits for 0-9 and all bytes >= 0xC0 when a pattern starts with
\p{N}.
* There is scope for more "auto-possessifying" in connection with \p and \P.
. If Perl gets to a consistent state over the settings of capturing sub-
patterns inside repeats, see if we can match it. One example of the
difference is the matching of /(main(O)?)+/ against mainOmain, where PCRE
leaves $2 set. In Perl, it's unset. Changing this in PCRE will be very hard
because I think it needs much more state to be remembered.
. Perl 6 will be a revolution. Is it a revolution too far for PCRE?
. Unicode
* There has been a request for direct support of 16-bit characters and
UTF-16 (Bugzilla #1049). However, since Unicode is moving beyond purely
16-bit characters, is this worth it at all? One possible way of handling
16-bit characters would be to "load" them in the same way that UTF-8
characters are loaded. Another possibility is to provide a set of
translation functions, and build an index during translation so that the
returned offsets can automatically be translated (using the index) after a
match.
* A different approach to Unicode might be to use a typedef to do everything
in unsigned shorts instead of unsigned chars. Actually, we'd have to have a
new typedef to distinguish data from bits of compiled pattern that are in
bytes, I think. There would need to be conversion functions in and out. I
don't think this is particularly trivial - and anyway, Unicode now has
characters that need more than 16 bits, so is this at all sensible? I
suspect not.
. Allow errorptr and erroroffset to be NULL. I don't like this idea.
. Line endings:
* Option to use NUL as a line terminator in subject strings. This could now
be done relatively easily since the extension to support LF, CR, and CRLF.
If it is done, a suitable option for pcregrep is also required.
. Option to provide the pattern with a length instead of with a NUL terminator.
This affects quite a few places in the code and is not trivial.
</literallayout>