<?xml version="1.0" encoding="UTF-8"?>
<book xmlns:xi="http://www.w3.org/2001/XInclude">
<bookinfo>
<title>Synopsis Tutorial</title>
<releaseinfo>Version 0.12</releaseinfo>
<author>
<firstname>Stefan</firstname>
<surname>Seefeld</surname>
</author>
</bookinfo>
<chapter id="intro">
<title>Introduction</title>
<para>
Synopsis is a source code introspection tool. It provides parsers for a variety
of programming languages (C, C++, Python, IDL), and generates internal representations
of varying granularity. The only <emphasis>stable</emphasis> representation, which
is currently used (among other things) to generate documentation, is an Abstract Semantic Graph.
</para>
<para>
This tutorial focuses on the ASG and the concepts around it. Other representations
are presently being worked on, notably in relation to the C++ parser. To learn more
about those (Parse Tree, Symbol Table, etc.) see the
<ulink url="../DevGuide/index.html">Developer's Guide</ulink>.
</para>
<section id="inspecting">
<title>Inspecting Code</title>
<!-- Talk about the problem domain:
code documentation, software metrics, etc. -->
<para></para>
</section>
<section id="ir">
<title>Internal Representations</title>
<para>Synopsis parses source code into a variety of <emphasis>internal representations</emphasis> (IRs),
which then are manipulated in various ways, before some output (such as a cross-referenced API
documentation) is generated by an appropriate <emphasis>formatter</emphasis>.</para>
<para>At the core of Synopsis is a set of programming-language-independent IRs which
all <emphasis>parser frontends</emphasis> generate. One of these representations is the
<emphasis>Abstract Semantic Graph</emphasis>, which stores declarations and their relationships.
Another is the <emphasis>SXR</emphasis> Symbol Table, which stores information about symbols and their use
in the source code. Other representations exist (such as the C++ Parse Tree), but they are not yet stored
in a publicly accessible form.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/ir.svg" format="SVG" scale="80" />
</imageobject>
<imageobject>
<imagedata fileref="images/ir.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
<para>For details about the ASG, see <xref linkend="asg" />.</para>
<para>At this time, the C++ frontend's IRs (PTree, SymbolTable, etc.) are not yet accessible through python,
though they eventually will be, making it possible to use Synopsis as a source-to-source compiler. To learn
more about the evolving C & C++ parser and its IRs, see the
<ulink url="../DevGuide/index.html">Developer's Guide</ulink>.</para>
</section>
<section id="documenting">
<title>Documenting Source-Code</title>
<para>For source code, being read and understood by humans is at least as
important as being processed by a computer: humans have to maintain
the code, i.e. fix bugs, add features, etc.</para>
<para>Therefore, code is typically annotated in some form that adds
explanation where it isn't self-explanatory. While comments are often
used simply to disable the execution of a particular chunk of code, some
comments are specifically addressed to readers to explain what the
surrounding code does. While some languages (e.g. Python) have built-in
support for <emphasis>doc-strings</emphasis>, in other languages ordinary
comments are used.</para>
<para>Typically, such comments are marked up in a specific way to distinguish
documentation from ordinary comments. Further, the content of such comments
may itself contain markup for a particular output format (say, embedded HTML).</para>
<example><title>Typical C++ code documentation</title>
<caption>
<para>C++ may contain a mix of comments, some representing documentation.
</para>
</caption>
<screen>
//! A friendly function.
void greet()
{
// FIXME: Use gettext for i18n
std::cout &lt;&lt; "hello world !" &lt;&lt; std::endl;
}
</screen>
</example>
<para>In Synopsis all declarations may be annotated. C and C++ parsers,
for example, will store comments preceding a given declaration in
that declaration's <varname>annotations</varname> dictionary under the
key <constant>comments</constant>. Later these comments may be translated
into documentation (stored under the key <constant>doc</constant>), which
may be formatted once the final document is generated.</para>
<para>Translating comments into doc-strings involves the removal of comment
markers (such as the <code>//!</code> above), as well as the handling of any
processing instructions that may be embedded in the comments.</para>
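<para>As a purely illustrative sketch (plain Python dictionaries rather than the
actual ASG classes), here is how the annotations of the <function>greet</function>
declaration above evolve through this translation:</para>
<programlisting># Illustrative only: a plain-dictionary picture, not the real Synopsis API.

# What a C or C++ parser stores right after parsing, under the 'comments' key:
annotations = {'comments': ['//! A friendly function.']}

# What a comment translator later adds under the 'doc' key, with the
# comment marker ('//!') stripped, ready for the output formatter:
annotations['doc'] = 'A friendly function.'</programlisting>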
<para>For languages such as Python such a translation isn't necessary,
as the language has built-in support for documentation, and thus the
parser itself can generate the 'doc' annotations.</para>
<example><title>Python code documentation</title>
<caption>
<para>Python has support for documentation built into the language.</para>
</caption>
<screen>
>>> def greet():
...     """The greet function prints out a famous message."""
...     print 'hello world !'
...
>>> help(greet)
Help on function greet in module __main__:

greet()
    The greet function prints out a famous message.
</screen>
</example>
</section>
<section id="processing">
<title>The Synopsis Processing Pipeline</title>
<para>Synopsis provides a large number of <emphasis>processor</emphasis>
types that all generate or operate on data extracted from source code.
Parsers parse source code from a variety of languages, linkers combine
multiple IRs, resolving cross-references between symbols, and formatters
format the ASG into a variety of output media.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/cross-reference.svg" format="SVG" scale="80" />
</imageobject>
<imageobject>
<imagedata fileref="images/cross-reference.png" format="PNG" scale="80" />
</imageobject>
<caption>
<para>A typical processing-pipeline to generate API Documentation
with source-code cross-references.</para>
</caption>
</mediaobject>
<para>All these <type>Processor</type> types share a common design, to make
it easy to combine them into pipelines, and add custom processors. For more
documentation about this architecture, see <xref linkend="pipeline" />.</para>
</section>
</chapter>
<chapter id="using">
<title>Using the synopsis tool</title>
<para>In this chapter we explore how to generate documentation from
source code. We demonstrate how to use synopsis standalone as well as
in conjunction with existing build systems. Further, we will see how
to adapt synopsis to your coding and commenting style, and how to
generate the output in a format and style that fulfills your
needs.</para>
<section id="options">
<title>Option Handling</title>
<para>The synopsis tool combines three optional types of processors: parsers (specified with the
<option>-p</option> option), linker processors (specified with the <option>-l</option> option),
and formatters (specified with the <option>-f</option> option). If a parser is selected,
any input is interpreted as source files of the respective language. Otherwise it will
be read in as a stored IR. Similarly, if a formatter is selected, output is generated
according to the formatter. Otherwise it will contain a stored IR.</para>
<para>For all three main processors, arguments can be passed down using the
<option>-W</option> option. For example, to find out what parameters are available with the
<type>Cxx</type> parser, use the <option>-h</option> (<option>--help</option>) option:</para>
<screen>
$ synopsis -p Cxx -h
Parameters for processor 'Synopsis.Parsers.Cxx.Parser':
profile output profile data
cppflags list of preprocessor flags such as -I or -D
preprocess whether or not to preprocess the input
...
</screen>
<para>Then, to set the <varname>preprocess</varname> parameter, use either of:</para>
<programlisting>synopsis -p Cxx -Wp,--preprocess ...</programlisting>
<programlisting>synopsis -p Cxx -Wp,preprocess=True ...</programlisting>
<para>The first form expects an optional string argument, while the second form
expects a Python expression, thus allowing Python objects such as lists to be passed.
(But be careful to properly escape characters so the expression makes it through the
shell!)</para>
<para>Passing options via the command line has its limits, both in terms of
usability and in terms of the robustness of the interface (all data have to be
passed as strings!). Therefore, for tasks demanding more flexibility a
scripting interface is provided, which is discussed in the next chapter.
</para>
</section>
<section id="parsing">
<title>Parsing Source-code</title>
<para>Let's assume a simple header file, containing some declarations:</para>
<para>
<programlisting><xi:include href="examples/Paths/src/Path.h" parse="text"/>
</programlisting>
</para>
<para>Process this with
<programlisting>synopsis -p Cxx -f HTML -o Paths Path.h</programlisting>
to generate an HTML document in the directory specified with the
<option>-o</option> option, i.e. <filename>Paths</filename>.
</para>
<para>
The above represents the simplest way to use <command>synopsis</command>.
A simple command is used to parse a source-file and to generate a document
from it. The parser to be used is selected using the <option>-p</option>
option, and the formatter with the <option>-f</option> option.
</para>
<para>If no formatter is specified, synopsis dumps its
<link linkend="ir">internal representation</link> to the specified output file.
Similarly, if no parser is specified, the input is interpreted as an IR dump.
Thus, the processing can be split into multiple synopsis invocations.</para>
<para>
Each processor (including parsers and formatters) provides a number of
parameters that can be set from the command line. For example the Cxx parser
has a parameter <varname>base_path</varname> to specify a prefix to be stripped
off of file names as they are stored in synopsis' internal representation.
Parser-specific options can be given that are passed through to the parser
processor. To pass such an option, use the <code>-Wp,</code> prefix.
For example, to set the parser's <varname>base_path</varname> option, use
<programlisting>synopsis -p Cxx -Wp,--base-path=&lt;prefix&gt; -f HTML -o Paths Path.h</programlisting>
</para>
</section>
<section id="compiler-emulation">
<title>Emulating A Compiler</title>
<para>Whenever the code to be parsed includes <emphasis>system headers</emphasis>, the parser needs
to know about their location(s), and likely also about <emphasis>system macro</emphasis> definitions
that may be in effect. For example, parsing:</para>
<programlisting>
#include &lt;vector&gt;
#include &lt;string&gt;

typedef std::vector&lt;std::string&gt; option_list;
</programlisting>
<para>requires the parser to know where to find the <filename>vector</filename> and <filename>string</filename>
headers.</para>
<para>Synopsis will attempt to emulate a compiler for the current programming language. By default,
<userinput>synopsis -p Cxx</userinput> will try to locate <application>c++</application> or similar, to
query system flags. However, the compiler can be specified via the <option>--emulate-compiler</option> option,
e.g. <userinput>synopsis -p Cxx -Wp,--emulate-compiler=/usr/local/gcc4/bin/g++</userinput>.</para>
<para>All languages that use the <type>Cpp</type> processor to preprocess the input accept the
<option>emulate-compiler</option> argument, and pass it down to the <type>Cpp</type> parser.
See <xref linkend="cpp-parser"/> for a detailed discussion of this process.</para>
</section>
<section id="comments">
<title>Using Comments For Documentation</title>
<para>So far the generated document hasn't contained any of the text from
comments in the source code. To include it, the comments have to be translated
first. This translation consists of a filter that picks up a particular kind
of comment, for example only lines starting with "//.", or javadoc-style
comments such as "/**...*/", as well as a translator that converts the
comments into actual documentation, possibly interpreting some inline markup, such
as Javadoc or ReST.</para>
<para>The following source code snippet contains java-style comments, with
javadoc-style markup. Further, an embedded processing instruction requests that
some declarations be grouped.
<programlisting><xi:include href="examples/Paths/src/Bezier.h" parse="text"/>
</programlisting>
</para>
<para>
The right combination of comment processing options for this code would be:
<programlisting>synopsis -p Cxx --cfilter=java --translate=javadoc -lComments.Grouper ...</programlisting>
The <option>--cfilter</option> option lets you specify a filter to select
document comments, and the <option>--translate</option> option sets the kind of markup
to expect. The <option>-l</option> option is somewhat more generic. It is a <emphasis>linker</emphasis>
to which (almost) arbitrary post-processors can be attached. Here we pass the <type>Comments.Grouper</type>
processor, which injects <type>Group</type> nodes into the IR, causing the grouped declarations to
be documented together.
</para>
</section>
</chapter>
<chapter id="scripting">
<title>Scripting And Extending Synopsis</title>
<para>Often it isn't enough to provide textual options to the synopsis tool.
The processors that are at the core of the synopsis framework are highly
configurable. They can be passed simple string / integer / boolean type
parameters, but some of them are also composed of objects that could be
passed along as parameters.</para>
<para>While synopsis provides a lot of such building blocks already, you may
want to extend them with your own subclasses.</para>
<para>In all these cases scripting is a much more powerful way to let
synopsis do what you want. This chapter explains the basic design
of the framework, and demonstrates how to write scripts using the
built-in building blocks as well as user extensions.</para>
<section id="asg">
<title>The ASG</title>
<para>At the core of synopsis is a representation of
the source code to be analyzed called an Abstract Semantic
Graph (ASG). Language-specific syntax gets translated into
an abstract graph of statements, annotated with all the necessary
metadata to recover the important details during further processing.</para>
<para>At this time only one particular type of statement is translated
into an ASG: declarations. These can be declarations of types, functions,
variables, etc. Attached to each declaration is the set of comments
found in the source code immediately before it. It is thus possible
to provide other metadata (such as code documentation) as part of these
comments. A variety of comment processors exists to extract such metadata
from comments.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/asg.svg" format="SVG" scale="80" />
</imageobject>
<imageobject>
<imagedata fileref="images/asg.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
</section>
<section id="processor">
<title>The Processor class</title>
<!-- Talk about the Processor class design -->
<para>The Processor class is at the core of the Synopsis framework. It
is the basic building block out of which processing pipelines can be
composed.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/processor.svg" format="SVG" scale="80" />
</imageobject>
<imageobject>
<imagedata fileref="images/processor.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
<para>The requirement that processors can be composed into a pipeline
has some important consequences for their design. The <function>process</function> method takes
an <varname>ir</varname> argument, which it will operate on, and then return. It is this
<varname>ir</varname> that forms the backbone of the pipeline, as it is passed along from
one processor to the next. Additionally, parameters may be passed to the
processor, such as input and output.</para>
<programlisting>def process(self, ir, **keywords):
    self.set_parameters(keywords)
    self.ir = self.merge_input(ir)
    # do the work here...
    return self.output_and_return_ir()</programlisting>
<para>Depending on the nature of the processor, it may parse the input
file as source code, or simply read it in from a persistent state. In
any case, the result of reading the input is merged into the existing
ASG.</para>
<programlisting>def process(self, ir, **keywords):
    self.set_parameters(keywords)
    for file in self.input:
        self.ir = parse(ir, file)
    return self.output_and_return_ir()</programlisting>
<para>Similarly with the output: if an output parameter is defined, the
ir may be stored in that file before it is returned. Or, if the
processor is a formatter, the output parameter may indicate the file /
directory name to store the formatted output in.</para>
<programlisting>def process(self, ir, **keywords):
    self.set_parameters(keywords)
    self.ir = self.merge_input(ir)
    self.format(self.output)
    return self.ir</programlisting>
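<para>To make this concrete, here is a minimal sketch of a user-defined processor.
It relies only on the methods shown above; the import path of the <type>Processor</type>
base class and the <code>ir.asg.declarations</code> attribute access are assumptions and
may need adjusting for your installation:</para>
<programlisting># A minimal sketch, assuming the Processor base class is importable like this:
from Synopsis.Processor import Processor

class DeclarationCounter(Processor):
    """A hypothetical processor that counts top-level declarations."""

    def process(self, ir, **keywords):
        self.set_parameters(keywords)         # pick up 'input', 'output', ...
        self.ir = self.merge_input(ir)        # merge any stored IRs passed as input
        count = len(self.ir.asg.declarations) # assumption: IR exposes an 'asg' with 'declarations'
        print 'found %d top-level declarations' % count
        return self.output_and_return_ir()    # optionally persist, then pass the IR on</programlisting>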
</section>
<section id="pipeline">
<title>Composing A Pipeline</title>
<para>With such a design, processors can simply be chained together:</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/pipeline.svg" format="SVG" scale="80" />
</imageobject>
<imageobject>
<imagedata fileref="images/pipeline.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
<para>A parser creates an IR, which is passed to the linker (which creates
a table of contents on the fly), which in turn passes it further down to a
formatter.</para>
<programlisting>parser = ...
linker = ...
formatter = ...
ir = IR()
ir = parser.process(ir, input=['source.hh'])
ir = linker.process(ir)
ir = formatter.process(ir, output='html')</programlisting>
<para>To be a little more scalable, and to allow the use of
dependency-tracking build tools such as make, the intermediate IRs can
be persisted into files. The above pipeline is thus broken up into multiple
pipelines, where the 'output' parameter of the parser is used to
point to IR stores, and the 'input' parameter of the linker/formatter
pipeline then contains a list of these IR store files.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/pipelines.svg" format="SVG" scale="80" />
</imageobject>
<imageobject>
<imagedata fileref="images/pipelines.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
<para>Parse <filename>source1.hh</filename> and write the IR to <filename>source1.syn</filename>:</para>
<programlisting>parser.process(IR(), input = ['source1.hh'], output = 'source1.syn')</programlisting>
<para>Parse <filename>source2.hh</filename> and write the IR to <filename>source2.syn</filename>:</para>
<programlisting>parser.process(IR(), input = ['source2.hh'], output = 'source2.syn')</programlisting>
<para>Read in <filename>source1.syn</filename> and <filename>source2.syn</filename>, then link and format
into the <filename>html</filename> directory:</para>
<programlisting>formatter.process(linker.process(IR(), input = ['source1.syn', 'source2.syn']), output = 'html')</programlisting>
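<para>Putting the pieces together, a hedged sketch of such a two-stage script might look
as follows. The import statements are a plausible reading of the module names listed in
<xref linkend="processor-listing"/> rather than verified API; the import of <type>IR</type>
in particular is an assumption:</para>
<programlisting># A sketch only; import paths mirror the module names in the reference appendix.
from Synopsis.IR import IR                  # assumption: the IR class lives in Synopsis.IR
from Synopsis.Parsers import Cxx
from Synopsis.Processors import Linker
from Synopsis.Formatters import HTML

parser = Cxx.Parser()
linker = Linker()
formatter = HTML.Formatter()

# Stage 1: parse each header separately, persisting the IRs.
parser.process(IR(), input=['source1.hh'], output='source1.syn')
parser.process(IR(), input=['source2.hh'], output='source2.syn')

# Stage 2: link the stored IRs and format them into the 'html' directory.
ir = linker.process(IR(), input=['source1.syn', 'source2.syn'])
formatter.process(ir, output='html')</programlisting>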
</section>
<section id="script">
<title>Writing your own synopsis script</title>
<para>The synopsis framework provides a function <function>process</function>
that lets you declare and expose processors as commands so they can be
invoked from the command line:
<programlisting><xi:include href="examples/Paths/html/synopsis.py" parse="text"/>
</programlisting>
</para>
<para>With such a script <filename>synopsis.py</filename> it is possible
to call
<programlisting>python synopsis.py cxx_ssd --output=Bezier.syn Bezier.h
</programlisting>
to do the same as in <xref linkend="using"/>, but with much more
flexibility. Let's have a closer look at how this script works:</para>
<section id="importing">
<title>Importing all desired processors</title>
<para>As in every conventional Python script, the first thing to do is
to pull in all the definitions that are used later on, in our case
the definition of the <function>process</function> function, together
with a number of predefined processors.
</para>
</section>
<section id="composing">
<title>Composing new processors</title>
<para>As outlined in <xref linkend="pipeline"/>, processors can be
composed into pipelines, which are themselves new (composite) processors.
Synopsis provides a <type>Composite</type> type for convenient pipeline
construction. Its constructor takes a list of processors that the
process method will iterate over.
</para>
</section>
<section id="extending">
<title>Defining New Processors</title>
<para>New processors can be defined by deriving from <type>Processor</type>
or any of its subclasses. As outlined in <xref linkend="processor"/>,
they only have to respect the semantics of the <function>process</function>
method.</para>
</section>
<section id="process">
<title>Exposing The Commands</title>
<para>With all these new processors defined, they need to be made accessible
from the command line. That is done with the <function>process</function>
function. It sets up a dictionary of named processors, with which the script
can be invoked as
<programlisting>python synopsis.py joker
</programlisting>
which will invoke the joker's <function>process</function> method, with
any arguments that were provided passed as named values (keywords).
</para>
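<para>Since the actual <filename>synopsis.py</filename> above is included from the example
sources, here is a hedged, self-contained sketch of what such a script typically contains.
The import paths and the <varname>filter</varname> keyword are assumptions; consult
<xref linkend="processor-listing"/> for the authoritative names:</para>
<programlisting># A sketch only; import paths and keyword names marked below are assumptions.
from Synopsis.process import process       # assumption: the 'process' entry point
from Synopsis.Processor import Processor, Composite
from Synopsis.Parsers import Cxx
from Synopsis.Processors import Comments
from Synopsis.Formatters import HTML

class Joker(Processor):
    """A toy user extension; see the previous sections for the method semantics."""
    def process(self, ir, **keywords):
        self.set_parameters(keywords)
        self.ir = self.merge_input(ir)
        print 'joker was here'
        return self.output_and_return_ir()

# A parser pipeline for '//.'-style comments (the 'filter' keyword is an assumption).
cxx_ssd = Composite(Cxx.Parser(),
                    Comments.Translator(filter=Comments.SSDFilter()))

# Expose the processors by name, e.g. 'python synopsis.py cxx_ssd ...'.
process(cxx_ssd=cxx_ssd,
        joker=Joker(),
        html=HTML.Formatter())</programlisting>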
</section>
</section>
</chapter>
<chapter id="processors">
<title>Processor Design</title>
<section id="python-parser">
<title>The Python Parser</title>
<para>The Python parser expects Python source files as input, and compiles them into
an Abstract Semantic Graph. Note that directory names are valid input, too, if they
correspond to Python packages, i.e. have an <filename>__init__.py</filename> file in them.
At this time, this compilation is based purely on static
analysis (parsing), and no runtime-inspection of the code is involved.</para>
<para>This is obviously of limited use if objects change at runtime.</para>
<para>Docstrings are identified and attached to their corresponding objects.
If a <varname>docformat</varname> specifier is provided (either as a
<varname>__docformat__</varname> variable embedded in the Python source or via the
parser's <varname>default_docformat</varname> parameter), this format is used to
parse and format the given docstrings.</para>
<para>Here are the available Python-Parser parameters:</para>
<variablelist>
<varlistentry>
<term>primary_file_only</term>
<listitem>
<para>If False, imported modules are parsed in addition to the primary Python file,
provided they can be found.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>base_path</term>
<listitem>
<para>A prefix (directory) to strip off of the Python code filename.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>sxr_prefix</term>
<listitem>
<para>If this variable is defined, it points to a directory within which
the parser will store cross-referenced source code. This information may
be used to render the source code with cross-references during formatting.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>default_docformat</term>
<listitem>
<para>Specify the doc-string format for the given Python file. By default doc-strings
are interpreted as plaintext, though other popular markup formats exist, such as
ReStructuredText (<varname>rst</varname>) or JavaDoc (<varname>javadoc</varname>).
</para>
</listitem>
</varlistentry>
</variablelist>
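<para>A short sketch of how these parameters might be set from a script (the import path
follows the module name used in <xref linkend="processor-listing"/>; the values are
placeholders):</para>
<programlisting># A sketch only; parameter values are illustrative.
from Synopsis.Parsers import Python

parser = Python.Parser(primary_file_only=False,  # also descend into imported modules
                       base_path='src/',         # strip this prefix from stored file names
                       sxr_prefix='sxr',         # emit cross-reference data into ./sxr
                       default_docformat='rst')  # treat doc-strings as ReStructuredText

# parser.process(IR(), input=['mypackage'], output='mypackage.syn')  # IR as in the pipeline example</programlisting>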
</section>
<section id="idl-parser">
<title>The IDL Parser</title>
<para>The IDL parser parses CORBA IDL.</para>
</section>
<section id="cpp-parser">
<title>The Cpp Parser</title>
<para>The Cpp parser preprocesses IDL, C, and C++ files. Like any normal preprocessor,
it will generate a file suitable as input for a C or C++ parser, i.e. it
processes include and macro statements. However, it will store the encountered
preprocessor directives in the ASG for further analysis.</para>
<para>As the list of included files may grow rather large, two mechanisms exist
to restrict the number of files for which information is retained. The <type>primary_file_only</type>
parameter is used to indicate that only the top-level file being parsed should be included.
The <type>base_path</type> parameter, on the other hand, will restrict the number of files if
<type>primary_file_only</type> is set to <type>False</type>. In this case, the <type>base_path</type>
is used as a prefix, and only those files whose names start with that prefix are marked as
<type>main</type>.
</para>
<para>For each included file, a <type>SourceFile</type> object is created and added
to the parent's <type>Include</type> list. Further, all macro declarations, as well
as macro calls, are recorded. While most useful in conjunction with the C and Cxx processors,
these data can also be useful stand-alone. For example, consider a tool that reports file
dependencies based on <type>#include</type> statements. The Dot formatter (see <xref linkend="dot-formatter"/>)
can generate a file dependency graph from the Cpp processor output alone:
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/Files.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
<para>Whenever the code to be parsed includes <emphasis>system headers</emphasis>, the parser needs
to know about their location(s), and likely also about <emphasis>system macro</emphasis> definitions
that may be in effect.</para>
<para>The <type>Cpp</type> parser provides two parameters to specify this emulation process,
<varname>emulate_compiler</varname> and <varname>compiler_flags</varname>. To illustrate their use,
let us probe for the system flags that get generated:</para>
<screen>
<prompt>> </prompt>synopsis --probe -p Cpp -Wp,--emulate-compiler=g++
Compiler: g++
Flags:
Language: C++
Header search path:
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2
...
/usr/include
Macro definitions:
__STDC__=1
__cplusplus=1
...
</screen>
<para>Sometimes the compiler name alone isn't enough: some flags modify the header search path
or the predefined macros. For example, GCC can be instructed not to consider system headers at all:</para>
<screen>
<prompt>> </prompt>synopsis --probe -p Cpp -Wp,--emulate-compiler=g++ -Wp,compiler-flags=[\"-nostdinc\"]
Compiler: g++
Flags: -nostdinc
Language: C++
Header search path:
Macro definitions:
__STDC__=1
__cplusplus=1
...
</screen>
<para>Here, the set of predefined header search paths is empty. Note that the
<option>--compiler-flags</option> option (which, as you may remember, maps to the
<varname>compiler_flags</varname> processor parameter) expects a (Python) list.
Therefore, we use the form without the leading dashes, so we can pass Python code
as the argument (see <xref linkend="options"/> for details), with appropriate quoting.</para>
<para>For details about the parameters see <xref linkend="Cpp-Parser-ref"/>.</para>
</section>
<section id="cc-parser">
<title>The C Parser</title>
<para>The C parser parses C source code. If the <type>preprocess</type> parameter is set, it will
call the preprocessor first (see <xref linkend="cpp-parser"/>). It generates an ASG containing
all declarations.</para>
</section>
<section id="cxx-parser">
<title>The Cxx Parser</title>
<para>The Cxx parser parses C++. If the <type>preprocess</type> parameter is set, it will
call the preprocessor first (see <xref linkend="cpp-parser"/>). Its main purpose is to generate
an ASG containing all declarations. However, it can also store more detailed information about
the source code, to be used in conjunction with the HTML formatter to generate a cross-referenced
view of the code. The <type>sxr_prefix</type> parameter is used to indicate the directory within
which to store information about the source files being parsed.</para>
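<para>As a hedged sketch, setting these parameters from a script might look like this
(import path as listed in <xref linkend="processor-listing"/>; the values are placeholders):</para>
<programlisting># A sketch only; flag values are illustrative.
from Synopsis.Parsers import Cxx

parser = Cxx.Parser(preprocess=True,            # run the Cpp parser first
                    cppflags=['-I../include'],  # preprocessor flags, as listed by 'synopsis -p Cxx -h'
                    base_path='../include/',    # prefix stripped from stored file names
                    sxr_prefix='sxr')           # store cross-reference data for Source/XRef views</programlisting>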
</section>
<section id="linker">
<title>The Linker</title>
<para>The Linker recursively traverses the ASG using the Visitor
pattern, and replaces any duplicate types with their originals, and
removes duplicate declarations. References to the removed declarations
are replaced with a reference to the original.</para>
<para>There are many additional transformations that may be applied during
linking, such as the extraction of documentation strings from comments,
the filtering and renaming of symbols, the regrouping of declarations based on
special annotations, etc.</para>
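<para>A hedged sketch of attaching such a transformation to the linker from a script.
The assumption here, mirroring the <option>-l</option> option, is that sub-processors
can be passed to the <type>Linker</type> constructor:</para>
<programlisting># A sketch only; import paths and constructor arguments are assumptions.
from Synopsis.Processors import Linker, Comments

# Equivalent in spirit to '-l Comments.Grouper' on the command line:
linker = Linker(Comments.Grouper())  # honour '@group' processing instructions

# ir = linker.process(ir)</programlisting>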
</section>
<section id="comment-processors">
<title>Comment Processors</title>
<para>Comments are used mainly to annotate source code. These annotations may
consist of documentation, or may contain processing instructions to be parsed
by tools such as Synopsis.</para>
<para>Processing comments thus involves filtering out the relevant comments, parsing
their content and translating it into proper documentation strings, or performing other
required actions (such as ASG transformations).</para>
<para>Here are some examples, illustrating a possible comment-processing pipeline.</para>
<section id="comment-filters">
<title>Comment Filters</title>
<para>To distinguish comments containing documentation, it is advisable to use
some convention, such as a particular prefix:</para>
<programlisting>
//. Normalize a string.
std::string normalize(std::string const &amp;);
// float const pi;
//. Compute an area.
float area(float radius);
</programlisting>
<para>Using the <constant>ssd</constant> (read: slash-slash-dot) prefix filter
instructs Synopsis to preserve only those comments that are prefixed with
<code>//.</code>:</para>
<screen>synopsis -p Cxx --cfilter=ssd ...</screen>
<para>Synopsis provides a number of built-in comment filters for frequent / popular
prefixes. Here are some examples:</para>
<segmentedlist>
<segtitle>Comment prefix</segtitle><segtitle>Filter class</segtitle><segtitle>option name </segtitle>
<seglistitem><seg>//</seg><seg>SSFilter</seg><seg>ss</seg></seglistitem>
<seglistitem><seg>///</seg><seg>SSSFilter</seg><seg>sss</seg></seglistitem>
<seglistitem><seg>//.</seg><seg>SSDFilter</seg><seg>ssd</seg></seglistitem>
<seglistitem><seg>/*...*/</seg><seg>CFilter</seg><seg>c</seg></seglistitem>
<seglistitem><seg>/*!...*/</seg><seg>QtFilter</seg><seg>qt</seg></seglistitem>
<seglistitem><seg>/**...*/</seg><seg>JavaFilter</seg><seg>java</seg></seglistitem>
</segmentedlist>
</section>
<section id="comment-translators">
<title>Comment Translators</title>
<para>Once all irrelevant comments have been stripped off, the remainder needs to be
transformed into proper documentation. While the actual rendering can only happen
during output formatting (at which time the output medium and format are known), there are still
things that can be done at this stage: since in general it isn't possible to auto-detect
what kind of markup is used, a translator assists in mapping the stripped comment strings to
doc-strings, to which a <varname>markup</varname> specifier is attached. While this specifier
is arbitrary, the only two values supported by the HTML and DocBook formatters are
<varname>javadoc</varname> and <varname>rst</varname> (for ReStructuredText).</para>
<para>Note that this comment translation is only needed for some programming languages
(such as C, C++, and IDL); notably, Python provides a built-in facility to associate
doc-strings with declarations. (In addition, the doc-string markup can be expressed via the
special-purpose variable <varname>__docformat__</varname> embedded into the Python source code.)</para>
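<para>In a script, the equivalent of the <option>--cfilter</option> and <option>--translate</option>
options is to configure a comment translator directly. The following is a hedged sketch: the class
names come from the filter table above and from <xref linkend="processor-listing"/>, while the
<varname>filter</varname> and <varname>markup</varname> keyword names are assumptions:</para>
<programlisting># A sketch only; the 'filter' and 'markup' keyword names are assumptions.
from Synopsis.Processors import Comments

# Roughly equivalent to '--cfilter=java --translate=javadoc':
javadoc = Comments.Translator(filter=Comments.JavaFilter(), markup='javadoc')

# Roughly equivalent to '--cfilter=ssd --translate=rst':
rest = Comments.Translator(filter=Comments.SSDFilter(), markup='rst')</programlisting>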
</section>
<section id="comment-transformers">
<title>Transformers</title>
<para>In addition to the manipulation of the comments themselves, there are actions that may
be performed as a result of <emphasis>processing-instructions</emphasis> embedded into comments.
</para>
<para>For example, a <type>Grouper</type> transformer groups declarations together, based on
special syntax:</para>
<programlisting>
/** @group Manipulators {*/
/**
* Add a new control point.
* @param p A point
*/
void add_control_point(const Vertex &amp;);
/**
* Remove the control point at index i.
* @param i An index
*/
void remove_control_point(size_t i);
/** }*/
virtual void draw();
</programlisting>
<para>To process the above <code>@group</code> processing-instruction,
run <userinput>synopsis -p Cxx --cfilter=java -l Grouper ...</userinput></para>
</section>
</section>
<section id="dump-formatter">
<title>The Dump Formatter</title>
<para>The Dump formatter's main goal is to provide a format
that is as close to the ASG as possible, is easily browsable with the
naked eye, and provides the means to do validation or other
analysis.</para>
<para>It generates an XML tree that can be browsed in Mozilla (it
uses a stylesheet for convenient display), or analyzed
with special tools using XPath expressions.</para>
<para>It is currently used for all unit tests.</para>
</section>
<section id="docbook-formatter">
<title>The DocBook Formatter</title>
<para>The DocBook formatter generates a DocBook section from the given ASG.</para>
<para>Here are the most important parameters:</para>
<variablelist>
<varlistentry>
<term>title</term>
<listitem>
<para>The title to be used for the toplevel section.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>nested_modules</term>
<listitem>
<para>True if nested modules are to be formatted as nested sections;
if False, modules are flattened and formatted as sibling sections.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>generate_summary</term>
<listitem>
<para>If True, generate a <emphasis>summary</emphasis> section for each scope,
followed by a <emphasis>details</emphasis> section.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>with_inheritance_graph</term>
<listitem>
<para>If True, generate <constant>SVG</constant> and <constant>PNG</constant>
inheritance graphs for all classes. (use the <option>graph_color</option> option
to set the background color of the graph nodes)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>secondary_index_terms</term>
<listitem>
<para>If True, add <varname>secondary</varname> entries in
<varname>indexterms</varname>, with the
fully qualified names. This is useful for disambiguation when the same
unqualified-id is used in multiple scopes.</para>
</listitem>
</varlistentry>
</variablelist>
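<para>A hedged usage sketch, with parameter values chosen purely for illustration
(import path as in <xref linkend="processor-listing"/>):</para>
<programlisting># A sketch only; values are illustrative.
from Synopsis.Formatters import DocBook

formatter = DocBook.Formatter(title='Paths API Reference',
                              nested_modules=False,         # flatten modules into sibling sections
                              generate_summary=True,        # summary sections followed by details
                              with_inheritance_graph=True,  # emit SVG/PNG class graphs
                              secondary_index_terms=True)   # qualified names as secondary index entries

# formatter.process(ir, output='paths.xml')</programlisting>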
</section>
<section id="dot-formatter">
<title>The Dot Formatter</title>
<para>The Dot formatter can generate graphs for various types and output formats.
Among the supported output formats are <type>png</type>, <type>svg</type>, and
<type>html</type>.</para>
<para>A typical use is the generation of UML class (inheritance and aggregation)
diagrams:</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/Classes3.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
<para>But it can also be used to generate a graphical representation of file
inclusions:</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/Files.png" format="PNG" scale="80" />
</imageobject>
</mediaobject>
</section>
<section id="html-formatter">
<title>The HTML Formatter</title>
<para>The HTML formatter generates HTML output. It is designed
in a modular way, to let users customize in great detail how
the data is formatted. All output is organized by a set of
<emphasis>views</emphasis>, which highlight different aspects of the data.
Some views show the file / directory layout, others group declarations by
scope, or provide an annotated (and cross-referenced) source view.</para>
<para>By default the formatter generates its output using frames. The views
are formatter parameters. <varname>index</varname> is a list of views that
fill the upper-left index frame. <varname>detail</varname> is a list of
views for the lower-left detail frame, and <varname>content</varname>
sets all the views for the main content frame.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/html-screenshot.png" format="PNG"
align="center" scale="80" />
</imageobject>
</mediaobject>
<para>When the <varname>index</varname> and <varname>detail</varname> arguments
are empty lists, non-framed HTML will be generated.</para>
<para>Here are the most important <type>View</type> types:</para>
<variablelist>
<varlistentry>
<term>Scope</term>
<listitem>
<para>The most important view for documentation purposes is doubtless the
<type>Scope</type> view. It presents all declarations in a given scope,
together with a number of references to other views where appropriate.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>InheritanceGraph</term>
<listitem>
<para>A UML-like inheritance diagram for all classes.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>NameIndex</term>
<listitem>
<para>A global index of all declared names (macros, variables, types, ...)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Source</term>
<listitem>
<para>A cross-referenced view of a source file.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>XRef</term>
<listitem>
<para>A listing of symbols with links to their documentation, definition, and reference.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>FileDetails</term>
<listitem>
<para>Shows details about a given file, such as what other files are included, what declarations it contains, etc.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Directory</term>
<listitem>
<para>Presents a directory (of source files). This is typically used
in conjunction with the Source view above.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>FileTree</term>
<listitem>
<para>A javascript-based file tree view suitable for the index frame
for navigation.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ModuleTree</term>
<listitem>
<para>A javascript-based module tree view suitable for the index frame
for navigation.</para>
</listitem>
</varlistentry>
</variablelist>
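<para>A hedged sketch of a view configuration, using the <varname>index</varname>,
<varname>detail</varname>, and <varname>content</varname> parameters described above. The module
path of the view classes and their default constructors are assumptions; check
<xref linkend="processor-listing"/> for the exact names and parameters:</para>
<programlisting># A sketch only; the Views module path and default constructors are assumptions.
from Synopsis.Formatters import HTML
from Synopsis.Formatters.HTML import Views

formatter = HTML.Formatter(index=[Views.ModuleTree(), Views.FileTree()],  # navigation frames
                           detail=[Views.NameIndex()],
                           content=[Views.Scope(),
                                    Views.Source(),
                                    Views.InheritanceGraph()])

# Passing index=[] and detail=[] would produce non-framed output instead.</programlisting>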
</section>
<section id="sxr-formatter">
<title>The SXR Formatter</title>
<para>The SXR formatter is a variant of the HTML formatter. However, as its
focus is not so much documentation as code navigation, there are a number
of important differences. Its default set of views is different, and instead
of displaying listings of all identifiers on static html, it loads a database
of (typed) identifiers and provides an interface to query them.</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/sxr-screenshot.png" format="PNG"
align="center" scale="80" />
</imageobject>
</mediaobject>
<para>It is meant to be used with an HTTP server: either a general-purpose server
such as Apache in conjunction with the <emphasis>sxi.cgi</emphasis> script
that is part of Synopsis, or the <emphasis>sxr-server</emphasis>
program. The latter performs better, as the database is kept in-process,
whereas with sxi.cgi it needs to be reloaded on each query.</para>
</section>
</chapter>
<appendix id="executable">
<title>The synopsis executable and its program options</title>
<para>The synopsis executable is a little convenience frontend
to the larger Synopsis framework consisting of IR-related
types as well as <emphasis>Processor</emphasis> classes.</para>
<para>While the full power of synopsis is available through
scripting (see <xref linkend="scripting" />), it is possible
to quickly generate simple documentation by means of an
easy-to-use executable that is nothing more than a little
script with some extra command-line argument parsing.</para>
<para>This tool has three processor types it can call:</para>
<variablelist>
<varlistentry>
<term>Parser</term>
<listitem>
<para>A processor that will parse source code into an
internal abstract semantic graph (ASG). Various Parsers
have a variety of parameters to control how exactly
they do that.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Linker</term>
<listitem>
<para>A processor that removes duplicate symbols and
forward declarations, and applies any number of ASG
manipulations you want. The user typically specifies
which sub-processors the linker should load and run.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Formatter</term>
<listitem>
<para>A processor that generates some form of formatted
output from an existing ASG, typically html, docbook xml,
or class graphs. Other formatters exist to assist debugging,
such as a <type>List</type> formatter that prints specific
aspects of the IR to stdout, or a <type>Dump</type> formatter
that writes the IR to an xml file, useful for unit testing.</para>
</listitem>
</varlistentry>
</variablelist>
<para>You can run synopsis with a single processor, for example
to parse a C++ file <filename>source.hh</filename> and store
the ASG into a file <filename>source.syn</filename>, or you can
combine it directly with a linker and/or formatter to generate
the output you want in a single call.</para>
<para>While the document generation in a single call is convenient,
for larger projects it is much more sensible to integrate the
document generation into existing build systems and let the build
system itself manage the dependencies between the intermediate files
and the source files.</para>
<para>For example, a typical Makefile fragment that contains the rules
to generate documentation out of multiple source files may look like
this:</para>
<programlisting>
hdr := $(wildcard *.h)
syn := $(patsubst %.h, %.syn, $(hdr))

html: $(syn)
	synopsis -f HTML -o $@ $^

%.syn: %.h
	synopsis -p Cxx -I../include -o $@ $&lt;
</programlisting>
<para>Here is a listing of the most important available options:</para>
<variablelist>
<varlistentry>
<term>-h</term>
<term>--help</term>
<listitem>
<para>print out help message</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-V</term>
<term>--version</term>
<listitem>
<para>print out version info and exit</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-v</term>
<term>--verbose</term>
<listitem>
<para>operate verbosely</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-d</term>
<term>--debug</term>
<listitem>
<para>operate in debug mode</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-o</term>
<term>--output</term>
<listitem>
<para>output file / directory</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-p</term>
<term>--parser</term>
<listitem>
<para>select a parser</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-l</term>
<term>--link</term>
<listitem>
<para>link</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-f</term>
<term>--formatter</term>
<listitem>
<para>select a formatter</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-I</term>
<listitem>
<para>set an include search path</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-D</term>
<listitem>
<para>specify a macro for the parser</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-W</term>
<listitem>
<para>pass down additional arguments to a processor.
For example '-Wp,-I.' sends the '-I.' option to the
parser.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>--cfilter</term>
<listitem>
<para>Specify a comment filter (See <xref linkend="comment-filters"/>).</para>
</listitem>
</varlistentry>
<varlistentry>
<term>--translate</term>
<listitem>
<para>Translate comments to doc-strings, using the given markup specifier (See <xref linkend="comment-translators"/>).</para>
</listitem>
</varlistentry>
<varlistentry>
<term>--sxr-prefix</term>
<listitem>
<para>Specify the directory under which to store SXR info for the parsed source files.
This causes parsers to generate SXR info, the linker to generate an SXR symbol table,
and the HTML formatter to generate Source and XRef views.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>--probe</term>
<listitem>
<para>This is useful in conjunction with the <option>-p Cpp</option> option to probe
for system header search paths and system macro definitions. (See <xref linkend="compiler-emulation"/>).</para>
</listitem>
</varlistentry>
</variablelist>
</appendix>
<appendix id="processor-listing"><title>Listing of some Processors and their parameters</title>
<para>
This is a listing of all processors with their respective parameters
that can be set as described in <xref linkend="script" />.
</para>
<xi:include href="Synopsis.Parsers.Python.Parser.xml">
<xi:fallback>The Python parser reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Parsers.IDL.Parser.xml">
<xi:fallback>The IDL parser reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Parsers.Cpp.Parser.xml">
<xi:fallback>The Cpp parser reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Parsers.C.Parser.xml">
<xi:fallback>The C parser reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Parsers.Cxx.Parser.xml">
<xi:fallback>The Cxx parser reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Processors.Linker.xml">
<xi:fallback>The Linker reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Processors.MacroFilter.xml">
<xi:fallback>The MacroFilter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Processors.Comments.Filter.xml">
<xi:fallback>The Comments.Filter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Processors.Comments.Translator.xml">
<xi:fallback>The Comments.Translator reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Formatters.Dot.Formatter.xml">
<xi:fallback>The Dot formatter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Formatters.Dump.Formatter.xml">
<xi:fallback>The Dump formatter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Formatters.DocBook.Formatter.xml">
<xi:fallback>The DocBook formatter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Formatters.Texinfo.Formatter.xml">
<xi:fallback>The Texinfo formatter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Formatters.HTML.Formatter.xml">
<xi:fallback>The HTML formatter reference...</xi:fallback>
</xi:include>
<xi:include href="Synopsis.Formatters.SXR.Formatter.xml">
<xi:fallback>The SXR formatter reference...</xi:fallback>
</xi:include>
</appendix>
<appendix id="markup">
<title>Supported Documentation Markup</title>
<para>Synopsis can handle a variety of documentation markup through
markup-formatter plugins. The most frequently used markup types are
built into the framework, and are available via the
<command>synopsis</command> executable. These are <varname>Javadoc</varname>
(available as <option>--translate=javadoc</option>), and
<varname>ReStructuredText</varname> (available as either
<option>--translate=rst</option> or <option>--translate=reStructuredText</option>).</para>
<section id="javadoc">
<title>Javadoc</title>
<para>Synopsis provides support for Javadoc-style markup (See
<ulink url="http://java.sun.com/j2se/1.5.0/docs/tooldocs/solaris/javadoc.html"/>).
However, as Javadoc is very HTML-centric, best results will only be achieved when
HTML is the only output-medium.</para>
<para>Javadoc comments consist of a main description, followed by tag blocks. Tag blocks
are of the form <code>@tag</code>. The following block tags are recognized:</para>
<simplelist type="inline">
<member>author</member>
<member>date</member>
<member>deprecated</member>
<member>exception</member>
<member>invariant</member>
<member>keyword</member>
<member>param</member>
<member>postcondition</member>
<member>precondition</member>
<member>return</member>
<member>see</member>
<member>throws</member>
<member>version</member>
</simplelist>
<para>All blocks may contain any of the following inline tags, which are of the
form <code>{@inlinetag}</code>:</para>
<simplelist type="inline">
<member>link</member>
<member>code</member>
<member>literal</member>
</simplelist>
<para>Link targets may be text or HTML anchor elements. In the case of text, Synopsis interprets
it as a name and attempts to look it up in its symbol table.</para>
<para>All of the above tags are recognized and translated properly by both the <type>HTML</type>
and the <type>DocBook</type> formatters. Javadoc recommends using <code>HTML</code>
markup for additional document annotation; this is only supported by the <type>HTML</type>
formatter, however.</para>
<example><title>C++ code snippet using Javadoc-style comments.</title>
<programlisting>
/**
* The Bezier class. It implements a Bezier curve
* for the given order. See {@link Nurbs} for an alternative
* curved path class. Example usage of the Bezier class:
* &lt;pre&gt;
* Bezier&lt;2&gt; bezier;
* bezier.add_control_point(Vertex(0., 0.));
* bezier.add_control_point(Vertex(0., 1.));
* ...
* &lt;/pre&gt;
*
* @param Order The order of the Bezier class.
* @see &lt;a href="http://en.wikipedia.org/wiki/Bezier"/&gt;
*/
template &lt;size_t Order&gt;
class Bezier : public Path
{
...
</programlisting>
</example>
</section>
<section id="rest">
<title>ReStructured Text</title>
<para>Synopsis supports the full set of ReStructuredText markup (See
<ulink url="http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html"/>).
In order to process ReST docstrings, Docutils 0.4 or higher must be installed.
If Docutils is not installed, ReST docstrings will be rendered as plaintext.</para>
<para>ReST provides a rich set of markup that allows documentation strings to be
formatted in a wide variety of ways. Among the many features are different list styles,
tables, links, verbatim blocks, etc.</para>
<para><ulink url="http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#interpreted-text">Interpreted text</ulink>
is used to mark up program identifiers, such as the names of variables, functions, classes, and modules. Synopsis will
attempt to look them up in its symbol table, and generate suitable cross-references.</para>
<example><title>C++ code snippet using ReST-style comments.</title>
<programlisting>
//. The Nurbs class. It implements a nurbs curve
//. for the given order. It is a very powerful
//. and flexible curve representation. For simpler
//. cases you may prefer to use a `Paths::Bezier` curve.
//.
//. While non-rational curves are not sufficient to represent a circle,
//. this is one of many sets of NURBS control points for an almost uniformly
//. parameterized circle:
//.
//. +--+----+-------------+
//. |x | y | weight |
//. +==+====+=============+
//. |1 | 0 | 1 |
//. +--+----+-------------+
//. |1 | 1 | `sqrt(2)/2` |
//. +--+----+-------------+
//. |0 | 1 | 1 |
//. +--+----+-------------+
//. |-1| 1 | `sqrt(2)/2` |
//. +--+----+-------------+
//. |-1| 0 | 1 |
//. +--+----+-------------+
//. |-1| -1 | `sqrt(2)/2` |
//. +--+----+-------------+
//. |0 | -1 | 1 |
//. +--+----+-------------+
//. |1 | -1 | `sqrt(2)/2` |
//. +--+----+-------------+
//. |1 | 0 | 1 |
//. +--+----+-------------+
//.
//. The order is three, the knot vector is {0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 4}.
//. It should be noted that the circle is composed of four quarter circles,
//. tied together with double knots. Although double knots in a third order NURBS
//. curve would normally result in loss of continuity in the first derivative,
//. the control points are positioned in such a way that the first derivative is continuous.
//. (From Wikipedia_ )
//.
//. .. _Wikipedia: http://en.wikipedia.org/wiki/NURBS
//.
//. Example::
//.
//. Nurbs&lt;3&gt; circle;
//. circle.insert_control_point(0, Vertex(1., 0.), 1.);
//. circle.insert_control_point(0, Vertex(1., 1.), sqrt(2.)/2.);
//. ...
//.
</programlisting>
</example>
<para>To see how this is formatted please refer to the
<ulink url="http://synopsis.fresco.org/docs/examples/index.html#docbook">DocBook example</ulink>.</para>
</section>
</appendix>
</book>