## Data
### Reading Data Files
The first thing to consider is this: do you actually need to write a custom file reader? And if the answer is yes, the next question is: can you write the reader in as clear a way as possible? Correctness, Robustness, and Speed; pick the first two and the third can be sorted out later, _if necessary_.
A common sort of data file is the configuration file format commonly used on Unix systems. This format is often called a _property_ file in the Java world.
# Read timeout in seconds
read.timeout=10
# Write timeout in seconds
write.timeout=10
Here is a simple Lua implementation:
-- property file parsing with Lua string patterns
props = {}
for line in io.lines() do
if line:find('#',1,true) ~= 1 and not line:find('^%s*$') then
local var,value = line:match('([^=]+)=(.*)')
props[var] = value
end
end
Very compact, but it suffers from the same disease as equivalent Perl programs: it uses odd string patterns which are 'lexically noisy'. Noisy code like this slows the casual reader down. (For an even more direct way of doing this, see the next section, 'Reading Configuration Files'.)
Another implementation, using the Penlight libraries:
-- property file parsing with extended string functions
require 'pl'
stringx.import()
props = {}
for line in io.lines() do
if not line:startswith('#') and not line:isspace() then
local var,value = line:splitv('=')
props[var] = value
end
end
This is more self-documenting; it is generally better to make the code express the _intention_, rather than having to scatter comments everywhere - comments are necessary, of course, but mostly to give the higher view of your intention that cannot be expressed in code. It is slightly slower, true, but in practice the speed of this script is determined by I/O, so further optimization is unnecessary.
### Reading Unstructured Text Data
Text data is sometimes unstructured, for example a file containing words. The `pl.input` module has a number of functions which make processing such files easier. For example, a script to count the number of words in standard input using `input.words`:
-- countwords.lua
require 'pl'
local k = 0
for w in input.words(io.stdin) do
k = k + 1
end
print('count',k)
Or this script to calculate the average of a set of numbers using `input.numbers`:
-- average.lua
require 'pl'
local k = 0
local sum = 0
for n in input.numbers(io.stdin) do
sum = sum + n
k = k + 1
end
print('average',sum/k)
These scripts can be improved further by _eliminating explicit loops_. In the last case, there is a perfectly good function `seq.sum` which takes a sequence of numbers and returns both their sum and their count:
-- average2.lua
require 'pl'
local total,n = seq.sum(input.numbers())
print('average',total/n)
A further simplification here is that if `numbers` or `words` are not passed an argument, they will grab their input from standard input. The first script can be rewritten:
-- countwords2.lua
require 'pl'
print('count',seq.count(input.words()))
A useful feature of a sequence generator like `numbers` is that it can read from a string source. Here is a script to calculate the sums of the numbers on each line in a file:
-- sums.lua
for line in io.lines() do
print(seq.sum(input.numbers(line)))
end
### Reading Columnar Data
It is very common to find data in columnar form, either space or comma-separated, perhaps with an initial set of column headers. Here is a typical example:
EventID Magnitude LocationX LocationY LocationZ
981124001 2.0 18988.4 10047.1 4149.7
981125001 0.8 19104.0 9970.4 5088.7
981127003 0.5 19012.5 9946.9 3831.2
...
`input.fields` is designed to extract several columns, given some delimiter (defaulting to whitespace). Here is a script to calculate the average X location of all the events:
-- avg-x.lua
require 'pl'
io.read() -- skip the header line
local sum,count = seq.sum(input.fields {3})
print(sum/count)
`input.fields` is passed either a field count, or a list of column indices, starting at one as usual. So in this case we're only interested in column 3. If you pass it a field count, then you get every field up to that count:
for id,mag,locX,locY,locZ in input.fields (5) do
....
end
`input.fields` by default tries to convert each field to a number. It will skip lines which clearly don't match the pattern, but will abort the script if there are any fields which cannot be converted to numbers.
The second parameter is a delimiter, by default spaces. ' ' is understood to mean 'any number of spaces', i.e. '%s+'. Any Lua string pattern can be used.
The third parameter is a _data source_, by default standard input (defined by `input.create_getter`.) It assumes that the data source has a `read` method which brings in the next line, i.e. it is a 'file-like' object. As a special case, a string will be split into its lines:
> for x,y in input.fields(2,' ','10 20\n30 40\n') do print(x,y) end
10 20
30 40
Note the default behaviour for bad fields, which is to show the offending line number:
> for x,y in input.fields(2,' ','10 20\n30 40x\n') do print(x,y) end
10 20
line 2: cannot convert '40x' to number
This behaviour of `input.fields` is appropriate for a script which you want to fail immediately with an appropriate _user_ error message if conversion fails. The fourth optional parameter is an options table: `{no_fail=true}` means that conversion is attempted but if it fails it just returns the string, rather as AWK would operate. You are then responsible for checking the type of the returned field. `{no_convert=true}` switches off conversion altogether and all fields are returned as strings.
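For instance, here is a small sketch of `no_fail` at work, reusing the string-source form shown above; with this option the '40x' field comes back as a string instead of aborting the script (the exact printed output is an assumption):
-- sketch: conversion failures are returned as strings (AWK-style)
for x,y in input.fields(2,' ','10 20\n30 40x\n',{no_fail=true}) do
    print(type(x),x,type(y),y)
end
-- expecting: number 10  number 20, then number 30  string 40x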
@lookup pl.data
Sometimes it is useful to bring a whole dataset into memory, for operations such as extracting columns. Penlight provides a flexible reader specifically for reading this kind of data, using the `data` module. Given a file looking like this:
x,y
10,20
2,5
40,50
Then `data.read` will create a table like this, with each row represented by a sublist:
> t = data.read 'test.txt'
> pretty.dump(t)
{{10,20},{2,5},{40,50},fieldnames={'x','y'},delim=','}
You can now analyze this returned table using the supplied methods. For instance, the method `column_by_name` returns a table of all the values of that column.
-- testdata.lua
require 'pl'
d = data.read('fev.txt')
for _,name in ipairs(d.fieldnames) do
local col = d:column_by_name(name)
if type(col[1]) == 'number' then
local total,n = seq.sum(col)
utils.printf("Average for %s is %f\n",name,total/n)
end
end
`data.read` tries to be clever when given data; by default it expects a first line of column names, unless any of them are numbers. It tries to deduce the column delimiter by looking at the first line. Sometimes it guesses wrong; these things can be specified explicitly. The second optional parameter is an options table, which can override `delim` (a string pattern), `fieldnames` (a list or comma-separated string), specify `no_convert` (the default is to convert), `numfields` (indices of columns known to be numbers, as a list) and `thousands_dot` (for when the thousands separator in Excel CSV is '.').
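As a hedged sketch (the file name and column layout here are invented), overriding those guesses explicitly looks like this:
-- sketch: spelling out what data.read would otherwise have to guess
local d = data.read('events.dat', {
    delim = '%s+',                  -- any run of whitespace
    fieldnames = 'id,mag,x,y,z',    -- supply names; the file has no header line
    numfields = {1,2,3,4,5},        -- all five columns are numeric
})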
A very powerful feature is a way to execute SQL-like queries on such data:
-- queries on tabular data
require 'pl'
local d = data.read('xyz.txt')
local q = d:select('x,y,z where x > 3 and z < 2 sort by y')
for x,y,z in q do
print(x,y,z)
end
Please note that the format of queries is restricted to the following syntax:
FIELDLIST [ 'where' CONDITION ] [ 'sort by' FIELD [asc|desc]]
Any valid Lua code can appear in `CONDITION`; remember it is _not_ SQL and you have to use `==` (this warning comes from experience.)
For this to work, _field names must be Lua identifiers_. So `read` will massage fieldnames so that all non-alphanumeric chars are replaced with underscores.
`read` can handle standard CSV files fine, although it doesn't try to be a full-blown CSV parser. Spreadsheet programs are not always the best tool for processing such data, strange as this might seem to some people. This is a toy CSV file; to appreciate the problem, imagine thousands of rows and dozens of columns like this:
Department Name,Employee ID,Project,Hours Booked
sales,1231,overhead,4
sales,1255,overhead,3
engineering,1501,development,5
engineering,1501,maintenance,3
engineering,1433,maintenance,10
The task is to reduce the dataset to a relevant set of rows and columns, perhaps do some processing on row data, and write the result out to a new CSV file. The `write_row` method uses the delimiter to write the row to a file; `Data.select_row` is like `Data.select`, except that it iterates over _rows_, not fields, which is necessary if we are dealing with a lot of columns! (In the snippet below, `t` is assumed to be the `Data` object returned by `data.read`, and `outf` an already-opened output file.)
names = {[1501]='don',[1433]='dilbert'}
keepcols = {'Employee_ID','Hours_Booked'}
t:write_row (outf,{'Employee','Hours_Booked'})
q = t:select_row {
fields=keepcols,
where=function(row) return row[1]=='engineering' end
}
for row in q do
row[1] = names[row[1]]
t:write_row(outf,row)
end
`Data.select_row` and `Data.select` can be passed a table specifying the query: a list of field names, a function defining the condition and an optional parameter `sort_by`. It isn't really necessary here, but if we had a more complicated row condition (such as belonging to a specified set) then it is generally not possible to express such a condition as a query string, without resorting to hackery such as global variables.
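For instance, here is a sketch of such a set-membership condition (the set of departments is made up):
-- sketch: a row condition that would be awkward as a query string
local wanted = {engineering=true, research=true}
q = t:select_row {
    fields = keepcols,
    where = function(row) return wanted[row[1]] end
}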
Data does not have to come from files, nor does it necessarily come from the lab or the accounts department. On Linux, `ps aux` gives you a full listing of all processes running on your machine. It is straightforward to feed the output of this command into `data.read` and perform useful queries on it. Notice that non-identifier characters like '%' get converted into underscores:
require 'pl'
f = io.popen 'ps aux'
s = data.read (f,{last_field_collect=true})
f:close()
print(s.fieldnames)
print(s:column_by_name 'USER')
qs = 'COMMAND,_MEM where _MEM > 5 and USER=="steve"'
for name,mem in s:select(qs) do
print(mem,name)
end
I've always been an admirer of the AWK programming language; with `data.filter` you can get Lua programs which are just as compact:
-- printxy.lua
require 'pl'
data.filter 'x,y where x > 3'
It is common enough to have data files without a header line of field names. `data.read` makes a special exception for such files if all fields are numeric. Since there are no column names to use in query expressions, you can use AWK-like column indexes, e.g. '$1,$2 where $1 > 3'. I have a little executable script on my system called `lf` which looks like this:
#!/usr/bin/env lua
require 'pl.data'.filter(arg[1])
And it can be used generally as a filter command to extract columns from data. (The column specifications may be expressions or even constants.)
$ lf '$1,$5/10' < test.dat
(As with AWK, please note the single quotes used in this command; they prevent the shell from trying to expand the column indexes. If you are on Windows, `$` expansion is not a problem, but it is still necessary to quote the expression in double quotes so it is passed as one argument to your batch file.)
As a tutorial resource, have a look at `test-data.lua` in the PL tests directory for other examples of use, plus comments.
The data returned by `read` or constructed by `Data.copy_select` from a query is basically just an array of rows: `{{1,2},{3,4}}`. So you may use `read` to pull in any array-like dataset, and process it with any function that expects such a structure. In particular, the functions in `array2d` will work fine with this data. In fact, these functions are available as methods; e.g. `array2d.flatten` can be called directly to give us a one-dimensional list:
v = data.read('dat.txt'):flatten()
The data is also in exactly the right shape to be treated as matrices by [LuaMatrix](http://lua-users.org/wiki/LuaMatrix):
> matrix = require 'matrix'
> m = matrix(data.read 'mat.txt')
> = m
1 0.2 0.3
0.2 1 0.1
0.1 0.2 1
> = m^2 -- same as m*m
1.07 0.46 0.62
0.41 1.06 0.26
0.24 0.42 1.05
`write` will write matrices back to files for you.
Finally, for the curious, the global variable `_DEBUG` can be used to print out the actual iterator function which a query generates and dynamically compiles. By using code generation, we can get pretty much optimal performance out of arbitrary queries.
> lua -lpl -e "_DEBUG=true" -e "data.filter 'x,y where x > 4 sort by x'" < test.txt
return function (t)
local i = 0
local v
local ls = {}
for i,v in ipairs(t) do
if v[1] > 4 then
ls[#ls+1] = v
end
end
table.sort(ls,function(v1,v2)
return v1[1] < v2[1]
end)
local n = #ls
return function()
i = i + 1
v = ls[i]
if i > n then return end
return v[1],v[2]
end
end
10,20
40,50
### Reading Configuration Files
The `config` module provides a simple way to convert several kinds of configuration files into a Lua table. Consider the simple example:
# test.config
# Read timeout in seconds
read.timeout=10
# Write timeout in seconds
write.timeout=5
#acceptable ports
ports = 1002,1003,1004
This can be easily brought in using `config.read` and the result shown using `pretty.write`:
-- readconfig.lua
local config = require 'pl.config'
local pretty= require 'pl.pretty'
local t = config.read(arg[1])
print(pretty.write(t))
and the output of `lua readconfig.lua test.config` is:
{
ports = {
1002,
1003,
1004
},
write_timeout = 5,
read_timeout = 10
}
That is, `config.read` will bring in all key/value pairs, ignore # comments, and ensure that the key names are proper Lua identifiers by replacing non-identifier characters with '_'. If the values are numbers, then they will be converted. (So the value of `t.write_timeout` is the number 5). In addition, any values which are separated by commas will be converted likewise into an array.
Any line can be continued with a backslash. So this will all be considered one line:
names=one,two,three, \
four,five,six,seven, \
eight,nine,ten
Windows-style INI files are also supported. The section structure of INI files translates naturally to nested tables in Lua:
; test.ini
[timeouts]
read=10 ; Read timeout in seconds
write=5 ; Write timeout in seconds
[portinfo]
ports = 1002,1003,1004
The output is:
{
portinfo = {
ports = {
1002,
1003,
1004
}
},
timeouts = {
write = 5,
read = 10
}
}
You can now refer to the write timeout as `t.timeouts.write`.
As a final example of the flexibility of `config.read`, if passed this simple comma-delimited file
one,two,three
10,20,30
40,50,60
1,2,3
it will produce the following table:
{
{ "one", "two", "three" },
{ 10, 20, 30 },
{ 40, 50, 60 },
{ 1, 2, 3 }
}
`config.read` isn't designed to read arbitrary CSV files; rather, it is intended to support Unix configuration files which are not structured as key-value pairs, such as '/etc/passwd'.
This function is intended to be a Swiss Army Knife of configuration readers, but it does have to make assumptions, and you may not like them. So there is an optional extra parameter which allows some control: a table that may have the following fields:
{
variablilize = true,
convert_numbers = true,
trim_space = true,
list_delim = ','
}
`variablilize` is the option that converted `write.timeout` in the first example into the valid Lua identifier `write_timeout`. If `convert_numbers` is true, then an attempt is made to convert any string that starts like a number. `trim_space` ensures that there is no leading or trailing whitespace in values, and `list_delim` is the character that will be used to decide whether to split a value up into a list (it may be a Lua string pattern such as '%s+').
For instance, the password file in Unix is colon-delimited:
t = config.read('/etc/passwd',{list_delim=':'})
This produces the following output on my system (only last two lines shown):
{
...
{
"user",
"x",
"1000",
"1000",
"user,,,",
"/home/user",
"/bin/bash"
},
{
"sdonovan",
"x",
"1001",
"1001",
"steve donovan,28,,",
"/home/sdonovan",
"/bin/bash"
}
}
You can get this into a more sensible format, where the usernames are the keys, with:
t = tablex.pairmap(function(k,v) return v,v[1] end,t)
and you get:
{ ...
sdonovan = {
"sdonovan",
"x",
"1001",
"1001",
"steve donovan,28,,",
"/home/sdonovan",
"/bin/bash"
}
...
}
<a id="lexer"/>
### Lexical Scanning
Although Lua's string pattern matching is very powerful, there are times when something more powerful is needed. `pl.lexer.scan` provides lexical scanners which _tokenize_ a string, classifying tokens into numbers, strings, etc.
> lua -lpl
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
> tok = lexer.scan 'alpha = sin(1.5)'
> = tok()
iden alpha
> = tok()
= =
> = tok()
iden sin
> = tok()
( (
> = tok()
number 1.5
> = tok()
) )
> = tok()
(nil)
The scanner is a function which is repeatedly called and returns the _type_ and _value_ of the next token. Recognized basic types are 'iden', 'string', 'number' and 'space', and everything else is represented by itself. Note that by default the scanner will skip any 'space' tokens.
'comment' and 'keyword' aren't applicable to the plain scanner, which is not language-specific, but a scanner which understands Lua is available. It recognizes the Lua keywords, and understands both short and long comments and strings.
> for t,v in lexer.lua 'for i=1,n do' do print(t,v) end
keyword for
iden i
= =
number 1
, ,
iden n
keyword do
A lexical scanner is useful where you have highly-structured data which is not nicely delimited by newlines. For example, here is a snippet of an in-house file format which it was my task to maintain:
points (818344.1,-20389.7,-0.1),(818337.9,-20389.3,-0.1),(818332.5,-20387.8,-0.1)
,(818327.4,-20388,-0.1),(818322,-20387.7,-0.1),(818316.3,-20388.6,-0.1)
,(818309.7,-20389.4,-0.1),(818303.5,-20390.6,-0.1),(818295.8,-20388.3,-0.1)
,(818290.5,-20386.9,-0.1),(818285.2,-20386.1,-0.1),(818279.3,-20383.6,-0.1)
,(818274,-20381.2,-0.1),(818274,-20380.7,-0.1);
Here is code to extract the points using `pl.lexer`:
-- assume 's' contains the text above...
local lexer = require 'pl.lexer'
local expecting = lexer.expecting
local append = table.insert
local tok = lexer.scan(s)
local points = {}
local t,v = tok() -- should be 'iden','points'
while t ~= ';' do
local c = {}
expecting(tok,'(')
c.x = expecting(tok,'number')
expecting(tok,',')
c.y = expecting(tok,'number')
expecting(tok,',')
c.z = expecting(tok,'number')
expecting(tok,')')
t,v = tok() -- either ',' or ';'
append(points,c)
end
The `expecting` function grabs the next token and if the type doesn't match, it throws an error. (`pl.lexer`, unlike other PL libraries, raises errors if something goes wrong, so you should wrap your code in `pcall` to catch the error gracefully.)
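For instance, here is a minimal sketch of wrapping such a parse in `pcall` (the malformed input is invented); it reuses the `lexer` and `expecting` locals declared above:
-- sketch: trap the error raised by expecting() on bad input
local ok, err = pcall(function()
    local tok = lexer.scan '(42,'
    expecting(tok,'(')
    expecting(tok,'number')
    expecting(tok,')')  -- raises: the next token is ',' not ')'
end)
if not ok then print('bad input: '..tostring(err)) end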
The scanners all have a second optional argument, which is a table that controls whether you want to exclude spaces and/or comments. The default for `lexer.lua` is `{space=true,comments=true}`. There is a third optional argument which determines how string and number tokens are to be processed.
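For example, assuming that an empty filter table means nothing is excluded, a sketch which reports space and comment tokens as well:
-- sketch: pass an empty filter table so space and comment tokens come through
for t,v in lexer.lua('x = 1 -- set x', {}) do
    print(t,v)
end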
The ultimate highly-structured data is of course, program source. Here is a snippet from 'text-lexer.lua':
require 'pl'
lines = [[
for k,v in pairs(t) do
if type(k) == 'number' then
print(v) -- array-like case
else
print(k,v)
end
end
]]
ls = List()
for tp,val in lexer.lua(lines,{space=true,comments=true}) do
assert(tp ~= 'space' and tp ~= 'comment')
if tp == 'keyword' then ls:append(val) end
end
test.asserteq(ls,List{'for','in','do','if','then','else','end','end'})
Here is a useful little utility that identifies all common global variables found in a lua module:
-- testglobal.lua
require 'pl'
local txt,err = utils.readfile(arg[1])
if not txt then return print(err) end
local globals = List()
for t,v in lexer.lua(txt) do
if t == 'iden' and _G[v] then
globals:append(v)
end
end
pretty.dump(seq.count_map(globals))
Rather than dumping the whole list, with its duplicates, we pass it through `seq.count_map`, which turns the list into a table where the keys are the values and the associated values are the number of times those values occur in the sequence. Typical output looks like this:
{
type = 2,
pairs = 2,
table = 2,
print = 3,
tostring = 2,
require = 1,
ipairs = 4
}
You could further pass this through `tablex.keys` to get a unique list of symbols. This can be useful when writing 'strict' Lua modules, where all global symbols must be defined as locals at the top of the file.
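For example, a one-line sketch continuing the script above:
-- sketch: just the unique global names, discarding the counts
pretty.dump(tablex.keys(seq.count_map(globals)))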
For a more detailed use of `lexer.scan`, please look at `testxml.lua` in the examples directory.
### XML
New in the 0.9.7 release is some support for XML. This is a large topic, and Penlight does not provide a full XML stack, which is properly the task of a more specialized library.
#### Parsing and Pretty-Printing
The semi-standard XML parser in the Lua universe is `lua-expat`. In particular, it has a function called `lxp.lom.parse` which will parse XML into the Lua Object Model (LOM) format. However, it does not provide a way to convert this data back into XML text. `xml.parse` will use this function _if_ `lua-expat` is available, and otherwise falls back to a pure Lua parser originally written by Roberto Ierusalimschy.
The resulting document object knows how to render itself as a string, which is useful for debugging:
> d = xml.parse "<nodes><node id='1'>alice</node></nodes>"
> = d
<nodes><node id='1'>alice</node></nodes>
> pretty.dump (d)
{
{
"alice",
attr = {
"id",
id = "1"
},
tag = "node"
},
attr = {
},
tag = "nodes"
}
Looking at the actual shape of the data reveals the structure of LOM:
* every element has a `tag` field with its name
* plus an `attr` field, which is a table containing the attributes as fields, and also as an array. It is always present.
* the children of the element are the array part of the element, so `d[1]` is the first child of `d`, etc.
It could be argued that having attributes also as the array part of `attr` is not essential (you generally cannot depend on attribute order in XML) but that's how it goes with this standard.
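To make the shape concrete, here is a small sketch of navigating the document `d` parsed above (the expected values follow from the dump):
-- walking the LOM table: d is the parsed 'nodes' document from earlier
print(d.tag)          --> nodes
print(d[1].tag)       --> node
print(d[1].attr.id)   --> 1
print(d[1][1])        --> alice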
`lua-expat` is another _soft dependency_ of Penlight; generally, the fallback parser is good enough for straightforward XML as is commonly found in configuration files, etc. `doc.basic_parse` is not intended to be a proper conforming parser (it's only sixty lines) but it handles simple kinds of documents that do not have comments or DTD directives. It is intelligent enough to ignore the `<?xml` directive and that is about it.
You can get pretty-printing by explicitly calling `xml.tostring` and passing it the initial indent and the per-element indent:
> = xml.tostring(d,'',' ')
<nodes>
<node id='1'>alice</node>
</nodes>
There is a fourth argument which is the _attribute indent_:
> a = xml.parse "<frodo name='baggins' age='50' type='hobbit'/>"
> = xml.tostring(a,'',' ',' ')
<frodo
type='hobbit'
name='baggins'
age='50'
/>
#### Parsing and Working with Configuration Files
It's common to find configurations expressed with XML these days. It's straightforward to 'walk' the LOM data and extract the data in the form you want:
require 'pl'
local config = [[
<config>
<alpha>1.3</alpha>
<beta>10</beta>
<name>bozo</name>
</config>
]]
local d,err = xml.parse(config)
local t = {}
for item in d:childtags() do
t[item.tag] = item[1]
end
pretty.dump(t)
--->
{
beta = "10",
alpha = "1.3",
name = "bozo"
}
The only gotcha is that here we must use the `Doc:childtags` method, which will skip over any text elements.
A more involved example is this excerpt from `serviceproviders.xml`, which is usually found at `/usr/share/mobile-broadband-provider-info/serviceproviders.xml` on Debian/Ubuntu Linux systems.
d = xml.parse [[
<serviceproviders format="2.0">
<country code="za">
<provider>
<name>Cell-c</name>
<gsm>
<network-id mcc="655" mnc="07"/>
<apn value="internet">
<username>Cellcis</username>
<dns>196.7.0.138</dns>
<dns>196.7.142.132</dns>
</apn>
</gsm>
</provider>
<provider>
<name>MTN</name>
<gsm>
<network-id mcc="655" mnc="10"/>
<apn value="internet">
<dns>196.11.240.241</dns>
<dns>209.212.97.1</dns>
</apn>
</gsm>
</provider>
<provider>
<name>Vodacom</name>
<gsm>
<network-id mcc="655" mnc="01"/>
<apn value="internet">
<dns>196.207.40.165</dns>
<dns>196.43.46.190</dns>
</apn>
<apn value="unrestricted">
<name>Unrestricted</name>
<dns>196.207.32.69</dns>
<dns>196.43.45.190</dns>
</apn>
</gsm>
</provider>
<provider>
<name>Virgin Mobile</name>
<gsm>
<apn value="vdata">
<dns>196.7.0.138</dns>
<dns>196.7.142.132</dns>
</apn>
</gsm>
</provider>
</country>
</serviceproviders>
]]
Getting the names of the providers per-country is straightforward:
local t = {}
for country in d:childtags() do
local providers = {}
t[country.tag] = providers
for provider in country:childtags() do
table.insert(providers,provider:child_with_name('name'):get_text())
end
end
pretty.dump(t)
-->
{
country = {
"Cell-c",
"MTN",
"Vodacom",
"Virgin Mobile"
}
}
#### Generating XML with 'xmlification'
This feature is inspired by the `htmlify` function used by [Orbit](http://keplerproject.github.com/orbit/) to simplify HTML generation, except that no function environment magic is used; the `tags` function returns a set of _constructors_ for elements of the given tag names.
> nodes, node = xml.tags 'nodes, node'
> = node 'alice'
<node>alice</node>
> = nodes { node {id='1','alice'}}
<nodes><node id='1'>alice</node></nodes>
The flexibility of Lua tables is very useful here, since both the attributes and the children of an element can be encoded naturally. The argument to these tag constructors is either a single value (like a string) or a table where the attributes are the named keys and the children are the array values.
#### Generating XML using Templates
A template is a little XML document which contains dollar-variables. The `subst` method on a document is fed an array of tables containing values for these variables. Note how the parent tag name is specified:
> templ = xml.parse "<node id='$id'>$name</node>"
> = templ:subst {tag='nodes', {id=1,name='alice'},{id=2,name='john'}}
<nodes><node id='1'>alice</node><node id='2'>john</node></nodes>
#### Extracting Data using Templates
Matching goes in the opposite direction. We have a document, and would like to extract values from it using a pattern.
A common use of this is parsing the XML result of API queries. The [(undocumented) Google Weather API](http://blog.programmableweb.com/2010/02/08/googles-secret-weather-api/) is a good example. Grabbing the result of `http://www.google.com/ig/api?weather=Johannesburg,ZA` we get something like this, after pretty-printing:
<xml_api_reply version='1'>
<weather module_id='0' tab_id='0' mobile_zipped='1' section='0' row='0' mobile_row='0'>
<forecast_information>
<city data='Johannesburg, Gauteng'/>
<postal_code data='Johannesburg,ZA'/>
<latitude_e6 data=''/>
<longitude_e6 data=''/>
<forecast_date data='2010-10-02'/>
<current_date_time data='2010-10-02 18:30:00 +0000'/>
<unit_system data='US'/>
</forecast_information>
<current_conditions>
<condition data='Clear'/>
<temp_f data='75'/>
<temp_c data='24'/>
<humidity data='Humidity: 19%'/>
<icon data='/ig/images/weather/sunny.gif'/>
<wind_condition data='Wind: NW at 7 mph'/>
</current_conditions>
<forecast_conditions>
<day_of_week data='Sat'/>
<low data='60'/>
<high data='89'/>
<icon data='/ig/images/weather/sunny.gif'/>
<condition data='Clear'/>
</forecast_conditions>
....
</weather>
</xml_api_reply>
Assume that the above XML has been read into `google`. The idea is to write a pattern looking like a template, and use it to extract some values of interest:
t = [[
<weather>
<current_conditions>
<condition data='$condition'/>
<temp_c data='$temp'/>
</current_conditions>
</weather>
]]
local res, ret = google:match(t)
pretty.dump(res)
And the output is:
{
condition = "Clear",
temp = "24"
}
The `match` method can be passed a LOM document or some text, which will be parsed first. Note that `$NUMBER` is treated specially as a numerical index, so that `$1` is the first element of the resulting array, etc.
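As a hedged sketch of those numerical indices (the document here is made up):
-- sketch: $1 and $2 become positions in the returned array
local d = xml.parse "<node id='10'>alice</node>"
local res = d:match "<node id='$1'>$2</node>"
-- expecting res[1] == '10' and res[2] == 'alice'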