
# fplll #
[![Build Status](https://travis-ci.org/fplll/fplll.svg?branch=master)](https://travis-ci.org/fplll/fplll) [![codecov](https://codecov.io/gh/fplll/fplll/branch/master/graph/badge.svg)](https://codecov.io/gh/fplll/fplll)
fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL [[LLL82](#LLL82)] is central to the code, hence the name.

It includes implementations of floating-point LLL reduction algorithms [[NS09](#NS09),[MSV09](#MSV09)], offering different speed/guarantees ratios. It contains a 'wrapper' choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible [[S10](#S10)]. In the case of the wrapper, the succession of variants is oblivious to the user.

It includes an implementation of the BKZ reduction algorithm [[SE94](#SE94)], including the BKZ 2.0 [[CN11](#CN11)] improvements (extreme enumeration pruning, preprocessing of blocks, early termination). Additionally, Slide reduction [[GN08](#GN08)] and self-dual BKZ [[MW16](#MW16)] are supported.

It also includes a floating-point implementation of the Kannan-Fincke-Pohst algorithm [[K83](#K83),[FP85](#FP85)] that finds a shortest non-zero lattice vector. For the same task, the GaussSieve algorithm [[MV10](#MV10)] is also available in fplll. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.
fplll is distributed under the [GNU Lesser General Public License](COPYING) (either version 2.1 of the License, or, at your option, any later version) as published by the Free Software Foundation.
## How to cite ##
```
@unpublished{fplll,
    author = {The {FPLLL} development team},
    title = {{fplll}, a lattice reduction library},
    year = 2016,
    note = {Available at \url{https://github.com/fplll/fplll}},
    url = {https://github.com/fplll/fplll}
}
```
# Table of contents #
* [fplll](#fplll)
* [How to cite](#how-to-cite)
* [Table of contents](#table-of-contents)
* [Compilation](#compilation)
  * [Dependencies](#dependencies)
    * [Required](#required), [Optional](#optional).
  * [Installation](#installation)
    * [Linux](#linux)
    * [Windows 10](#windows-10)
  * [Optimization](#optimization)
  * [Check](#check)
* [How to use](#how-to-use)
  * Programs [latticegen](#latticegen), [fplll](#fplll-1), [llldiff](#llldiff), [latsieve](#latsieve).
  * [How to use as a library](#how-to-use-as-a-library)
  * [Multicore support](#multicore-support)
* [Examples](#examples)
* [Alternative interfaces](#alternative-interfaces)
* [Credit](#credit)
  * [Maintainers](#maintainers), [Contributors](#contributors), [Acknowledgments](#acknowledgments).
* [Contributing](#contributing)
* [New releases and bug reports](#new-releases-and-bug-reports)
* [Bibliography](#bibliography)
# Compilation #
## Dependencies ##
### Required ###
- GNU MP 4.2.0 or higher [http://gmplib.org/](http://gmplib.org/) or MPIR 1.0.0 or higher [http://mpir.org](http://mpir.org)
- MPFR 2.3.0 or higher, COMPLETE INSTALLATION [http://www.mpfr.org/](http://www.mpfr.org/)
- autotools 2.61 or higher
- g++ 4.9.3 or higher
### Optional ###
- QD 2.3.15 or higher (a C++/Fortran-90 double-double and quad-double package); compile and install the shared library (e.g. `./configure --enable-shared=yes`). [http://crd-legacy.lbl.gov/~dhbailey/mpdist/](http://crd-legacy.lbl.gov/~dhbailey/mpdist/)
## Installation ##
### Linux ###
You should download the source code from github and then run

```
./autogen.sh
```

which generates the `./configure` script used to configure fplll by calling the appropriate autotools command.
Then, to compile and install, type

```
./configure
make
make install # (as root)
```
If GMP, MPFR and/or MPIR are not in the `$LD_LIBRARY_PATH`, you have to point to the directories where the libraries are, with

```
./configure --with-gmp=path/to/gmp
```

or

```
./configure --with-mpfr=path/to/mpfr
```
The same philosophy applies to the (optional) QD library. If you want to use MPIR instead of GMP, use `--enable-mpir` and `--with-mpir=path/to/mpir`.
You can remove the program binaries and object files from the source code directory by typing `make clean`. To also remove the files that `./configure` created (so you can compile the package for a different kind of computer), type `make distclean`. By default, `make install` installs the package commands under `/usr/local/bin`, include files under `/usr/local/include`, etc. You can specify an installation directory name other than `/usr/local` by giving `./configure` the option `--prefix=dirname`. Run `./configure --help` for further details.
### Windows 10 ###
Windows 10 has a "Windows Subsystem for Linux", which essentially allows you to use Linux features in Windows without the need for a dual-boot system or a virtual machine. To activate it, first go to **Settings** > **Update and security** > **For developers** and enable developer mode. (This may take a while.) Afterwards, open a command prompt and run

```
lxrun /install
```
This will install the WSL, which can afterwards be accessed e.g. by opening a command prompt and typing `bash`. With this Linux-like subsystem, installing fplll is then similar to the above, except that the package repository is most likely not up to date, and various additional packages need to be installed first. To make sure you only install the most recent software, run:

```
sudo apt-get update
```

Then run `sudo apt-get install <packages>` for the (indirectly) required packages, such as `make`, `autoconf`, `libtool`, `gcc`, `g++`, `libgmp-dev`, and `libmpfr-dev`. Finally, download the fplll source code, extract the contents, navigate to this folder in Bash (commonly found under `/mnt/c/<local path>` when stored somewhere on the `C:\` drive), and run:
```
./autogen.sh
./configure
make
```
The same comments as before apply for using e.g. `make install` or `make distclean` instead of `make`.
## Check ##
Type

```
make check
```
## Optimization ##
The default compilation flag is `-O3`. One may add `-march=native` (i.e. `-march=native -O3`) to optimize the binaries for the host CPU. See [this issue](https://github.com/fplll/fplll/issues/169) for its impact on the enumeration speed.
# How to use #
Executable files `fplll` and `latticegen` are installed in the directory
`/usr/local/bin`. (Note that the programs generated by `make` in the `fplll/` directory are only wrappers to the programs in `fplll/.libs/`).
If you type `make check`, it will also generate the executable file `llldiff`,
in `fplll/.libs/`.
## latticegen ##
`latticegen` is a utility for generating matrices (rows form input
lattice basis vectors).
The options are:
* `r` `d` `b` : generates a knapsack-like matrix of dimension d x (d+1) and b bits (see, e.g., [[S09](#S09)]): the i-th vector starts with a random integer of bit-length <= b and the rest is the i-th canonical unit vector.
* `s` `d` `b` `b2` : generates a d x d matrix of a form similar to the one involved when trying to find rational approximations to reals with the same small denominator (see, e.g., [[LLL82](#LLL82)]): the first vector starts with a random integer of bit-length <= b2 and continues with d-1 independent integers of bit-length <= b; for i>1, the i-th vector is the i-th canonical unit vector scaled by a factor 2^b.
* `u` `d` `b` : generates a d x d matrix whose entries are independent integers of bit-length <= b.
* `n` `d` `b` `c` : generates an NTRU-like matrix. If the character c is 'b', it first samples an integer q of bit-length <= b, whereas if c is 'q', it sets q to the provided value. It then samples a uniform h in the ring Z_q[x]/(x^n-1). It finally returns the 2 x 2 block matrix [[I, Rot(h)], [0, q*I]], where each block is d x d, the first row of Rot(h) is the coefficient vector of h, and the i-th row of Rot(h) is the cyclic shift of the (i-1)-th (with the last entry moved back to the first position), for all i>1. Warning: this does not produce a genuine NTRU lattice with h a genuine public key (see [[HPS98](#HPS98)]).
* `N` `d` `b` `c` : as the previous option, except that the constructed matrix is [[q*I, 0], [Rot(h), I]].
* `q` `d` `k` `b` `c` : generates a q-ary matrix. If the character c is 'b', it first samples an integer q of bit-length <= b; if c is 'p', it does the same and updates q to the smallest (probabilistic) prime greater than it; if c is 'q', it sets q to the provided value. It returns a 2 x 2 block matrix [[I, H], [0, q*I]], where H is (d-k) x k and uniformly random modulo q. These bases correspond to the SIS/LWE q-ary lattices (see [[MR09](#MR09)]). Goldstein-Mayer lattices correspond to k=1 and q prime (see [[GM03](#GM03)]).
* `t` `d` `f` : generates a d x d lower-triangular matrix B with B_ii = 2^((d-i+1)^f) for all i, and B_ij uniform between -B_jj/2 and B_jj/2 for all j<i.
* `T` `d` : also takes as input a d-dimensional vector vec read from a file. It generates a d x d lower-triangular matrix B with B_ii = vec[i] for all i, and B_ij uniform between -B_jj/2 and B_jj/2 for all j<i.
The generated matrix is printed on stdout.

Note that by default, the random bits always use the same seed, to ensure reproducibility. The seed may be changed with the option `-randseed <integer>`, or set from the current time (in seconds) with `-randseed time`. If you use this option, it must be the first one on the command line.
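As an illustration, the knapsack-like bases produced by the `r` option can be sketched in a few lines of Python. This is an illustrative re-implementation, not fplll's actual generator: the exact sampling and seeding used by `latticegen` are assumptions here.

```python
import random

def knapsack_like(d, b, seed=0):
    """Sketch of the `r` option: a d x (d+1) matrix whose i-th row starts
    with a random integer of bit-length <= b, followed by the i-th
    canonical unit vector. (Sampling details are assumptions.)"""
    rng = random.Random(seed)  # fixed seed mirrors latticegen's reproducible default
    return [[rng.randrange(1 << b)] + [1 if j == i else 0 for j in range(d)]
            for i in range(d)]

basis = knapsack_like(3, 8)  # a 3 x 4 basis with 8-bit first entries
```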
## fplll ##
`fplll` does LLL, BKZ, HKZ or SVP on a matrix (considered as a set of row vectors) given on stdin or in a file passed as a parameter.
The options are:
* `-a lll` : LLL-reduction (default).
* `-a bkz` : BKZ-reduction.
* `-a hkz` : HKZ-reduction.
* `-a svp` : prints a shortest non-zero vector of the lattice.
* `-a sdb` : self-dual variant of BKZ-reduction.
* `-a sld` : slide reduction.
* `-a cvp` : prints the vector in the lattice closest to the input vector.
* `-v` : verbose mode.
* `-nolll` : does not apply LLL-reduction. For bkz, hkz and svp, the input basis is LLL-reduced first by default; this option removes that initial LLL-reduction (note that other calls to LLL-reduction may occur during the execution).
Options for LLL-reduction:
* `-d delta` : δ (default=0.99).
* `-e eta` : η (default=0.51). See [[NS09](#NS09)] for the definition of (δ,η)-LLL-reduced bases.
* `-l lovasz` : if !=0, uses Lovasz's condition; otherwise, Siegel's condition (default: Lovasz). See [[A02](#A02)] for the definition of Siegel's condition.
* `-f mpfr` : sets the floating-point type to MPFR (default if `-m=proved`).
* `-p precision` : precision of the floating-point arithmetic; works only with `-f mpfr`.
* `-f dd` : sets the floating-point type to double-double.
* `-f qd` : sets the floating-point type to quad-double.
* `-f dpe` : sets the floating-point type to DPE (default if `-m=heuristic`).
* `-f double` : sets the floating-point type to double (default if `-m=fast`).
* `-f longdouble` : sets the floating-point type to long double.
* `-z mpz` : sets the integer type to mpz, the integer type of GMP (default).
* `-z int` : sets the integer type to int.
* `-z long` : as `-z int`.
* `-z double` : sets the integer type to double.
* `-m wrapper` : uses the wrapper (default if `-z mpz`).
* `-m fast` : uses the fast method; works only with `-f double`.
* `-m heuristic` : uses the heuristic method.
* `-m proved` : uses the proved version of the algorithm.
* `-y` : early reduction.
With the wrapper or the proved version, it is guaranteed that the basis is LLL-reduced with δ'=2×δ-1 and η'=2×η-1/2. For instance, with the default options, it is guaranteed that the basis is (0.98,0.52)-LLL-reduced.
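The guaranteed parameters follow from the formulas above by direct arithmetic; with the defaults:

```python
delta, eta = 0.99, 0.51      # fplll's default LLL parameters
delta_out = 2 * delta - 1    # guaranteed δ' = 2×δ - 1
eta_out = 2 * eta - 1 / 2    # guaranteed η' = 2×η - 1/2
print(delta_out, eta_out)    # ≈ 0.98 0.52 (up to floating-point rounding)
```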
Options for BKZ-reduction:
* `-b block_size` : block size, mandatory, between 2 and the number of vectors.
* `-f float_type` : same as for LLL (`-p` is required if `float_type=mpfr`).
* `-p precision` : precision of the floating-point arithmetic with `-f mpfr`.
* `-bkzmaxloops loops` : maximum number of full loop iterations.
* `-bkzmaxtime time` : stops after `time` seconds (up to completion of the current loop iteration).
* `-bkzautoabort` : stops when the average slope of the log ||b_i*||'s does not decrease fast enough. Without any of the last three options, BKZ runs until no block has been updated for a full loop iteration.
* `-s filename.json` : uses the given strategies for preprocessing and pruning parameters (`strategies/default.json` is provided). Experimental.
* `-bkzghbound factor` : multiplies the Gaussian heuristic by `factor` (a float) to set the enumeration radius of the SVP calls.
* `-bkzboundedlll` : restricts the LLL call before considering a block to vector indices within that block.
* `-bkzdumpgso file_name` : dumps the log ||b_i*||'s in the specified file.
Output formats:
* `-of` : prints a new line (if `-a [lll|bkz]`)
* `-of b` : prints the basis (if `-a [lll|bkz]`; default)
* `-of bk` : prints the basis (if `-a [lll|bkz]`; format compatible with Sage)
* `-of c` : prints the closest vector (if `-a cvp`; default)
* `-of s` : prints the shortest vector (if `-a svp`; default)
* `-of t` : prints the status (if `-a [lll|bkz|cvp|svp]`)
* `-of u` : prints the unimodular matrix (if `-a [lll|bkz]`)
* `-of uk` : prints the unimodular matrix (if `-a [lll|bkz]`; format compatible with Sage)
* `-of v` : prints the inverse of u (if `-a lll`)
* `-of vk` : prints the inverse of u (if `-a lll`; format compatible with Sage)

A combination of these options is allowed (e.g., `-of bkut`).
## llldiff ##
`llldiff` compares two bases (b_1,...,b_d) and (c_1,...,c_{d'}): they are considered equal iff d=d' and, for every i, b_i = ±c_i. Concretely, if basis B is in file `B.txt` and basis C is in file `C.txt` (in the fplll format), one may run `cat B.txt C.txt | ./llldiff`.
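The equality test `llldiff` performs can be sketched as follows (an illustrative Python re-implementation; `bases_equal` is a hypothetical helper, not part of fplll):

```python
def bases_equal(B, C):
    """llldiff's notion of equality: same number of vectors and, for every
    index i, b_i = +c_i or b_i = -c_i (the sign may differ per row)."""
    if len(B) != len(C):
        return False
    return all(b == c or b == [-x for x in c] for b, c in zip(B, C))
```

For example, `bases_equal([[1, 2], [3, 4]], [[1, 2], [-3, -4]])` holds, while reordering the rows does not.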
## latsieve ##
`latsieve` performs (tuple) lattice sieving on a matrix (considered as a set of row vectors) given in a file as parameter. You may compile it with `make latsieve`, which generates the executable file `latsieve` in `fplll/`, a wrapper of `fplll/.libs/latsieve`.
The options are:
* `-a nnn` : nnn is the tuple-sieve algorithm to use (default 2, corresponding to GaussSieve)
* `-f filename` : file containing the input matrix
* `-b nnn` : BKZ preprocessing with block size nnn (optional)
* `-t nnn` : targeted square norm for stopping the sieve (optional)
* `-s nnn` : use seed=nnn (optional)
* `-v` : toggles verbose mode
## How to use as a library ##
See [API documentation](https://fplll.github.io/fplll/).
## Multicore support ##
This library does not currently use multiple cores and running multiple threads working on the same object `IntegerMatrix`, `LLLReduction`, `MatGSO` etc. is not supported. Running multiple threads working on *different* objects, however, is supported. That is, there are no global variables and it is safe to e.g. reduce several lattices in parallel in the same process.
# Examples #
1. LLL reduction
```
./latticegen r 10 1000 | ./fplll
```
2. Fileinput for reduction. If the file `matrix` contains
```
[[ 10 11]
[11 12]]
```
then
```
./fplll matrix
```
produces
```
[[0 1 ]
[1 0 ]
]
```
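For this small instance, the result can be reproduced with a plain Lagrange-Gauss (2-dimensional) reduction sketch in Python. This is illustrative only; fplll implements far more general floating-point LLL variants:

```python
def lagrange_gauss(b1, b2):
    """Lagrange-Gauss reduction of a 2-dimensional basis: repeatedly
    subtract the rounded projection of the longer vector onto the shorter
    one, swapping until no further shortening is possible."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    if dot(b1, b1) > dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        m = round(dot(b1, b2) / dot(b1, b1))       # rounded Gram-Schmidt coefficient
        b2 = [x - m * y for x, y in zip(b2, b1)]   # size-reduce b2 against b1
        if dot(b2, b2) >= dot(b1, b1):
            return b1, b2
        b1, b2 = b2, b1

print(lagrange_gauss([10, 11], [11, 12]))  # → ([0, 1], [1, 0])
```

The reduced basis matches fplll's output above (the input basis has determinant -1, so the lattice is all of Z^2).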
3. Random generator
```
./latticegen -randseed 1234 r 10 1000 | ./fplll
./latticegen -randseed time u 10 16 | ./fplll
```
4. Solving SVP
```
./latticegen r 30 3000 | ./fplll -a svp
```
5. Solving CVP
```
echo "[[17 42 4][50 75 108][11 47 33]][100 101 102]" | ./fplll -a cvp
```
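For intuition, a tiny brute-force search shows what CVP asks for on this example. This is an illustrative Python sketch only (`brute_force_cvp` is a hypothetical helper); fplll actually solves CVP with a closest-point enumeration, not exhaustive search:

```python
from itertools import product

# Basis rows and target vector from the example above.
B = [[17, 42, 4], [50, 75, 108], [11, 47, 33]]
t = [100, 101, 102]

def brute_force_cvp(B, t, bound=5):
    """Try every integer coefficient vector x with |x_i| <= bound and keep
    the lattice vector x*B closest to t. Only viable for tiny instances."""
    best, best_d2 = None, None
    for x in product(range(-bound, bound + 1), repeat=len(B)):
        v = [sum(xi * row[j] for xi, row in zip(x, B)) for j in range(len(t))]
        d2 = sum((vi - ti) ** 2 for vi, ti in zip(v, t))
        if best_d2 is None or d2 < best_d2:
            best, best_d2 = v, d2
    return best, best_d2

closest, dist2 = brute_force_cvp(B, t)
```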
# Alternative interfaces #
- [fpylll](https://github.com/malb/fpylll) is a stand-alone Python interface for fplll.
- fplll is included in [Sage](http://sagemath.org); see the documentation for [IntegerMatrix](http://doc.sagemath.org/html/en/reference/matrices/sage/matrix/matrix_integer_dense.html) and [IntegerLattice](http://doc.sagemath.org/html/en/reference/modules/sage/modules/free_module_integer.html).
# Credit #
## Maintainers ##
fplll is currently maintained by:
- Martin Albrecht, <martinralbrecht@googlemail.com>
- Shi Bai, <shi.bai@gmail.com>
## Contributors ##
The following people have contributed to fplll:
- Martin Albrecht
- Shi Bai
- Guillaume Bonnoron
- David Cade
- Léo Ducas
- Joop van de Pol
- Xavier Pujol
- Damien Stehlé
- Marc Stevens
- Gilles Villard
- Michael Walter
Please add yourself here if you make a contribution.
## Acknowledgments ##
- Patrick Pelissier and Paul Zimmermann for `dpe`.
- David H. Bailey for `QD`.
- Sylvain Chevillard, Christoph Lauter and Gilles Villard for the `configure/make/make install` packaging.
- Timothy Abbott, Michael Abshoff, Bill Allombert, John Cannon, Sylvain Chevillard, Julien Clement, Andreas Enge, Jean-Pierre Flori, Laurent Fousse, Guillaume Hanrot, Jens Hermans, Jerry James, Christoph Lauter, Tancrède Lepoint, Andrew Novocin, Willem Jan Palenstijn, Patrick Pelissier, Julien Puydt, Michael Schneider, Thiemo Seufer, Allan Steel, Gilles Villard and Paul Zimmermann for their support and for many suggestions that helped debugging and improving this code.
- [CONTRIBUTING.md](CONTRIBUTING.md) is taken, almost verbatim, from https://github.com/pydanny/djangopackages/blob/master/docs/contributing.rst
- [json.hpp](fplll/io/json.hpp) is taken from https://github.com/nlohmann/json
- This project has been supported by ERC Starting Grant ERC-2013-StG-335086-LATTAC.
# Contributing #
fplll welcomes contributions. See [CONTRIBUTING.md](CONTRIBUTING.md) for details.
# New releases and bug reports #
New releases will be announced on [https://groups.google.com/forum/#!forum/fplll-devel](https://groups.google.com/forum/#!forum/fplll-devel).

Bug reports may be sent to [https://groups.google.com/forum/#!forum/fplll-devel](https://groups.google.com/forum/#!forum/fplll-devel) or via [https://github.com/fplll/fplll/issues](https://github.com/fplll/fplll/issues).
# Bibliography #
<a name="A02">[A02]</a> A. Akhavi. Random lattices, threshold phenomena and efficient reduction algorithms. Theor. Comput. Sci. 287(2): 359-385 (2002)

<a name="Chen13">[Chen13]</a> Y. Chen. Lattice reduction and concrete security of fully homomorphic encryption.

<a name="CN11">[CN11]</a> Y. Chen and P. Q. Nguyen. BKZ 2.0: Better Lattice Security Estimates. ASIACRYPT 2011: 1-20

<a name="GM03">[GM03]</a> D. Goldstein and A. Mayer. On the equidistribution of Hecke points. Forum Mathematicum, 15: 165-189 (2003)

<a name="GN08">[GN08]</a> N. Gama and P. Q. Nguyen. Finding Short Lattice Vectors within Mordell's Inequality. STOC 2008: 207-216

<a name="GNR13">[GNR13]</a> N. Gama, P. Q. Nguyen and O. Regev. Lattice Enumeration Using Extreme Pruning.

<a name="HPS98">[HPS98]</a> J. Hoffstein, J. Pipher and J. H. Silverman. NTRU: A Ring-Based Public Key Cryptosystem. ANTS 1998: 267-288

<a name="K83">[K83]</a> R. Kannan. Improved algorithms for integer programming and related lattice problems. STOC 1983: 99-108

<a name="FP85">[FP85]</a> U. Fincke and M. Pohst. Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Math. Comp., 44(170): 463-471 (1985)

<a name="LLL82">[LLL82]</a> A. K. Lenstra, H. W. Lenstra, Jr. and L. Lovász. Factoring polynomials with rational coefficients. Math. Ann., 261: 515-534 (1982)

<a name="MSV09">[MSV09]</a> I. Morel, D. Stehlé and G. Villard. H-LLL: using Householder inside LLL. ISSAC 2009: 271-278

<a name="MV10">[MV10]</a> D. Micciancio and P. Voulgaris. Faster Exponential Time Algorithms for the Shortest Vector Problem. SODA 2010: 1468-1480

<a name="MW16">[MW16]</a> D. Micciancio and M. Walter. Practical, Predictable Lattice Basis Reduction. EUROCRYPT 2016: 820-849

<a name="MR09">[MR09]</a> D. Micciancio and O. Regev. Lattice-based Cryptography. Chapter of Post-Quantum Cryptography, 147-191 (2009)

<a name="NS09">[NS09]</a> P. Q. Nguyen and D. Stehlé. An LLL Algorithm with Quadratic Complexity. SIAM J. Comput. 39(3): 874-903 (2009)

<a name="S10">[S10]</a> D. Stehlé. Floating-Point LLL: Theoretical and Practical Aspects. The LLL Algorithm, 2010: 179-213

<a name="SE94">[SE94]</a> C. P. Schnorr and M. Euchner. Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Math. Program. 66: 181-199 (1994)
