>
> > I am not clear on what your algorithm is; could you be more explicit?
>
> Perhaps that is because my previous explanation was shoddy.
>
> > Also, I do not see that you have given an argument about the difficulty
> > of breaking it in a reasonable length of time.
>
> I will try to be more explicit on this also.
>
> I should warn you that I have NOT spent my life studying encryption; my
> main expertise is in computational complexity. I would be interested
> in your comments on my algorithm, especially if you are better informed
> about encryption than I am.
>
> As I understand it, a good encryption or random number generation algorithm
> must be based on a problem which is computationally difficult to solve.
> Actually, I want something stronger: it should be computationally
> difficult to even make statistical statements about the solution other
> than obvious zero-knowledge statements.
>
> Many NP-complete problems are easy to approximate, or else it is easy to
> make strong statistical statements about the solution (for example, 3-SAT
> is easy to approximate by simulated annealing, and it is easy to guess
> most of the variables with high probability of correctness).
>
> I choose the following problem as my foundation:
>
> Solve a set of N simultaneous 2nd degree equations in N variables
> in MOD 2. Every variable is a bit which can take on only the values
> 0 and 1.
>
> Here is an example problem for N = 3:
>
> x*y+z=1
> x*z+y*z+z=0
> x+y+z=0
>
> This system has 3 solutions:
>
> x=0, y=1, z=1
> x=1, y=0, z=1
> x=1, y=1, z=0
>
> The general problem is easily shown to be NP-complete (reduction from SAT).
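>
> To make this concrete, here is a rough Python sketch (my illustration,
> not part of the problem statement) that brute-forces the N = 3 example;
> for general N this enumeration takes 2^N evaluations, which is exactly
> the cost I am arguing an attacker must pay:
>
> from itertools import product
>
> def system(x, y, z):
>     # The three MOD 2 equations above: '+' is XOR, '*' is AND.
>     return ((x & y) ^ z,
>             (x & z) ^ (y & z) ^ z,
>             x ^ y ^ z)
>
> solutions = [s for s in product((0, 1), repeat=3)
>              if system(*s) == (1, 0, 0)]
> print(solutions)   # [(0, 1, 1), (1, 0, 1), (1, 1, 0)]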
>
> An NxN system can be regarded as an N-bit to N-bit mapping. For example:
>
> x*y+z = out1
> x*z+y*z+z = out2
> x+y+z = out3
>
> x,y,z are inputs, out[i] are outputs.
>
> The outputs behave erratically as a function of the inputs, and the
> solution (the inputs) behaves erratically as a function of the outputs.
> Approximating the general problem is also NP-complete (the proof is a
> bit more involved).
> It seems that solving most systems requires 2^N time. The standard
> attacks for reducing the exponential growth factor -- simulated annealing
> and branch-and-bound -- are obviously useless here because of the
> erratic behavior. (Perhaps some algebraic attack might work?)
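>
> As a quick illustration of the erratic behavior, the same toy sketch
> can print the full truth table of the 3-bit example; note how
> irregularly the outputs respond to single-bit input changes:
>
> from itertools import product
>
> def system(x, y, z):
>     return ((x & y) ^ z, (x & z) ^ (y & z) ^ z, x ^ y ^ z)
>
> for bits in product((0, 1), repeat=3):
>     print(bits, '->', system(*bits))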
>
> For a random number generator, we set up a machine with N bits of state.
> We produce N+1 2nd-order MOD 2 functions (non-sparse), perhaps using
> true physical randomness to generate the coefficients. We seed the state
> of the machine somehow. The machine iterates as follows:
>
> Fout(state1,state2,....,stateN) = output bit
> F1(state1,state2,....,stateN) = next state1
> F2(state1,state2,....,stateN) = next state2
> ...
> FN(state1,state2,....,stateN) = next stateN
>
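> Here is a toy Python sketch of this machine (my illustration; N is kept
> tiny, and a seeded software RNG stands in for the true physical
> randomness that should generate the coefficients):
>
> import random
>
> def random_quadratic(n, rng):
>     # A random non-sparse 2nd-order MOD 2 function of n state bits:
>     # a constant term, linear terms, and quadratic terms s_j * s_k.
>     const = rng.getrandbits(1)
>     linear = [rng.getrandbits(1) for _ in range(n)]
>     quad = [[rng.getrandbits(1) for _ in range(n)] for _ in range(n)]
>     def f(state):
>         v = const
>         for j in range(n):
>             v ^= linear[j] & state[j]
>             for k in range(j + 1, n):
>                 v ^= quad[j][k] & state[j] & state[k]
>         return v
>     return f
>
> def machine(n, state, rng):
>     fout = random_quadratic(n, rng)                    # Fout
>     fs = [random_quadratic(n, rng) for _ in range(n)]  # F1 .. FN
>     while True:
>         yield fout(state)                  # the output bit
>         state = [f(state) for f in fs]     # the next state
>
> rng = random.Random(12345)   # stand-in for physical randomness
> n = 8
> gen = machine(n, [rng.getrandbits(1) for _ in range(n)], rng)
> print([next(gen) for _ in range(16)])
>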
> The argument that this machine is hard to break is straightforward. Even
> if the adversary knows the state of the machine at time T, and the output
> bit it generated, he cannot solve for the previous state of the machine
> without solving an NxN system, which seems to require 2^N time. In fact,
> the adversary doesn't know the state of the machine at any time; all he
> has is a long (say O(N)) string of output bits. This information is
> strictly less useful for solving for the state at time T-1 than knowing
> the state at the beginning of the string (say at time T). He must solve
> for the state of the machine at some time in order to predict the other
> output bits.
>
> ------------------------------------------------
>
> For this to be a good procedure, we need that the sequence of output
> bits is not only hard to break, but random in a fairly strong sense.
> I cannot see from this that even one bit is close to equally likely
> to be 0 or 1. Also, the functions F1-FN should come close to giving
> a 1-1 mapping. In addition, this does not necessarily make it
> cryptographically strong; it should be that way both forward and
> backward.
>
> I only know of two methods in the literature claimed to have the
> desired properties. The simplest is the Shamir algorithm, which
> produces pseudo-random numbers in [0, n-1]. The procedure is as follows:
>
> Let p, q be primes, n = pq. Let d and e be numbers; one way is to have
> de congruent to 1 mod p-1 and mod q-1, which would make them an encryption-
> decryption pair for the RSA public-key cryptosystem, but this is not
> necessary. It is necessary that d and e be relatively prime to p-1
> and q-1. There are two seeds, x and y, each between 2 and n-2.
>
> The procedure is
>
> x = x^d (mod n);
> y = y^e (mod n);
> output = x + y (mod n)
>
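> In Python, a toy sketch of my own (tiny primes only to show the
> mechanics; real use needs large primes):
>
> from math import gcd
>
> p, q = 23, 47                  # toy primes; must be large in practice
> n = p * q
> d, e = 5, 7                    # relatively prime to p-1 and q-1
> assert gcd(d, (p - 1) * (q - 1)) == 1
> assert gcd(e, (p - 1) * (q - 1)) == 1
>
> x, y = 123, 456                # seeds between 2 and n-2
> for _ in range(5):
>     x = pow(x, d, n)
>     y = pow(y, e, n)
>     print((x + y) % n)         # one pseudo-random number in [0, n-1]
>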
> The Blum-Blum-Shub procedure is simpler in principle, but slower.
> It has n as above, but p and q must be congruent to 3 mod 4. It
> would be best if p and q are safe primes (this holds for the
> previous one also), meaning that (p-1)/2 and (q-1)/2 are also prime.
> The seed is one number between 2 and n-2.
>
> The procedure is
>
> x = x^2 (mod n);
> output the least significant bit of x
>
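> Again a toy Python sketch of my own (note 23 and 47 are both 3 mod 4,
> and (23-1)/2 = 11 and (47-1)/2 = 23 are prime):
>
> p, q = 23, 47              # both congruent to 3 mod 4; toy sizes only
> n = p * q
> x = 123                    # seed between 2 and n-2
> for _ in range(16):
>     x = pow(x, 2, n)
>     print(x & 1, end=' ')  # output the least significant bit of x
> print()
>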
> They produced this procedure because the Shamir procedure did not
> produce random bits. But let z be the output of the Shamir procedure,
> and consider w=z XOR n. All bits of w (or z) after the leading one
> can be used.
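>
> As I read that claim, extracting the usable bits is just masking below
> the leading bit of n (a small sketch of my own):
>
> def usable_bits(z, n):
>     # Keep only the bits of z below the leading bit of n.
>     mask = (1 << (n.bit_length() - 1)) - 1
>     return z & mask
>
> print(bin(usable_bits(0b100011, 0b100101)))  # -> 0b11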
>
> On the problem of complexity, I have looked into the complexity of
> generating non-uniform random variables from uniform bits. It is
> surprisingly low, as I define it. Consider procedures of the following
> type, which I call infinite precision. After a procedure which
> terminates with probability one, some of the bits are output. All
> other bits are found by copying random bits. A necessary and
> sufficient condition that such a procedure exists for a given density,
> even in k dimensions, is that there is a lower semi-continuous version
> of the density. For most common densities, the expected number of bits
> needed to carry out such a procedure (there are lots of such
> procedures) is usually under 10; a toy instance is sketched below.
> If you are interested, send me your snail-mail address.
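>
> To give one toy instance of such an infinite-precision procedure (my
> own example): take the step density f(x) = 1/2 on [0, 1/2) and 3/2 on
> [1/2, 1). A procedure that terminates with probability one decides the
> first bit of the binary expansion (1 with probability 3/4), and every
> later bit is a copied fair coin flip:
>
> import random
>
> def sample_bits(k, rng=random):
>     # First bit is 1 with probability 3/4, decided from at most two
>     # fair bits; P(both flips are 0) = 1/4 gives a first bit of 0.
>     b1 = 0 if (rng.getrandbits(1) == 0 and rng.getrandbits(1) == 0) else 1
>     # All remaining bits are untouched copies of fair random bits.
>     return [b1] + [rng.getrandbits(1) for _ in range(k - 1)]
>
> print(sample_bits(8))   # first 8 bits of one sample from f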
>