"""
This module adds gradient optimization capabilities to the
standard genomes. Basically this means that any problem set up
for a GA is automatically able to be gradient optimized...
For purity sake, the grad and genome
modules have been left totally separate. It might have
been just as easy to derive genomes directly from
grad.grad - and maybe that will happen in the future.
Caveats:
This has only be set up for list_genomes made up of
floating point genes. The tree_genomes just need to
be recoded here translating the pick_numbers functions
from tree_opt.
genomes of discrete variable genes should be able to work also.
"""
import grad
import genome
class list_genome(genome.list_genome,grad.grad):
""" So far, grad_min, and grad_max only
work for float_genes.
Test:
#Test gradient optimization
>>> import ga_gnm, gene
>>> g = gene.float_gene((-1,1))
>>> class simple_genome(ga_gnm.list_genome):
... def performance(self):
... s = 0
... for i in self: s = s+ i
... return s
>>> a = simple_genome(g.replicate(10))
>>> a.initialize()
>>> a.grad_opt(5)
33
>>> a
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
"""
    def grad_params(self):
        return self.get_values()  # calls list_genome get_values()
    def set_grad_params(self, x):
        self.set_values(x)  # calls list_genome set_values()
    # do we really need this?
    def grad_len(self):
        return len(self)
    def grad_min(self):
        # lower bound of each float gene
        return [flt_gene.bounds[0] for flt_gene in self]
    def grad_max(self):
        # upper bound of each float gene
        return [flt_gene.bounds[1] for flt_gene in self]
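The mixin design described in the module docstring can be sketched self-contained. The names below (Grad, ListGenome, SumGenome, and a naive finite-difference grad_opt with a fixed ascent step) are hypothetical stand-ins for illustration only, not the real grad and genome modules:

```python
# Hypothetical sketch: a gradient-optimizer base class that talks to the
# genome only through the grad_params()/set_grad_params()/grad_min()/
# grad_max() hooks, mirroring how grad.grad is mixed into list_genome.

class Grad:
    """Gradient ascent via finite differences; subclasses supply the hooks."""
    def grad_opt(self, iters, step=0.1, eps=1e-6):
        for _ in range(iters):
            x = self.grad_params()
            f0 = self.performance()
            # forward-difference estimate of the gradient
            g = []
            for i in range(len(x)):
                xp = list(x)
                xp[i] += eps
                self.set_grad_params(xp)
                g.append((self.performance() - f0) / eps)
            # take an ascent step, clipped to the genome's bounds
            lo, hi = self.grad_min(), self.grad_max()
            x = [min(max(xi + step * gi, l), h)
                 for xi, gi, l, h in zip(x, g, lo, hi)]
            self.set_grad_params(x)
        return self.performance()

class ListGenome(list):
    """Float-gene genome stand-in: plain values with shared bounds."""
    bounds = (-1.0, 1.0)
    def grad_params(self): return list(self)
    def set_grad_params(self, x): self[:] = x
    def grad_min(self): return [self.bounds[0]] * len(self)
    def grad_max(self): return [self.bounds[1]] * len(self)

class SumGenome(ListGenome, Grad):
    def performance(self):
        return sum(self)

g = SumGenome([0.0] * 4)
g.grad_opt(200)  # each gene climbs to the upper bound 1.0
```

The key point of the design is that Grad never inspects the genome's internals; any class providing the four hooks plus performance() becomes gradient-optimizable, which is why any GA problem picks up the capability for free.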