File: plot_functional_chaos_database.py

package info (click to toggle)
openturns 1.26-4
  • links: PTS, VCS
  • area: main
  • in suites: forky, sid
  • size: 67,708 kB
  • sloc: cpp: 261,605; python: 67,030; ansic: 4,378; javascript: 406; sh: 185; xml: 164; makefile: 101
file content (171 lines) | stat: -rw-r--r-- 5,440 bytes parent folder | download
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
"""
Create a full or sparse polynomial chaos expansion
==================================================
"""

# %%
# In this example we create a global approximation of a model using
# polynomial chaos expansion based on a design of experiments.
# The goal of this example is to show how we can create a full or
# sparse polynomial chaos expansion depending on our needs
# and depending on the number of observations we have.
# In general, we should have more observations than parameters to estimate.
# This is why a sparse polynomial chaos may be interesting:
# by carefully selecting the coefficients we estimate,
# we may reduce overfitting and increase the predictions of the
# metamodel.

# %%
import openturns as ot


# %%
# Define the model
# ~~~~~~~~~~~~~~~~

# %%
# Create the function.
myModel = ot.SymbolicFunction(
    ["x1", "x2", "x3", "x4"], ["1 + x1 * x2 + 2 * x3^2 + x4^4"]
)

# %%
# Create a multivariate distribution.
distribution = ot.JointDistribution(
    [ot.Normal(), ot.Uniform(), ot.Gamma(2.75, 1.0), ot.Beta(2.5, 1.0, -1.0, 2.0)]
)

# %%
# In order to create the PCE, we can specify the distribution of the
# input parameters.
# If not known, statistical inference can be used to select a possible
# candidate, and fitting tests can validate such an hypothesis.
# Please read :doc:`Fit a distribution from an input sample </auto_surrogate_modeling/polynomial_chaos/plot_chaos_build_distribution>`
# for an example of this method.

# %%
# Create a training sample
# ~~~~~~~~~~~~~~~~~~~~~~~~

# %%
# Create a pair of input and output samples.
sampleSize = 250
inputSample = distribution.getSample(sampleSize)
outputSample = myModel(inputSample)

# %%
# Build the orthogonal basis
# ~~~~~~~~~~~~~~~~~~~~~~~~~~

# %%
# In the next cell, we create the univariate orthogonal polynomial basis
# for each marginal.
inputDimension = inputSample.getDimension()
coll = [
    ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
    for i in range(inputDimension)
]
enumerateFunction = ot.LinearEnumerateFunction(inputDimension)
productBasis = ot.OrthogonalProductPolynomialFactory(coll, enumerateFunction)

# %%
# We can achieve the same result using :class:`~openturns.OrthogonalProductPolynomialFactory`.
marginalDistributionCollection = [
    distribution.getMarginal(i) for i in range(inputDimension)
]
multivariateBasis = ot.OrthogonalProductPolynomialFactory(
    marginalDistributionCollection
)
multivariateBasis

# %%
# Create a full PCE
# ~~~~~~~~~~~~~~~~~

# %%
# Create the algorithm.
# We compute the basis size from the total degree.
# The next lines use the :class:`~openturns.LeastSquaresStrategy` class
# with default parameters (the default is the
# :class:`~openturns.PenalizedLeastSquaresAlgorithmFactory` class).
# This creates a full polynomial chaos expansion, i.e.
# we keep all the candidate coefficients produced by the enumeration
# rule.
# In order to create a sparse polynomial chaos expansion, we
# must use the :class:`~openturns.LeastSquaresMetaModelSelectionFactory`
# class instead.
#
totalDegree = 3
candidateBasisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
print("Candidate basis size = ", candidateBasisSize)
adaptiveStrategy = ot.FixedStrategy(productBasis, candidateBasisSize)
projectionStrategy = ot.LeastSquaresStrategy()
algo = ot.FunctionalChaosAlgorithm(
    inputSample, outputSample, distribution, adaptiveStrategy, projectionStrategy
)
algo.run()
result = algo.getResult()
result

# %%
# Get the number of coefficients in the PCE.
selectedBasisSizeFull = result.getIndices().getSize()
print("Selected basis size = ", selectedBasisSizeFull)

# %%
# We see that the number of coefficients in the selected basis is
# equal to the number of coefficients in the candidate basis.
# This is, indeed, a *full* PCE.

# %%
# Use the PCE
# ~~~~~~~~~~~

# %%
# Get the metamodel function.
metamodel = result.getMetaModel()

# %%
# In order to evaluate the metamodel on a single point, we just
# use it as any other :class:`openturns.Function`.
xPoint = distribution.getMean()
yPoint = metamodel(xPoint)
print("Value at ", xPoint, " is ", yPoint)

# %%
# Based on these results, we may want to validate our metamodel.
# More details on this topic are presented in
# :doc:`Validate a polynomial chaos </auto_surrogate_modeling/polynomial_chaos/plot_chaos_draw_validation>`.

# %%
# Create a sparse PCE
# ~~~~~~~~~~~~~~~~~~~

# %%
# In order to create a sparse polynomial chaos expansion, we
# use the :class:`~openturns.LeastSquaresMetaModelSelectionFactory`
# class instead.
#
totalDegree = 6
candidateBasisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
print("Candidate basis size = ", candidateBasisSize)
adaptiveStrategy = ot.FixedStrategy(productBasis, candidateBasisSize)
selectionAlgorithm = ot.LeastSquaresMetaModelSelectionFactory()
projectionStrategy = ot.LeastSquaresStrategy(selectionAlgorithm)
algo = ot.FunctionalChaosAlgorithm(
    inputSample, outputSample, distribution, adaptiveStrategy, projectionStrategy
)
algo.run()
result = algo.getResult()
result

# %%
# Get the number of coefficients in the PCE.
selectedBasisSizeSparse = result.getIndices().getSize()
print("Selected basis size = ", selectedBasisSizeSparse)

# %%
# We see that the number of selected coefficients is lower than
# the number of candidate coefficients.
# This may reduce overfitting and can produce a PCE with more
# accurate predictions.