File: plot_chaos_ishigami.py

"""
Create a polynomial chaos for the Ishigami function: a quick start guide to polynomial chaos
============================================================================================
"""

# %%
#
# In this example, we create a polynomial chaos for the
# :ref:`Ishigami function<use-case-ishigami>`.
# We create a sparse polynomial with maximum total degree equal to 8
# using the :class:`~openturns.FunctionalChaosAlgorithm` class.
#
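# For reference, the Ishigami function provided by the use case is
#
# .. math::
#     g(X_1, X_2, X_3) = \sin(X_1) + a \sin^2(X_2) + b X_3^4 \sin(X_1),
#
# with :math:`a = 7`, :math:`b = 0.1` and independent inputs
# :math:`X_i \sim \mathcal{U}(-\pi, \pi)`.
#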

# %%
# Define the model
# ----------------

# %%
from openturns.usecases import ishigami_function
import openturns as ot
import openturns.viewer as otv
import math

ot.Log.Show(ot.Log.NONE)
ot.RandomGenerator.SetSeed(0)

# %%
# We load the Ishigami model:
im = ishigami_function.IshigamiModel()

# %%
# The `IshigamiModel` data class contains the input distribution :math:`\vect{X}=(X_1, X_2, X_3)` in `im.inputDistribution` and the Ishigami function in `im.model`.
# We also have access to the input variable names:
input_names = im.inputDistribution.getDescription()
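print(input_names)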


# %%
# Draw the function
# -----------------

# %%
# Create a training sample

# %%
sampleSize = 1000
inputSample = im.inputDistribution.getSample(sampleSize)
outputSample = im.model(inputSample)


# %%
# Display relationships between the output and the inputs

# %%
grid = ot.VisualTest.DrawPairsXY(inputSample, outputSample)
view = otv.View(grid, figure_kw={"figsize": (12.0, 4.0)})

# %%
graph = ot.HistogramFactory().build(outputSample).drawPDF()
view = otv.View(graph)

# %%
# We see that the distribution of the output has two modes.

# %%
# Create the polynomial chaos model
# ---------------------------------

# %%
# Create a training sample

# %%
sampleSize = 100
inputTrain = im.inputDistribution.getSample(sampleSize)
outputTrain = im.model(inputTrain)

# %%
# Create the chaos.
#
# We could use only the input and output training samples: in this case, the input distribution is inferred by selecting the distribution that best fits the input sample.

# %%
chaosalgo = ot.FunctionalChaosAlgorithm(inputTrain, outputTrain)

# %%
# Since the input distribution is known in our particular case, we instead create
# the multivariate basis from the distribution.
# We define the orthogonal basis used to expand the function.
# We see that each input has the uniform distribution, which corresponds to the
# Legendre polynomials.

# %%
multivariateBasis = ot.OrthogonalProductPolynomialFactory([im.X1, im.X2, im.X3])
multivariateBasis
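
# %%
# As a quick check of the statement above, we can print the marginals of the
# input distribution: each one should be uniform on :math:`[-\pi, \pi]`.

# %%
for i in range(im.inputDistribution.getDimension()):
    print(im.inputDistribution.getMarginal(i))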

# %%
# Then we create the sparse polynomial chaos expansion using least squares
# regression and the LARS model selection method.

# %%
selectionAlgorithm = ot.LeastSquaresMetaModelSelectionFactory()
projectionStrategy = ot.LeastSquaresStrategy(selectionAlgorithm)
totalDegree = 8
enumerateFunction = multivariateBasis.getEnumerateFunction()
basisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
adaptiveStrategy = ot.FixedStrategy(multivariateBasis, basisSize)
chaosAlgo = ot.FunctionalChaosAlgorithm(
    inputTrain, outputTrain, im.inputDistribution, adaptiveStrategy, projectionStrategy
)

# %%
# The coefficients of the polynomial expansion can then be estimated
# using the :meth:`~openturns.FunctionalChaosAlgorithm.run` method.

# %%
chaosAlgo.run()

# %%
# The :meth:`~openturns.FunctionalChaosAlgorithm.getResult` method returns the
# result.
chaosResult = chaosAlgo.getResult()

# %%
# The polynomial chaos result provides a pretty-print.

# %%
chaosResult

# %%
# Get the metamodel.

# %%
metamodel = chaosResult.getMetaModel()

# %%
# In order to validate the metamodel, we generate a test sample.

# %%
n_valid = 1000
inputTest = im.inputDistribution.getSample(n_valid)
outputTest = im.model(inputTest)
metamodelPredictions = metamodel(inputTest)
val = ot.MetaModelValidation(outputTest, metamodelPredictions)
r2Score = val.computeR2Score()[0]
r2Score

# %%
# The :math:`R^2` is very close to 1: the metamodel is accurate.
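
# %%
# As a complementary (informal) check, we can compute the mean squared error of
# the metamodel on the same test sample.

# %%
squaredErrors = [
    (outputTest[i, 0] - metamodelPredictions[i, 0]) ** 2 for i in range(n_valid)
]
print("MSE = %.3e" % (sum(squaredErrors) / n_valid))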

# %%
graph = val.drawValidation()
graph.setTitle("R2=%.2f%%" % (r2Score * 100))
view = otv.View(graph)

# %%
# The metamodel has good predictive accuracy, since the points lie close to the first diagonal.

# %%
# Compute mean and variance
# -------------------------

# %%
# The mean and variance of a polynomial chaos expansion are computed
# using the coefficients of the expansion.
# This can be done using :class:`~openturns.FunctionalChaosRandomVector`.

# %%
randomVector = ot.FunctionalChaosRandomVector(chaosResult)
mean = randomVector.getMean()
print("Mean=", mean)
covarianceMatrix = randomVector.getCovariance()
print("Covariance=", covarianceMatrix)
outputDimension = outputTrain.getDimension()
stdDev = ot.Point(outputDimension)
for i in range(outputDimension):
    stdDev[i] = math.sqrt(covarianceMatrix[i, i])
print("Standard deviation=", stdDev)

# %%
# Compute and print Sobol' indices
# --------------------------------

# %%
# By default, printing the object displays the Sobol' indices and the
# multi-indices, ordered by decreasing part of variance.
# Multi-indices whose part of variance is below a threshold are not printed.
# This threshold can be customized using the
# `FunctionalChaosSobolIndices-VariancePartThreshold` key of the
# :class:`~openturns.ResourceMap`.

# %%
chaosSI = ot.FunctionalChaosSobolIndices(chaosResult)
chaosSI
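
# %%
# For illustration, this threshold (a scalar value) can be changed through the
# :class:`~openturns.ResourceMap` before printing the object again.
# The value :math:`10^{-3}` below is arbitrary (a lower threshold displays more
# multi-indices); note that the setting is global.

# %%
ot.ResourceMap.SetAsScalar(
    "FunctionalChaosSobolIndices-VariancePartThreshold", 1.0e-3
)
print(chaosSI)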

# %%
# We notice that a coefficient whose multi-index has a marginal degree equal to 6 has a significant impact on the output variance.
# Hence, we cannot obtain a satisfactory polynomial chaos expansion with a total degree less than 6.

# %%
# Draw Sobol' indices.

# %%
dim_input = im.inputDistribution.getDimension()
first_order = [chaosSI.getSobolIndex(i) for i in range(dim_input)]
total_order = [chaosSI.getSobolTotalIndex(i) for i in range(dim_input)]
input_names = im.model.getInputDescription()
graph = ot.SobolIndicesAlgorithm.DrawSobolIndices(input_names, first_order, total_order)
view = otv.View(graph)


# %%
# The variable which has the largest impact on the output, taking interactions
# into account, is :math:`X_1`.
# We see that :math:`X_1` interacts with other variables, since its first-order
# index is smaller than its total-order index.
# The first-order index of :math:`X_3` is close to zero, so :math:`X_3` alone has
# almost no influence on the output: since its total-order index is not zero, it
# acts on the output only through its interactions.
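
# %%
# The numerical values behind the plot, printed here to ease the comparison with
# the exact values in the next section:

# %%
for i in range(dim_input):
    print(
        "%s: first order = %.4f, total order = %.4f"
        % (input_names[i], first_order[i], total_order[i])
    )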

# %%
# Computing the accuracy
# ----------------------

# %%
# The interesting point with the Ishigami function is that the exact Sobol' indices are known.
# We can use that property in order to compute the absolute error on the Sobol' indices for the polynomial chaos.

# %%
# To make the comparisons simpler, we gather the results into a list.

# %%
S_exact = [im.S1, im.S2, im.S3]
ST_exact = [im.ST1, im.ST2, im.ST3]

# %%
# Then we perform a loop over the input dimension and compute the absolute error on the Sobol' indices.

# %%
for i in range(im.dim):
    absoluteErrorS = abs(first_order[i] - S_exact[i])
    absoluteErrorST = abs(total_order[i] - ST_exact[i])
    print(
        "X%d, Abs.Err. on S=%.1e, Abs.Err. on ST=%.1e"
        % (i + 1, absoluteErrorS, absoluteErrorST)
    )

# %%
# We see that the indices are estimated with good accuracy, even though we used only 100 function evaluations.
# This shows the good performance of the polynomial chaos expansion in this case.

# %%
# We can compute the part of the variance of the output explained by each multi-index.

# %%
partOfVariance = chaosSI.getPartOfVariance()
chaosResult = chaosSI.getFunctionalChaosResult()
coefficients = chaosResult.getCoefficients()
orthogonalBasis = chaosResult.getOrthogonalBasis()
enumerateFunction = orthogonalBasis.getEnumerateFunction()
indices = chaosResult.getIndices()
basisSize = indices.getSize()
print("Index, global index, multi-index, coefficient, part of variance")
for i in range(basisSize):
    globalIndex = indices[i]
    multiIndex = enumerateFunction(globalIndex)
    if partOfVariance[i] > 1.0e-3:
        print(
            "%d, %d, %s, %.4f, %.4f"
            % (i, globalIndex, multiIndex, coefficients[i, 0], partOfVariance[i])
        )
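
# %%
# As a quick consistency check, the parts of variance of all retained
# multi-indices should sum to approximately one (the constant term
# contributes zero).

# %%
print("Sum of parts of variance = %.4f" % sum(partOfVariance))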

# %%
view.show()