File: lhs_simulation.rst

.. _lhs_simulation:

Latin Hypercube Simulation
--------------------------

| Let us denote the failure domain by
  :math:`\cD_f = \{\ux \in \Rset^{\inputDim} \mid \model(\ux) \leq 0\}`.
  The goal is to estimate the following probability:

  .. math::

     \begin{aligned}
         P_f  & = \int_{\cD_f} f_{\uX}(\ux)d\ux\\
         & = \int_{\Rset^{\inputDim}} \mathbf{1}_{\{\model(\ux) \leq 0 \}}f_{\uX}(\ux)d\ux\\
         & = \Prob {\{\space \model(\uX) \leq 0 \}}
       \end{aligned}

| LHS, or Latin Hypercube Sampling, is a sampling method that covers the
  domain of variation of the input variables more evenly than crude random
  sampling, thanks to a stratified sampling strategy. This method is
  applicable when the input variables are independent. The sampling
  procedure divides the range of each variable into several intervals of
  equal probability and proceeds as follows (a short sketch in code is
  given after the list):

-  **Step 1**  The range of each input variable is stratified into cells of
   equal probability,

-  **Step 2**  A cell is chosen uniformly among all the available cells,

-  **Step 3**  The random number is obtained by inverting the Cumulative
   Distribution Function locally in the chosen cell,

-  **Step 4**  All the cells sharing a stratum with the chosen cell are
   removed from the list of available cells.
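
The following minimal sketch illustrates this stratified sampling with
:class:`~openturns.LHSExperiment`; the distribution and the design size are
purely illustrative:

.. code-block:: python

    import openturns as ot

    # LHS requires independent input variables
    distribution = ot.ComposedDistribution([ot.Normal(0.0, 1.0), ot.Uniform(0.0, 1.0)])

    # Latin Hypercube design of size 8: each marginal range is split into
    # 8 equiprobable strata and each stratum is used exactly once per variable
    experiment = ot.LHSExperiment(distribution, 8)
    sample = experiment.generate()
    print(sample)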

| The estimator of the probability of failure with LHS is given by:

  .. math::

    \hat{P}_{f,LHS}^\sampleSize = \frac{1}{\sampleSize}\sum_{i=1}^\sampleSize \mathbf{1}_{\{\model(\uX^i) \leq 0 \}}

  where the sample :math:`\{ \uX^i, i=1 \hdots \sampleSize \}` is obtained as
  described previously.
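
In practice, this estimator can be computed with
:class:`~openturns.ProbabilitySimulationAlgorithm` driven by an
:class:`~openturns.LHSExperiment`. The sketch below is only illustrative:
the limit-state function, the input distribution and the simulation budget
are arbitrary choices.

.. code-block:: python

    import openturns as ot

    # Hypothetical limit-state function g: failure occurs when g(X) <= 0
    g = ot.SymbolicFunction(["x1", "x2"], ["x1 + x2 - 3.0"])
    distribution = ot.ComposedDistribution([ot.Normal(1.0, 1.0), ot.Normal(1.0, 1.0)])

    X = ot.RandomVector(distribution)
    Y = ot.CompositeRandomVector(g, X)
    event = ot.ThresholdEvent(Y, ot.Less(), 0.0)

    # Probability estimation using an LHS design
    algo = ot.ProbabilitySimulationAlgorithm(event, ot.LHSExperiment())
    algo.setMaximumOuterSampling(10000)
    algo.setMaximumCoefficientOfVariation(0.05)
    algo.run()
    result = algo.getResult()
    print("P_f estimate:", result.getProbabilityEstimate())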

| One can show that:

  .. math::

    \Var{\hat{P}_{f,LHS}^\sampleSize} \leq \frac{\sampleSize}{\sampleSize-1} \, \Var{\hat{P}_{f,MC}^\sampleSize}

  where:

-  :math:`\Var {\hat{P}_{f,LHS}^\sampleSize}` is the variance of the estimator of
   the probability of exceeding a threshold computed by the LHS
   technique,

-  :math:`\Var {\hat{P}_{f,MC}^\sampleSize}` is the variance of the estimator of
   the probability of exceeding a threshold computed by a crude Monte
   Carlo method.
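
The inequality above can be checked empirically by repeating the estimation
many times with an LHS design and with a crude Monte Carlo design of the same
size and comparing the resulting estimator variances. This is only a hedged
illustration; the model, the distribution, the design size and the number of
repetitions are arbitrary.

.. code-block:: python

    import openturns as ot

    g = ot.SymbolicFunction(["x1", "x2"], ["x1 + x2 - 3.0"])
    distribution = ot.ComposedDistribution([ot.Normal(1.0, 1.0), ot.Normal(1.0, 1.0)])
    size, repetitions = 100, 500

    estimates = {"LHS": [], "MC": []}
    for _ in range(repetitions):
        x_lhs = ot.LHSExperiment(distribution, size).generate()
        x_mc = distribution.getSample(size)
        for key, x in (("LHS", x_lhs), ("MC", x_mc)):
            y = g(x)
            # Empirical failure probability for this design
            estimates[key].append(sum(1.0 for v in y if v[0] <= 0.0) / size)

    for key, values in estimates.items():
        sample = ot.Sample([[v] for v in values])
        print(key, "estimator variance:", sample.computeVariance()[0])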

| With the notations

  .. math::

     \begin{aligned}
         \mu_\sampleSize &= \frac{1}{\sampleSize}\sum_{i=1}^\sampleSize \mathbf{1}_{\{\model(\ux^i) \leq 0 \}}\\
         \sigma_\sampleSize^2 &= \frac{1}{\sampleSize}\sum_{i=1}^\sampleSize (\mathbf{1}_{\{\model(\ux^i) \leq 0 \}} - \mu_\sampleSize)^2
       \end{aligned}

the asymptotic confidence interval at level :math:`1-\alpha` associated
with the estimator :math:`\hat{P}_{f,LHS}^\sampleSize` is

.. math::

    \left[ \mu_\sampleSize - \frac{q_{1-\alpha / 2} \, \sigma_\sampleSize}{\sqrt{\sampleSize}} \, ; \, \mu_\sampleSize + \frac{q_{1-\alpha / 2} \, \sigma_\sampleSize}{\sqrt{\sampleSize}} \right]

where :math:`q_{1-\alpha /2}` is the quantile of order :math:`1-\alpha / 2` of
the standard Gaussian distribution :math:`\cN(0,1)`.

This method gives an unbiased estimate of :math:`P_f` (recall that all input
variables must be independent).
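
Continuing the estimation sketch above (same hypothetical ``result`` object),
the asymptotic confidence interval can be retrieved as follows:

.. code-block:: python

    # Asymptotic 95% confidence interval around the probability estimate
    pf = result.getProbabilityEstimate()
    length = result.getConfidenceLength(0.95)  # total length of the interval
    print(f"95% CI: [{pf - 0.5 * length:.3e}, {pf + 0.5 * length:.3e}]")

    # Quantile q_{1 - alpha/2} of the standard Gaussian distribution, alpha = 0.05
    q = ot.Normal().computeQuantile(0.975)[0]
    print("q_{0.975} =", q)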

This method is derived from a more general method called 'Stratified
Sampling'.


.. topic:: API:

    - See :class:`~openturns.LHSExperiment`


.. topic:: Examples:

    - See :doc:`/auto_reliability_sensitivity/reliability/plot_axial_stressed_beam`


.. topic:: References:

    - McKay M. D., Beckman R. J., Conover W. J., *A comparison of three methods for selecting values of input variables in the analysis of output from a computer code*, Technometrics, 21 (2), 1979
    - Helton J. C., Davis F. J., *Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems*, Sandia report SAND2001-0417, 2002
    - Santner T. J., Williams B. J., Notz W. I., *The Design and Analysis of Computer Experiments*, Springer-Verlag, New York, 2003
    - Owen A. B., *A Central Limit Theorem for Latin Hypercube Sampling*, Journal of the Royal Statistical Society, Series B (Methodological), 54 (2), pp. 541-551, 1992
    - Stein M., *Large Sample Properties of Simulations Using Latin Hypercube Sampling*, Technometrics, 29 (2), pp. 143-151, 1987