// -*- C++ -*-
/**
* @brief The test file of GaussianProcessRandomVector class
*
* Copyright 2005-2025 Airbus-EDF-IMACS-ONERA-Phimeca
*
* This library is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this library. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include "openturns/OT.hxx"
#include "openturns/OTtestcode.hxx"
using namespace OT;
using namespace OT::Test;
int main(int, char *[])
{
  TESTPREAMBLE;
  OStream fullprint(std::cout);
  try
  {
    PlatformInfo::SetNumericalPrecision(2);
    // Learning data
    Point levels = {8, 5};
    // Define the Box design of experiments
    Box box(levels);
    // Get the input sample
    Sample inputSample(box.generate());
    // Scale each direction
    inputSample *= 10;
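    // Note: the Box design is a regular grid with levels[i] interior points per axis
    // plus the two bounds, i.e. (8 + 2) x (5 + 2) = 70 points in [0, 1]^2 before the
    // scaling to [0, 10]^2 above.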
    // Define the model
    Description inputDescription = {"x", "y"};
    Description formula = {"cos(0.5*x) + sin(y)"};
    const SymbolicFunction model(inputDescription, formula);
    // Build the output sample
    const Sample outputSample(model(inputSample));
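    // The output sample is 1-dimensional: one evaluation of the symbolic model per design point.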
    // 2) Definition of the squared exponential covariance model
    Point scale = {5.33532, 2.61534};
    Point amplitude = {1.61536};
    SquaredExponential covarianceModel(scale, amplitude);
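    // The scale point sets one correlation length per input dimension, while the
    // amplitude sets the standard deviation of the (1-dimensional) output process.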
    // 3) Basis definition
    Basis basis(ConstantBasisFactory(2).build());
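    // ConstantBasisFactory(2) builds a constant trend basis over the 2-dimensional input space.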
    // Gaussian Process fitter
    GaussianProcessFitter algo(inputSample, outputSample, covarianceModel, basis);
    // Keep the covariance parameters fixed at the values given above (no likelihood optimization)
    algo.setOptimizeParameters(false);
    algo.run();
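    // With the optimization disabled, the fit essentially reduces to estimating the
    // trend coefficients by generalized least squares for the fixed covariance parameters.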
    // Regression based on the fitter result
    GaussianProcessRegression regression(algo.getResult());
    regression.run();
    // Get the result
    GaussianProcessRegressionResult result(regression.getResult());
    // Get the meta model
    Function metaModel(result.getMetaModel());
    // Interpolation error: the meta model must reproduce the learning outputs
    assert_almost_equal(outputSample, metaModel(inputSample), 3.0e-5, 3.0e-5);
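    // (Without a nugget factor the conditioned Gaussian process interpolates the
    // learning points exactly, up to numerical precision.)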
    // The conditional variance is 0 at the learning points
    GaussianProcessConditionalCovariance gpcc(result);
    const CovarianceMatrix var(gpcc.getConditionalCovariance(inputSample));
    // assert_almost_equal cannot be applied to matrices, so flatten the matrix into a Point
    Point covariancePoint(*var.getImplementation());
    assert_almost_equal(covariancePoint, Point(covariancePoint.getSize()), 1e-6, 1e-6);
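    // A similar sanity check could compare the conditional mean with the learning outputs,
    // assuming the companion getConditionalMean accessor of this experimental API:
    //   assert_almost_equal(gpcc.getConditionalMean(inputSample), outputSample, 3.0e-5, 3.0e-5);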
    // Random vector evaluation at a single validation point drawn uniformly in the scaled domain
    Sample unifRealization(Uniform(0.0, 10.0).getSample(2));
    Point validationPoint(unifRealization.getImplementation()->getData());
    GaussianProcessRandomVector rvector(result, validationPoint);
    // Realization of the random vector
    Point realization(rvector.getRealization());
    std::cout << "Realization of the GPRV=" << realization << std::endl;
    // Get a sample of size 10
    Sample realizations(rvector.getSample(10));
    std::cout << "Sample of realizations of the GPRV=" << realizations << std::endl;
  }
  catch (TestFailed & ex)
  {
    std::cerr << ex << std::endl;
    return ExitCode::Error;
  }
  return ExitCode::Success;
}