// Copyright (C) 2025 EDF
// All Rights Reserved
// This code is published under the GNU Lesser General Public License (GNU LGPL)
#include <memory>
#include "geners/vectorIO.hh"
#include "geners/Record.hh"
#include <boost/lexical_cast.hpp>
#ifdef USE_MPI
#include "boost/mpi.hpp"
#include "StOpt/core/parallelism/all_gatherv.hpp"
#endif
#include <Eigen/Dense>
#ifdef _OPENMP
#include <omp.h>
#include "StOpt/core/utils/OpenmpException.h"
#endif
#include "StOpt/dp/OptimizerDPCutGridAdaptBase.h"
#include "StOpt/dp/TransitionStepRegressionDPCutGridAdapt.h"
#include "StOpt/regression/ContinuationCutsGridAdaptNonConcave.h"
#include "StOpt/regression/ContinuationCutsGridAdaptNonConcaveGeners.h"
using namespace Eigen;
using namespace StOpt;
using namespace std;
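// Constructor: store the current and previous adaptive grids and the optimizer describing
// the transition problem (and, when compiled with MPI, the communicator).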
TransitionStepRegressionDPCutGridAdapt::TransitionStepRegressionDPCutGridAdapt(shared_ptr<GridAdaptBase> &p_pGridCurrent,
        const shared_ptr<GridAdaptBase> &p_pGridPrevious,
        const shared_ptr<OptimizerDPCutGridAdaptBase> &p_pOptimize
#ifdef USE_MPI
        , const boost::mpi::communicator &p_world
#endif
                                                                              ) :
    m_pGridCurrent(p_pGridCurrent), m_pGridPrevious(p_pGridPrevious), m_pOptimize(p_pOptimize)
#ifdef USE_MPI
    , m_world(p_world)
#endif
{
}
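// One backward time step: for every stock point of the current grid, optimize the transition
// and store, for each simulation, the (dimGrid + 1) cut coefficients returned by the optimizer
// (typically the Bellman value and its sensitivities with respect to each stock component).
// The grid is then refined adaptively and the newly created points are treated in the same way.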
ArrayXXd TransitionStepRegressionDPCutGridAdapt::oneStep(const ArrayXXd &p_phiIn, const shared_ptr<BaseRegression> &p_condExp)
{
    ArrayXXd phiOut;
    int dimGrid = m_pGridCurrent->getDim();
    int nbPointsCur = m_pGridCurrent->getNbPoints();
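    // With MPI the stock points are split as evenly as possible between the processes: the first
    // nRestPointCur ranks get one extra point, and each process only optimizes its own chunk
    // [iFirstPointCur, iLastPointCur).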
#ifdef USE_MPI
    int rank = m_world.rank();
    int nbProc = m_world.size();
    int npointPProcCur = (int)(nbPointsCur / nbProc);
    int nRestPointCur = nbPointsCur % nbProc;
    int iFirstPointCur = rank * npointPProcCur + (rank < nRestPointCur ? rank : nRestPointCur);
    int iLastPointCur = iFirstPointCur + npointPProcCur + (rank < nRestPointCur ? 1 : 0);
    // allocate for the local part of the solution
    ArrayXXd phiOutLoc(p_condExp->getNbSimul() * (dimGrid + 1), iLastPointCur - iFirstPointCur);
#else
    int iFirstPointCur = 0;
    int iLastPointCur = m_pGridCurrent->getNbPoints();
#endif
    // allocate for the full solution
    phiOut = ArrayXXd(p_condExp->getNbSimul() * (dimGrid + 1), m_pGridCurrent->getNbPoints());
    // create continuation values
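    // (regressed cut coefficients built on the previous grid from p_phiIn and the conditional
    // expectation operator p_condExp)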
    ContinuationCutsGridAdaptNonConcave contVal(m_pGridPrevious, p_condExp, p_phiIn);
    // dynamic scheduling with chunk size 1
    int istock;
#ifdef _OPENMP
    #pragma omp parallel for schedule(dynamic,1) private(istock)
#endif
    for (istock = iFirstPointCur; istock < iLastPointCur; ++istock)
    {
        ArrayXd pointCoord = m_pGridCurrent->getCoord(istock);
        // optimize the current point -> get back cuts per simulation and stock point
        ArrayXd solution = m_pOptimize->stepOptimize(pointCoord, contVal);
#ifdef USE_MPI
        phiOutLoc.col(istock - iFirstPointCur) = solution;
#else
        // copy the solution
        phiOut.col(istock) = solution;
#endif
    }
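    // with MPI, gather the columns computed by each process into the full phiOut array
    // (Eigen arrays are column-major, so each process's block of columns is contiguous in memory)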
#ifdef USE_MPI
    boost::mpi::all_gatherv<double>(m_world, phiOutLoc.data(), phiOutLoc.size(), phiOut.data());
#endif
    // refinement
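    // as long as the optimizer asks for refinement, the new points appended at the end of the
    // current grid are optimized; the points already treated are kept unchanged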
    while (m_pOptimize->refineGrid(m_pGridCurrent, phiOut))
    {
#ifdef USE_MPI
        // split the additional points between the processes
        int nbPointsAdd = m_pGridCurrent->getNbPoints() - nbPointsCur;
        int npointPProcCurAdd = (int)(nbPointsAdd / nbProc);
        int nRestPointCurAdd = nbPointsAdd % nbProc;
        int iFirstPointCurAdd = nbPointsCur + rank * npointPProcCurAdd + (rank < nRestPointCurAdd ? rank : nRestPointCurAdd);
        int iLastPointCurAdd = iFirstPointCurAdd + npointPProcCurAdd + (rank < nRestPointCurAdd ? 1 : 0);
        // allocate for the local part of the solution on the additional points
        ArrayXXd phiOutLocAdd(p_condExp->getNbSimul() * (dimGrid + 1), iLastPointCurAdd - iFirstPointCurAdd);
#else
        int iFirstPointCurAdd = nbPointsCur;
        int iLastPointCurAdd = m_pGridCurrent->getNbPoints();
#endif
        // extend the solution array to the new grid size (columns already computed are preserved)
        phiOut.conservativeResize(p_condExp->getNbSimul() * (dimGrid + 1), m_pGridCurrent->getNbPoints());
        // dynamic scheduling with chunk size 1
#ifdef _OPENMP
        #pragma omp parallel for schedule(dynamic,1) private(istock)
#endif
        for (istock = iFirstPointCurAdd; istock < iLastPointCurAdd; ++istock)
        {
            ArrayXd pointCoord = m_pGridCurrent->getCoord(istock);
            // optimize the current point -> get back cuts per simulation and stock point
            ArrayXd solution = m_pOptimize->stepOptimize(pointCoord, contVal);
#ifdef USE_MPI
            phiOutLocAdd.col(istock - iFirstPointCurAdd) = solution;
#else
            // copy the solution
            phiOut.col(istock) = solution;
#endif
            // m_pOptimize->setRefinementToFalse();
        }
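        // with MPI, gather the cuts computed for the new points; the gathered data is written
        // starting at column nbPointsCur of phiOut (the first newly added point)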
#ifdef USE_MPI
        boost::mpi::all_gatherv<double>(m_world, phiOutLocAdd.data(), phiOutLocAdd.size(), phiOut.col(nbPointsCur).data());
#endif
        // update number of points
        nbPointsCur = m_pGridCurrent->getNbPoints();
    }
    return phiOut;
}
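// Dump the continuation cuts built on the previous grid (conditional expectation of the cuts
// p_phiIn) into the binary archive; only the master process writes when MPI is used.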
void TransitionStepRegressionDPCutGridAdapt::dumpContinuationCutsValues(std::shared_ptr<gs::BinaryFileArchive> p_ar, const string &p_name, const int &p_iStep, const ArrayXXd &p_phiIn, const shared_ptr<BaseRegression> &p_condExp) const
{
#ifdef USE_MPI
    if (m_world.rank() == 0)
    {
#endif
        ContinuationCutsGridAdaptNonConcave contVal(m_pGridPrevious, p_condExp, p_phiIn);
        string stepString = boost::lexical_cast<string>(p_iStep);
        *p_ar << gs::Record(contVal, (p_name + "Values").c_str(), stepString.c_str());
        p_ar->flush(); // necessary for the Python mapping
#ifdef USE_MPI
    }
#endif
}
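// Dump the Bellman cuts built on the current grid into the binary archive; only the master
// process writes when MPI is used.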
void TransitionStepRegressionDPCutGridAdapt::dumpBellmanCutsValues(std::shared_ptr<gs::BinaryFileArchive> p_ar, const string &p_name, const int &p_iStep, const ArrayXXd &p_phiIn, const shared_ptr<BaseRegression> &p_condExp) const
{
#ifdef USE_MPI
    if (m_world.rank() == 0)
    {
#endif
        ContinuationCutsGridAdaptNonConcave contVal(m_pGridCurrent, p_condExp, p_phiIn);
        string stepString = boost::lexical_cast<string>(p_iStep);
        *p_ar << gs::Record(contVal, (p_name + "Values").c_str(), stepString.c_str());
        p_ar->flush(); // necessary for the Python mapping
#ifdef USE_MPI
    }
#endif
}