Paper Title
General multi-fidelity surrogate models: Framework and active learning strategies for efficient rare event simulation
Paper Authors
Paper Abstract
Estimating the probability of failure for complex real-world systems using high-fidelity computational models is often prohibitively expensive, especially when the probability is small. Exploiting low-fidelity models can make this process more feasible, but merging information from multiple low- and high-fidelity models poses several challenges. This paper presents a robust multi-fidelity surrogate modeling strategy in which the surrogate is assembled through active learning with on-the-fly model adequacy assessment, set within a subset simulation framework for efficient reliability analysis. The multi-fidelity surrogate is built by first applying a Gaussian process correction to each low-fidelity model and assigning each corrected model a probability based on its local predictive accuracy and cost. Three strategies, based on model averaging and deterministic/stochastic model selection, are proposed to fuse these individual surrogates into an overall surrogate model; the strategies also dictate which model evaluations are necessary. No assumptions are made about the relationships between the low-fidelity models, while the high-fidelity model is assumed to be the most accurate and most computationally expensive. Through two analytical and two numerical case studies, including one evaluating the failure probability of tristructural isotropic (TRISO) coated nuclear fuel particles, the algorithm is shown to be highly accurate while drastically reducing the number of high-fidelity model calls (and hence the computational cost).
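The abstract's core construction (a Gaussian process correction per low-fidelity model, model probabilities driven by local accuracy and cost, and fusion by averaging or selection) can be illustrated with a minimal sketch. The code below is a hypothetical Python illustration, not the authors' implementation: the functions build_corrected_surrogate and model_probabilities, the exponential weighting with parameter beta, and the toy models and costs are assumptions introduced here for clarity; the paper's adequacy assessment and coupling with subset simulation are not reproduced.

```python
# Illustrative sketch only (not the authors' code): GP correction of each
# low-fidelity model, accuracy/cost-based model probabilities, and the three
# fusion options named in the abstract (averaging, deterministic selection,
# stochastic selection).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def build_corrected_surrogate(lf_model, X_hf, y_hf):
    """Fit a GP to the discrepancy between a low-fidelity model and
    high-fidelity evaluations; return a corrected predictor whose GP
    standard deviation serves as a local accuracy indicator."""
    discrepancy = y_hf - np.array([lf_model(x) for x in X_hf])
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_hf, discrepancy)

    def corrected(X):
        mean, std = gp.predict(X, return_std=True)
        lf_vals = np.array([lf_model(x) for x in X])
        return lf_vals + mean, std

    return corrected


def model_probabilities(stds, costs, beta=1.0):
    """Hypothetical weighting: favor models that are locally accurate
    (small GP std) and cheap, then normalize to probabilities."""
    scores = np.exp(-beta * np.asarray(stds)) / np.asarray(costs)
    return scores / scores.sum()


# --- toy usage --------------------------------------------------------------
rng = np.random.default_rng(0)
hf_model = lambda x: np.sin(3.0 * x[0]) + 0.5 * x[0]      # "expensive" truth
lf_models = [lambda x: np.sin(3.0 * x[0]),                 # cheap model 1
             lambda x: 0.5 * x[0]]                         # cheap model 2
costs = np.array([1.0, 0.5])                               # relative costs

X_hf = rng.uniform(0.0, 2.0, size=(8, 1))                  # few HF samples
y_hf = np.array([hf_model(x) for x in X_hf])
surrogates = [build_corrected_surrogate(m, X_hf, y_hf) for m in lf_models]

x_query = np.array([[1.3]])
means, stds = zip(*[s(x_query) for s in surrogates])
means, stds = np.array(means).ravel(), np.array(stds).ravel()
probs = model_probabilities(stds, costs)

averaged = probs @ means                             # model averaging
selected = means[np.argmax(probs)]                   # deterministic selection
sampled = means[rng.choice(len(means), p=probs)]     # stochastic selection
print(averaged, selected, sampled)
```

In the paper's setting, such model probabilities would also drive the active learning decision of when a high-fidelity evaluation is warranted; that decision logic, and the embedding within subset simulation, are omitted from this sketch.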