Paper Title

Neural parameter calibration for large-scale multi-agent models

Paper Authors

Thomas Gaskin, Grigorios A. Pavliotis, Mark Girolami

Paper Abstract

Computational models have become a powerful tool in the quantitative sciences to understand the behaviour of complex systems that evolve in time. However, they often contain a potentially large number of free parameters whose values cannot be obtained from theory but need to be inferred from data. This is especially the case for models in the social sciences, economics, or computational epidemiology. Yet many current parameter estimation methods are mathematically involved and computationally slow to run. In this paper we present a computationally simple and fast method to retrieve accurate probability densities for model parameters using neural differential equations. We present a pipeline comprising multi-agent models acting as forward solvers for systems of ordinary or stochastic differential equations, and a neural network to then extract parameters from the data generated by the model. The two combined create a powerful tool that can quickly estimate densities on model parameters, even for very large systems. We demonstrate the method on synthetic time series data of the SIR model of the spread of infection, and perform an in-depth analysis of the Harris-Wilson model of economic activity on a network, representing a non-convex problem. For the latter, we apply our method both to synthetic data and to data of economic activity across Greater London. We find that our method calibrates the model orders of magnitude more accurately than a previous study of the same dataset using classical techniques, while running between 195 and 390 times faster.
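
The pipeline described in the abstract can be illustrated compactly. Below is a minimal, hedged sketch (not the authors' implementation), assuming PyTorch: a differentiable SIR forward solver generates a synthetic time series, and a small neural network maps that series to estimates of the infection and recovery rates. Running the solver with the network's current estimates and penalising the mismatch with the observed data trains the network, and the estimates collected over training can be binned into an approximate density over the parameters. All function names, network sizes, and hyperparameters here are illustrative assumptions rather than values from the paper.

```python
# Minimal illustrative sketch of the neural-calibration pipeline, assuming PyTorch.
# A differentiable (deterministic) SIR solver acts as the forward model; a small
# neural network maps the observed time series to positive parameter estimates
# (beta, gamma). All names and hyperparameters are assumptions for illustration.

import torch
import torch.nn as nn

def sir_forward(beta, gamma, S0=0.99, I0=0.01, steps=100, dt=0.1):
    """Differentiable Euler integration of the deterministic SIR equations."""
    S, I = torch.as_tensor(S0), torch.as_tensor(I0)
    series = []
    for _ in range(steps):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        S, I = S + dt * dS, I + dt * dI
        series.append(I)
    return torch.stack(series)

# Synthetic "observed" data generated from ground-truth parameters.
true_beta, true_gamma = 0.9, 0.3
observed = sir_forward(torch.tensor(true_beta), torch.tensor(true_gamma)).detach()

# Neural network: time series in, two positive parameters out.
net = nn.Sequential(
    nn.Linear(observed.numel(), 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 2), nn.Softplus(),   # keep (beta, gamma) positive
)
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)

estimates = []  # parameter estimates collected over the course of training
for epoch in range(1000):
    beta_hat, gamma_hat = net(observed.unsqueeze(0))[0]
    simulated = sir_forward(beta_hat, gamma_hat)       # run the forward model
    loss = torch.mean((simulated - observed) ** 2)     # compare to the data
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    estimates.append((beta_hat.item(), gamma_hat.item(), loss.item()))

# The collected estimates (optionally weighted by the loss) can be histogrammed
# into a joint density over (beta, gamma); the paper constructs the parameter
# densities in an analogous but more principled way.
```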
