Uncertainty evaluation for computationally expensive models
Description
Finite element methods for the analysis of flow measurements are an example of computationally expensive models.
In the analysis of computationally expensive problems, the model function $$\mathbf{Y}=F(\mathbf{X})$$ is typically treated as a 'black box': evaluation of $F$ is possible, but evaluation of its derivatives with respect to the inputs is not. For a computationally expensive model, each evaluation of $F$ requires a significant amount of computation time. This makes a straightforward application of Monte Carlo sampling for uncertainty evaluation impractical.
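As a baseline, straightforward Monte Carlo propagation can be sketched as follows (the model $F$, its inputs and their distributions are invented for illustration). The cost is one full model run per sample, which is exactly what becomes prohibitive when a single run takes hours:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    # Stand-in for the expensive black-box model Y = F(X); in practice
    # a single evaluation (e.g. a finite element run) may take hours.
    return np.exp(-x[0]) * np.sin(x[1])

# Straightforward Monte Carlo: one full model run per draw of the inputs.
N = 10_000
X = rng.normal(loc=[1.0, 0.5], scale=[0.1, 0.05], size=(N, 2))
Y = np.array([F(x) for x in X])          # N expensive evaluations

estimate, std_uncertainty = Y.mean(), Y.std(ddof=1)
```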
Many approaches to uncertainty evaluation for expensive models therefore consider alternative sampling methods. Examples are stratified sampling, importance sampling and Latin hypercube sampling, in which the realisations drawn from the input quantities are selected according to a certain rule. The goal is to reduce the number of samples that have to be propagated by selecting representative ones.
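A minimal sketch of Latin hypercube sampling: each dimension of the unit cube is divided into equal strata, one point is drawn per stratum, and the strata are paired at random across dimensions. (In practice a library implementation such as scipy.stats.qmc.LatinHypercube would normally be used.)

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Draw n points in [0, 1)^d with exactly one point per stratum
    in every dimension."""
    # Independently permuted stratum indices for each dimension.
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    # One uniform point inside each assigned stratum.
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(0)
u = latin_hypercube(8, 2, rng)
# Every dimension of u has exactly one sample in each of the 8 strata.
```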
A different approach is that of so-called 'polynomial chaos', first used by Wiener in 1938 in the context of statistical mechanics. The main idea is that every stochastic quantity with finite second moment can be expanded as a sum over a set of orthogonal polynomials in an appropriate set of random variables. This set of orthogonal polynomials of a random variable is called a polynomial chaos. The main advantage is that a good representation of the complete stochastic behaviour of the outputs can be obtained from fewer samples than other methods require.
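As an illustration, a non-intrusive polynomial chaos expansion in probabilists' Hermite polynomials can be fitted by least squares from a handful of model runs; the model $F$ below is a hypothetical stand-in for an expensive black box with a single standard normal input:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e  # probabilists' Hermite polynomials He_n

rng = np.random.default_rng(0)

def F(x):
    # Hypothetical smooth black-box model of a standard normal input X.
    return np.exp(0.3 * x)

# Non-intrusive polynomial chaos: fit the coefficients c_n of
#   Y ≈ sum_{n=0}^{p} c_n He_n(X)
# by least squares on a small number of model evaluations.
p = 5
x = rng.standard_normal(30)              # only 30 model runs
V = hermite_e.hermevander(x, p)          # design matrix, columns He_0..He_p
c, *_ = np.linalg.lstsq(V, F(x), rcond=None)

# Orthogonality (E[He_m He_n] = n! delta_mn for X ~ N(0,1)) yields the
# moments of Y directly from the coefficients, with no further model runs:
mean = c[0]
variance = sum(math.factorial(n) * c[n]**2 for n in range(1, p + 1))
```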
Instead of using smarter sampling methods, one may replace the complex model function by a so-called surrogate model. Examples are nearest-neighbour interpolation, Gaussian process emulation and polynomial chaos.
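A nearest-neighbour surrogate, the simplest of these, can be sketched as follows (the model $F$ and the training design are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    # Hypothetical expensive model; imagine each call taking minutes.
    return np.sin(3 * x) + x**2

# Offline: run the expensive model on a modest training design.
x_train = np.linspace(0.0, 1.0, 50)
y_train = F(x_train)

def surrogate(x):
    """Nearest-neighbour surrogate: look up the stored output of the
    closest training point instead of re-running F."""
    return y_train[np.argmin(np.abs(x_train - x))]

# Online: the surrogate is cheap enough for plain Monte Carlo.
samples = rng.uniform(0.0, 1.0, size=10_000)
y = np.array([surrogate(s) for s in samples])
mean, std = y.mean(), y.std(ddof=1)
```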
Research
Smart sampling
Smart sampling methods (such as polynomial chaos and Latin hypercube sampling) can drastically reduce the number of function evaluations, and hence the computational cost of uncertainty evaluation, compared to random sampling. However, many sampling methods rely on specific assumptions (for example, that the inputs are independent, or that the output quantity can be locally approximated as a polynomial function of the input quantities), which can limit their applicability. Smart sampling methods therefore have to be adapted and tested for their suitability for application in metrology.
Surrogate models
While smart sampling methods try to minimise the computational expense by keeping the number of function evaluations needed to determine uncertainties as small as possible, surrogate models replace the computationally expensive model by an approximate but computationally cheap model, such as a polynomial or spline model. This involves a computationally expensive preprocessing step in which the parameter space of the original physical model is sampled in order to derive the surrogate model.
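This two-stage workflow, an expensive offline design step followed by cheap online sampling, might look as follows for a polynomial surrogate of a hypothetical one-dimensional model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # Placeholder for a costly simulation, e.g. a finite element run.
    return np.exp(-x) * np.cos(4 * x)

# Expensive preprocessing step: sample the parameter space once.
x_design = np.linspace(0.0, 1.0, 15)        # 15 expensive runs in total
y_design = expensive_model(x_design)

# Derive the cheap surrogate: here, a least-squares polynomial fit.
coeffs = np.polyfit(x_design, y_design, deg=6)

# Uncertainty evaluation now touches only the surrogate.
x_mc = rng.normal(0.5, 0.1, size=50_000)
y_mc = np.polyval(coeffs, x_mc)
mean, std = y_mc.mean(), y_mc.std(ddof=1)
```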
Related journal papers
A. Allard, N. Fischer, G. Ebrard, B. Hay, P. M. Harris, L. Wright, D. Rochais and J. Mattout, "A multi-thermogram based Bayesian model for the determination of the thermal diffusivity of a material", Metrologia, 2016.
G. J. P. Kok, A. M. H. van der Veen, P. M. Harris, I. M. Smith and C. Elster, "Bayesian analysis of a flow meter calibration problem", Metrologia 52, 392–399, 2015.
G. Lindner, S. Schmelter, R. Model, A. Nowak, V. Ebert and M. Bär, "A Computational Fluid Dynamics Study on the Gas Mixing Capabilities of a Multiple Inlet System", J. Fluids Eng. 138(3), 031302, 2015.
P. Bloembergen, M. Battuello, F. Girard, G. Machin and L. Wright, "On the Influence of the Furnace and Cell Conditions on the Phase Transition of the Eutectic Co–C", International Journal of Thermophysics, 2015.
T. J. Esward and L. Wright, "Efficient updating of PDE models for metrology", Measurement, vol. 79, 2015.
H. Gross, S. Heidenreich, M. A. Henn, F. Scholze and M. Bär, "Modelling line edge roughness in periodic line-space structures by Fourier optics to improve scatterometry", JEOS, vol. 9, 14003 (10pp), 2014.
S. Heidenreich, H. Gross and M. Bär, "Alternative methods for uncertainty evaluations in EUV scatterometry", Proc. SPIE: Modeling Aspects in Optical Metrology IV, 2013.
S. Alonso, K. John and M. Bär, "Complex wave patterns in an effective reaction–diffusion model for chemical reactions in microemulsions", J. Chem. Phys. 134, 094117, 2011.
A. Allard, N. Fischer, F. Didieux, E. Guillaume and B. Iooss, "Evaluation of the most influent input variables on quantities of interest in a fire simulation", J. Soc. Fr. Stat. 152, 103–117, 2011.
S. Demeyer, J.-L. Foulley, N. Fischer and G. Saporta, "Bayesian analysis of structural equation models using parameter expansion", in Learning and Data Science, L. Bottou, F. Murtagh, M. Gettler-Summa, B. Goldfarb, C. Pardoux and M. Touati (eds.), Chapman & Hall, 2011.
L. Wright, "Parameter Estimation from Laser Flash Experiment Data", Chapter 8 of Computational Optimisation and Applications in Engineering and Industry, S. Koziel and X. S. Yang (eds.), Springer, 2011.
G. W. A. M. van der Heijden and R. Emardson, "Multivariate measurements", in Theory and Methods of Measurements with Persons, B. Berglund, G. B. Rossi, J. Townsend and L. R. Pendrill (eds.), Psychology Press, Taylor & Francis, 2011.
H. Groß, J. Richter, A. Rathsfeld and M. Bär, "Investigations of a robust profile model for the reconstruction of 2D periodic absorber lines in scatterometry", J. Eur. Opt. Soc. – Rapid 5, 2010.
H. Gross, A. Rathsfeld, F. Scholze and M. Bär, "Profile reconstruction in extreme ultraviolet (EUV) scatterometry: modeling and uncertainty estimates", Meas. Sci. Technol. 20, 105102, 2009.
S. Alonso, R. Kapral and M. Bär, "Effective medium theory for reaction rates and diffusion constants in heterogeneous systems", Phys. Rev. Lett. 102, 238302, 2009.
L. Wright, S. P. Robinson and V. F. Humphrey, "Prediction of acoustic radiation from axisymmetric surfaces with arbitrary boundary conditions using the boundary element method on a distributed computing system", J. Acoust. Soc. Am. 125, 1374–1383, 2009.
R. Model, A. Rathsfeld, H. Groß, M. Wurm and B. Bodermann, "A scatterometry inverse problem in optical mask technology", J. Phys.: Conf. Ser. 135, 012071, 2008.
H. Groß and A. Rathsfeld, "Sensitivity analysis for indirect measurement in scatterometry and the reconstruction of periodic grating structures", Waves Random Complex Media 18, 129–140, 2008.
T. R. Emardson, P. O. J. Jarlemark and P. Floberg, "Uncertainty evaluation in multivariate analysis – a test case study", J. Math. Model. Algorithm. 4, 289–305, 2005.