Chang, Q., Zhou, C. C., Valdebenito, M. A., Liu, H. W., & Yue, Z. F. (2022). A novel sensitivity index for analyzing the response of numerical models with interval inputs. Comput. Methods Appl. Mech. Eng., 400, 115509.
Abstract: This study proposes a novel sensitivity index to provide essential insights into numerical models whose inputs are characterized by intervals. Based on the interval model and its normalized form, interval processes are introduced to define a new sensitivity index. The index can represent the individual or joint influence of the interval inputs on the output of a considered model. A double-loop strategy, based on global metamodeling and optimization, is established to calculate the index. Subsequently, the proposed index is theoretically compared with two other existing indices, and it is applied to three numerical examples and a practical engineering problem of a honeycomb sandwich radome. The results indicate that the proposed index is an effective tool for interval sensitivity analysis.
|
Dang, C., Valdebenito, M. A., Faes, M. G. R., Wei, P. F., & Beer, M. (2022). Structural reliability analysis: A Bayesian perspective. Struct. Saf., 99, 102259.
Abstract: Numerical methods play a dominant role in structural reliability analysis, and the goal has long been to produce a failure probability estimate with a desired level of accuracy using a minimum number of performance function evaluations. In the present study, we attempt to offer a Bayesian perspective on the failure probability integral estimation, as opposed to the classical frequentist perspective. For this purpose, a principled Bayesian Failure Probability Inference (BFPI) framework is first developed, which allows quantifying, propagating and reducing the numerical uncertainty in the failure probability due to discretization error. In particular, the posterior variance of the failure probability is derived in a semi-analytical form, and the Gaussianity of the posterior failure probability distribution is investigated numerically. Then, a Parallel Adaptive-Bayesian Failure Probability Learning (PA-BFPL) method is proposed within the Bayesian framework. In the PA-BFPL method, a variance-amplified importance sampling technique is presented to evaluate the posterior mean and variance of the failure probability, and an adaptive parallel active learning strategy is proposed to identify multiple updating points at each iteration. Thus, a novel advantage of PA-BFPL is that both prior knowledge and parallel computing can be used to make inferences about the failure probability. Four numerical examples are investigated, indicating the potential benefits of adopting a Bayesian approach to failure probability estimation.
|
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Interval uncertainty propagation by a parallel Bayesian global optimization method. Appl. Math. Model., 108, 220–235.
Abstract: This paper is concerned with approximating the scalar response of a complex computational model subjected to multiple input interval variables. This task is formulated as finding both the global minimum and maximum of a computationally expensive black-box function over a prescribed hyper-rectangle. On this basis, a novel non-intrusive method, called `triple-engine parallel Bayesian global optimization', is proposed. The method begins by assuming a Gaussian process prior (which can also be interpreted as a surrogate model) over the response function. The main contribution lies in developing a novel infill sampling criterion, i.e., a triple-engine pseudo expected improvement strategy, to identify multiple promising points for minimization and/or maximization based on the past observations at each iteration. By doing so, these identified points can be evaluated on the real response function in parallel. In addition, both the lower and upper bounds of the model response can be obtained with a single run of the developed method. Four numerical examples with varying complexity are investigated to demonstrate the proposed method against some existing techniques, and results indicate that significant computational savings can be achieved by making full use of prior knowledge and parallel computing.
|
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Parallel adaptive Bayesian quadrature for rare event estimation. Reliab. Eng. Syst. Saf., 225, 108621.
Abstract: Various numerical methods have been extensively studied and used for reliability analysis over the past several decades. However, how to understand the effect of numerical uncertainty (i.e., numerical error due to the discretization of the performance function) on the failure probability is still a challenging issue. The active learning probabilistic integration (ALPI) method offers a principled approach to quantify, propagate and reduce the numerical uncertainty via computation within a Bayesian framework, which has not been fully investigated in the context of probabilistic reliability analysis. In this study, a novel method termed `Parallel Adaptive Bayesian Quadrature' (PABQ) is proposed on the theoretical basis of ALPI, and is aimed at broadening its scope of application. First, the Monte Carlo method used in ALPI is replaced with an importance ball sampling technique so as to reduce the sample size that is needed for rare failure event estimation. Second, a multi-point selection criterion is proposed to enable parallel distributed processing. Four numerical examples are studied to demonstrate the effectiveness and efficiency of the proposed method. It is shown that PABQ can effectively assess small failure probabilities (e.g., as low as 10^(-7)) with a minimum number of iterations by taking advantage of parallel computing.
|
Ding, C., Dang, C., Valdebenito, M. A., Faes, M. G. R., Broggi, M., & Beer, M. (2023). First-passage probability estimation of high-dimensional nonlinear stochastic dynamic systems by a fractional moments-based mixture distribution approach. Mech. Syst. Sig. Process., 185, 109775.
Abstract: First-passage probability estimation of high-dimensional nonlinear stochastic dynamic systems is a significant task to be solved in many science and engineering fields, but still remains an open challenge. The present paper develops a novel approach, termed 'fractional moments-based mixture distribution', to address this challenge. This approach is implemented by capturing the extreme value distribution (EVD) of the system response with the concepts of fractional moment and mixture distribution. In our context, the fractional moment itself is by definition a high-dimensional integral with a complicated integrand. To efficiently compute the fractional moments, a parallel adaptive sampling scheme that allows for sample size extension is developed using the refined Latinized stratified sampling (RLSS). In this manner, both variance reduction and parallel computing are possible for evaluating the fractional moments. From the knowledge of low-order fractional moments, the EVD of interest can then be reconstructed. Based on introducing an extended inverse Gaussian distribution and a log extended skew-normal distribution, one flexible mixture distribution model is proposed, where its fractional moments are derived in analytic form. By fitting a set of fractional moments, the EVD can be recovered via the proposed mixture model. Accordingly, the first-passage probabilities under different thresholds can be obtained from the recovered EVD straightforwardly. The performance of the proposed method is verified by three examples consisting of two test examples and one engineering problem.
|
Faes, M. G. R., & Valdebenito, M. A. (2021). Fully decoupled reliability-based optimization of linear structures subject to Gaussian dynamic loading considering discrete design variables. Mech. Syst. Sig. Process., 156, 107616.
Abstract: Reliability-based optimization (RBO) offers the possibility of finding an optimal design for a system according to a prescribed criterion while explicitly taking into account the effects of uncertainty. However, due to the necessity of solving simultaneously a reliability problem nested in an optimization procedure, the corresponding computational cost is usually high, impeding the applicability of the methods. This computational cost is even further enlarged when one or several design variables must belong to a discrete set, due to the requirement of resorting to integer programming optimization algorithms. To alleviate this issue, this contribution proposes a fully decoupled approach for a specific class of problems, namely minimization of the failure probability of a linear system subjected to an uncertain dynamic load of the Gaussian type, under the additional constraint that the design variables are integer-valued. Specifically, by using the operator norm framework, as developed by the authors in previous work, this paper shows that by reducing the RBO problem with discrete design variables to the solution of a single deterministic optimization problem followed by a single reliability analysis, a large gain in numerical efficiency can be obtained without compromising the accuracy of the resulting optimal design. The application and capabilities of the proposed approach are illustrated by means of three examples.
|
Faes, M. G. R., Valdebenito, M. A., Yuan, X. K., Wei, P. F., & Beer, M. (2021). Augmented reliability analysis for estimating imprecise first excursion probabilities in stochastic linear dynamics. Adv. Eng. Softw., 155, 102993.
Abstract: Imprecise probability allows quantifying the level of safety of a system taking into account the effect of both aleatory and epistemic uncertainty. The practical estimation of an imprecise probability is usually quite demanding from a numerical viewpoint, as it is necessary to propagate separately both types of uncertainty, leading in practical cases to a nested implementation in the so-called double loop approach. In view of this issue, this contribution presents an alternative approach that avoids the double loop by replacing the imprecise probability problem by an augmented, purely aleatory reliability analysis. Then, with the help of Bayes' theorem, it is possible to recover an expression for the failure probability as an explicit function of the imprecise parameters from the augmented reliability problem, which ultimately allows calculating the imprecise probability. The implementation of the proposed framework is investigated within the context of imprecise first excursion probability estimation of uncertain linear structures subject to imprecisely defined stochastic quantities and crisp stochastic loads. The associated augmented reliability problem is solved within the context of Directional Importance Sampling, leading to an improved accuracy at reduced numerical costs. The application of the proposed approach is investigated by means of two examples. The results obtained indicate that the proposed approach can be highly efficient and accurate.
|
Fina, M., Lauff, C., Faes, M. G. R., Valdebenito, M. A., Wagner, W., & Freitag, S. (2023). Bounding imprecise failure probabilities in structural mechanics based on maximum standard deviation. Struct. Saf., 101, 102293.
Abstract: This paper proposes a framework to calculate the bounds on the failure probability of linear structural systems whose performance is affected by both random variables and interval variables. This kind of problem is known to be very challenging, as it demands coping with aleatoric and epistemic uncertainty explicitly. Inspired by the framework of the operator norm theorem, it is proposed to consider the maximum standard deviation of the structural response as a proxy for detecting the crisp values of the interval parameters, which yield the bounds of the failure probability. The scope of application of the proposed approach comprises linear structural systems, whose properties may be affected by both aleatoric and epistemic uncertainty and that are subjected to (possibly imprecise) Gaussian loading. Numerical examples indicate that the application of such a proxy leads to substantial numerical advantages when compared to a traditional double-loop approach for coping with imprecise failure probabilities. In fact, the proposed framework allows decoupling the propagation of aleatoric and epistemic uncertainty.
|
Jerez, D. J., Jensen, H. A., Valdebenito, M. A., Misraji, M. A., Mayorga, F., & Beer, M. (2022). On the use of Directional Importance Sampling for reliability-based design and optimum design sensitivity of linear stochastic structures. Probabilistic Eng. Mech., 70, 103368.
Abstract: This contribution focuses on reliability-based design and optimum design sensitivity of linear dynamical structural systems subject to Gaussian excitation. Directional Importance Sampling (DIS) is implemented for reliability assessment, which yields first-order derivatives of the failure probabilities as a byproduct of the sampling process. Thus, gradient-based solution schemes can be adopted by virtue of this feature. In particular, a class of feasible-direction interior point algorithms are implemented to obtain optimum designs, while a direction-finding approach is considered to obtain optimum design sensitivity measures as a post-processing step of the optimization results. To show the usefulness of the approach, an example involving a building structure is studied. Overall, the reliability sensitivity analysis framework enabled by DIS provides a potentially useful tool to address a practical class of design optimization problems.
|
Ni, P. H., Jerez, D. J., Fragkoulis, V. C., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Operator Norm-Based Statistical Linearization to Bound the First Excursion Probability of Nonlinear Structures Subjected to Imprecise Stochastic Loading. ASCE-ASME J. Risk Uncertain. Eng. Syst. A-Civ. Eng., 8(1), 04021086.
Abstract: This paper presents a highly efficient approach for bounding the responses and probability of failure of nonlinear models subjected to imprecisely defined stochastic Gaussian loads. Typically, such computations involve solving a nested double-loop problem, where the propagation of the aleatory uncertainty has to be performed for each realization of the epistemic parameters. Apart from near-trivial cases, such computation is generally intractable without resorting to surrogate modeling schemes, especially in the context of performing nonlinear dynamical simulations. The recently introduced operator norm framework allows for breaking this double loop by determining those values of the epistemic uncertain parameters that produce bounds on the probability of failure a priori. However, the method in its current form is only applicable to linear models due to the adopted assumptions in the derivation of the involved operator norms. In this paper, the operator norm framework is extended and generalized by resorting to the statistical linearization methodology to bound the first excursion probability of nonlinear structures subjected to imprecise stochastic loading.
|
Song, J. W., Wei, P. F., Valdebenito, M. A., Faes, M., & Beer, M. (2021). Data-driven and active learning of variance-based sensitivity indices with Bayesian probabilistic integration. Mech. Syst. Sig. Process., 163, 108106.
Abstract: Variance-based sensitivity indices play an important role in scientific computation and data mining; hence, the importance of developing numerical methods for efficient and reliable estimation of these sensitivity indices based on (expensive) computer simulators and/or data can hardly be overstated. In this article, the estimation of these sensitivity indices is treated as a statistical inference problem. Two principal lemmas are first proposed as rules of thumb for making the inference. After that, the posterior features for all the (partial) variance terms involved in the main and total effect indices are analytically derived (not in closed form) based on Bayesian Probabilistic Integration (BPI). This forms a data-driven method for estimating the sensitivity indices as well as the involved discretization errors. Further, to improve the efficiency of the developed method for expensive simulators, an acquisition function, named Posterior Variance Contribution (PVC), is utilized for realizing optimal designs of experiments, based on which an adaptive BPI method is established. The application of this framework is illustrated for the calculation of the main and total effect indices, but the proposed two principal lemmas also apply to the calculation of interaction effect indices. The performance of the development is demonstrated by an illustrative numerical example and three engineering benchmarks with finite element models.
|
Valdebenito, M. A., Misraji, M. A., Jensen, H. A., & Mayorga, C. F. (2021). Sensitivity estimation of first excursion probabilities of linear structures subject to stochastic Gaussian loading. Comput. Struct., 248, 106482.
Abstract: This contribution focuses on evaluating the sensitivity associated with first excursion probabilities of linear structural systems subject to stochastic Gaussian loading. The sensitivity measure considered is the partial derivative of the probability with respect to parameters that affect the structural response, such as dimensions of structural elements. The actual calculation of the sensitivity demands solving high dimensional integrals over hypersurfaces, which can be challenging from a numerical viewpoint. Hence, sensitivity evaluation is cast within the context of a reliability analysis that is conducted with Directional Importance Sampling. In this way, the sought sensitivity is obtained as a byproduct of the calculation of the failure probability, where the post-processing step demands performing a sensitivity analysis of the unit impulse response functions of the structure. Thus, the sensitivity is calculated using sampling by means of an estimator, whose precision can be quantified in terms of its standard deviation. Numerical examples involving both small- and large-scale structural models illustrate the procedure for probability sensitivity estimation.
|
Valdebenito, M. A., Wei, P. F., Song, J. W., Beer, M., & Broggi, M. (2021). Failure probability estimation of a class of series systems by multidomain Line Sampling. Reliab. Eng. Syst. Saf., 213, 107673.
Abstract: This contribution proposes an approach for the assessment of the failure probability associated with a particular class of series systems. The type of systems considered involves components whose response is linear with respect to a number of Gaussian random variables. Component failure occurs whenever this response exceeds prescribed deterministic thresholds. We propose multidomain Line Sampling as an extension of classical Line Sampling to work with a large number of components at once. By taking advantage of the linearity of the performance functions involved, multidomain Line Sampling explores the interactions that occur between failure domains associated with individual components in order to produce an estimate of the failure probability. The performance and effectiveness of multidomain Line Sampling is illustrated by means of two test problems and an application example, indicating that this technique is amenable to treating problems comprising both a large number of random variables and a large number of components.
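The single-component building block underlying this entry can be sketched in a few lines. The following illustration is not the authors' multidomain implementation — the limit state, dimensions and all names are hypothetical — but it shows classical Line Sampling for one linear performance function g(x) = b - a·x in standard normal space, where each sampled line contributes Phi(-c_i):

```python
import math
import random

def std_normal_cdf(z):
    # Phi(z) via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def line_sampling(a, b, n_lines=100, seed=0):
    """Classical Line Sampling for the linear limit state g(x) = b - a.x <= 0,
    with x a vector of independent standard normal variables. Each line
    contributes Phi(-c_i), where c_i is the distance to the failure surface
    along the important direction alpha = a / ||a||."""
    rng = random.Random(seed)
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    alpha = [ai / norm_a for ai in a]
    estimates = []
    for _ in range(n_lines):
        x = [rng.gauss(0.0, 1.0) for _ in a]
        # project the sample onto the subspace orthogonal to alpha
        proj = sum(xi * al for xi, al in zip(x, alpha))
        x_perp = [xi - proj * al for xi, al in zip(x, alpha)]
        # distance along alpha to the failure surface (closed form for linear g)
        c = (b - sum(ai * xi for ai, xi in zip(a, x_perp))) / norm_a
        estimates.append(std_normal_cdf(-c))
    return sum(estimates) / n_lines

# failure when x1 + x2 > 3: exact P_f = Phi(-3 / sqrt(2))
pf = line_sampling([1.0, 1.0], 3.0)
```

For a single linear component, a·x_perp vanishes by construction, so every line returns the same conditional probability and the estimator is exact with zero variance; the multidomain extension of this entry exploits the same per-line geometry while treating many linear components and their failure-domain interactions simultaneously.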
|
Yuan, X. K., Faes, M. G. R., Liu, S. L., Valdebenito, M. A., & Beer, M. (2021). Efficient imprecise reliability analysis using the Augmented Space Integral. Reliab. Eng. Syst. Saf., 210, 107477.
Abstract: This paper presents an efficient approach to compute the bounds on the reliability of a structure subjected to uncertain parameters described by means of imprecise probabilities. These imprecise probabilities arise from epistemic uncertainty in the definition of the hyper-parameters of a set of random variables that describe aleatory uncertainty in some of the structure's properties. Typically, such calculation involves the solution of a so-called double-loop problem, where a crisp reliability problem is repeatedly solved to determine which realization of the epistemic uncertainties yields the worst or best case with respect to structural safety. The approach in this paper aims at decoupling this double loop by virtue of the Augmented Space Integral. The core idea of the method is to infer a functional relationship between the epistemically uncertain hyper-parameters and the probability of failure. Then, this functional relationship can be used to determine the best and worst case behavior with respect to the probability of failure. Three case studies are included to illustrate the effectiveness and efficiency of the developed methods.
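The double-loop problem that this entry (and several others above) seeks to avoid can be made concrete with a deliberately small sketch. The setup below is hypothetical — a normal demand with an interval-valued mean, arbitrary numbers — but it shows the structure: an outer loop scans the epistemic interval, and every outer iterate pays for a full crisp Monte Carlo reliability analysis.

```python
import random

def crisp_failure_probability(mu, threshold, n=100_000, seed=1):
    """Inner loop: plain Monte Carlo estimate of P(X > threshold)
    for X ~ Normal(mu, 1), i.e. one crisp reliability analysis."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gauss(mu, 1.0) > threshold)
    return hits / n

def double_loop_bounds(mu_interval, threshold, n_outer=11):
    """Outer loop: scan the epistemic interval for the mean and collect
    the extreme values of the estimated failure probability."""
    lo, hi = mu_interval
    pfs = [crisp_failure_probability(lo + (hi - lo) * k / (n_outer - 1), threshold)
           for k in range(n_outer)]
    return min(pfs), max(pfs)

# mean only known to lie in [0, 0.5]; failure when X > 2
pf_low, pf_high = double_loop_bounds((0.0, 0.5), 2.0)
```

The total cost is (outer iterations) x (inner samples) performance evaluations, which grows quickly for expensive models; decoupling approaches such as the Augmented Space Integral of this entry aim to replace this nesting with essentially a single reliability analysis.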
|
Yuan, X. K., Liu, S. L., Faes, M., Valdebenito, M. A., & Beer, M. (2021). An efficient importance sampling approach for reliability analysis of time-variant structures subject to time-dependent stochastic load. Mech. Syst. Sig. Process., 159, 107699.
Abstract: Structural performance is affected by deterioration processes and external loads. Both effects may change over time, posing a challenge for conducting reliability analysis. In such a context, this contribution aims at assessing the reliability of structures in which some parameters are modeled as random variables, possibly including deterioration processes, and which are subjected to stochastic load processes. The approach is developed within the framework of importance sampling and is based on the concept of composite limit states, where the time-dependent reliability problem is transformed into a series system with multiple performance functions. Then, an efficient two-step importance sampling density function is proposed, which splits the time-invariant parameters (random variables) from the time-variant ones (stochastic processes). This importance sampling scheme is geared towards a particular class of problems, where the performance of the structural system exhibits a linear dependency with respect to the stochastic load for fixed time. This allows calculating the reliability associated with the series system most efficiently. Practical examples illustrate the performance of the proposed approach.
|
Yuan, X. K., Liu, S. L., Valdebenito, M. A., Faes, M. G. R., Jerez, D. J., Jensen, H. A., et al. (2021). Decoupled reliability-based optimization using Markov chain Monte Carlo in augmented space. Adv. Eng. Softw., 157, 103020.
Abstract: An efficient framework is proposed for reliability-based design optimization (RBDO) of structural systems. The RBDO problem is expressed in terms of the minimization of the failure probability with respect to design variables which correspond to distribution parameters of random variables, e.g. mean or standard deviation. Generally, this problem is quite demanding from a computational viewpoint, as repeated reliability analyses are involved. Hence, in this contribution, an efficient framework for solving a class of RBDO problems without even a single reliability analysis is proposed. It makes full use of an established functional relationship between the probability of failure and the distribution design parameters, which is termed the failure probability function (FPF). By introducing an instrumental variability associated with the distribution design parameters, the target FPF is found to be proportional to a posterior distribution of the design parameters conditional on the occurrence of failure in an augmented space. This posterior distribution is derived and expressed as an integral, which can be estimated through simulation. An advanced Markov chain algorithm is adopted to efficiently generate samples that follow the aforementioned posterior distribution. Also, an algorithm that re-uses information is proposed in combination with sequential approximate optimization to improve the efficiency. Numerical examples illustrate the performance of the proposed framework.
|
Yuan, X. K., Liu, S. L., Valdebenito, M. A., Gu, J., & Beer, M. (2021). Efficient procedure for failure probability function estimation in augmented space. Struct. Saf., 92, 102104.
Abstract: An efficient procedure is proposed to estimate the failure probability function (FPF) with respect to design variables, which correspond to distribution parameters of basic structural random variables. The proposed procedure is based on the concept of an augmented reliability problem, which treats the design variables as uncertain by assigning a prior distribution, transforming the FPF into an expression that includes the posterior distribution of those design variables. The novel contribution of this work consists of expressing this target posterior distribution as an integral, allowing it to be estimated by means of sampling, with no distribution fitting needed, leading to an efficient estimation of the FPF. The proposed procedure is implemented within three different simulation strategies: Monte Carlo simulation, importance sampling and subset simulation; for each of these cases, expressions for the coefficient of variation of the FPF estimate are derived. Numerical examples illustrate the performance of the proposed approaches.
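The Bayes'-theorem identity behind this and the previous entry — P_f(theta) proportional to p(theta | F) — can be illustrated with a toy augmented problem. The setup below is hypothetical (normal demand, uniform prior on a single design parameter, a crude histogram in place of the paper's integral-based posterior estimator), but it shows how one joint sampling run in the augmented space recovers the whole failure probability function with no distribution fitting:

```python
import random

def fpf_augmented(threshold=1.5, n=200_000, bins=5, seed=2):
    """Augmented-space estimate of the failure probability function
    P_f(mu) = P(X > threshold | mu), with X ~ Normal(mu, 1) and a uniform
    prior mu ~ U(0, 1) on the design parameter. By Bayes' theorem,
    P_f(mu) = P(F) * p(mu | F) / p(mu), so sampling (mu, X) jointly and
    binning the failed mu-samples yields the whole function at once."""
    rng = random.Random(seed)
    failed = [0] * bins
    n_fail = 0
    for _ in range(n):
        mu = rng.random()                   # draw from the prior, p(mu) = 1
        if rng.gauss(mu, 1.0) > threshold:  # failure event F
            n_fail += 1
            failed[min(int(mu * bins), bins - 1)] += 1
    p_fail = n_fail / n  # overall P(F) in the augmented space
    # per bin: P(F) * [binned density p(mu | F)] / p(mu), with p(mu) = 1
    return [p_fail * (count / n_fail) * bins for count in failed]

fpf = fpf_augmented()  # P_f(mu) estimates over 5 bins covering [0, 1]
```

Here the exact FPF is Phi(mu - 1.5), increasing in mu, and the binned estimates reproduce that trend from a single simulation run; the paper replaces the histogram with an integral representation of the posterior, which avoids binning altogether and supports the coefficient-of-variation expressions mentioned in the abstract.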
|
Zhou, C. C., Zhang, H. L., Valdebenito, M. A., & Zhao, H. D. (2022). A general hierarchical ensemble-learning framework for structural reliability analysis. Reliab. Eng. Syst. Saf., 225, 108605.
Abstract: Existing ensemble-learning methods for reliability analysis are usually developed by combining ensemble learning with a learning function. A commonly used strategy is to construct the initial training set and the test set in advance. The training set is used to train the initial ensemble model, while the test set is adopted to allocate weight factors and check the convergence criterion. Reliability analysis focuses more on the local prediction accuracy near the limit state surface than the global prediction accuracy in the entire space. However, samples in the initial training set and the test set are generally randomly generated, which will result in the learning function failing to find the real 'best' update samples and the allocation of weight factors may be suboptimal or even unreasonable. These two points have a detrimental impact on the overall performance of the ensemble model. Thus, we propose a general hierarchical ensemble-learning framework (ELF) for reliability analysis, which consists of two-layer models and three different phases. A novel method called CESM-ELF is proposed by embedding the classical ensemble of surrogate models (CESM) in the proposed ELF. Four examples are investigated to show that CESM-ELF outperforms CESM in prediction accuracy and is more efficient in some cases.
|