Barrera, J., & Lagos, G. (2020). Limit distributions of the upper order statistics for the Lévy-frailty Marshall–Olkin distribution. Extremes, 23, 603–628.
Abstract: The Marshall–Olkin (MO) distribution is considered a key model in reliability theory and in risk analysis, where it is used to model the lifetimes of dependent components or entities of a system and dependency is induced by “shocks” that hit one or more components at a time. Of particular interest is the Lévy-frailty subfamily of the Marshall–Olkin (LFMO) distribution, since it has few parameters and because the nontrivial dependency structure is driven by an underlying Lévy subordinator process. The main contribution of this work is that we derive the precise asymptotic behavior of the upper order statistics of the LFMO distribution. More specifically, we consider a sequence of n univariate random variables jointly distributed as a multivariate LFMO distribution and analyze the order statistics of the sequence as n grows. Our main result states that if the underlying Lévy subordinator is in the normal domain of attraction of a stable distribution with index of stability α, then, after certain logarithmic centering and scaling, the upper order statistics converge in distribution to a stable distribution if α > 1, or to a simple transformation of it if α ≤ 1. Our result can also give easily computable confidence intervals for the last failure times, provided that a proper convergence analysis is carried out first.

Barrera, J., Moreno, E., Munoz, G., & Romero, P. (2022). Exact reliability optimization for series-parallel graphs using convex envelopes. Networks, 80(2), 235–248.
Abstract: Given its wide spectrum of applications, the classical problem of all-terminal network reliability evaluation remains a highly relevant problem in network design. The associated optimization problem, finding a network with the best possible reliability under multiple constraints, presents an even more complex challenge, which has been addressed in the scientific literature but usually under strong assumptions on failure probabilities and/or the network topology. In this work, we propose a novel reliability optimization framework for network design with failure probabilities that are independent but not necessarily identical. We leverage the linear-time evaluation procedure for network reliability in series-parallel graphs of Satyanarayana and Wood (1985) to formulate the reliability optimization problem as a mixed-integer nonlinear optimization problem. To solve this nonconvex problem, we use classical convex envelopes of bilinear functions, introduce custom cutting planes, and propose a new family of convex envelopes for expressions that appear in the evaluation of network reliability. Furthermore, we exploit the refinements produced by spatial branch-and-bound to locally strengthen our convex relaxations. Our experiments show that, using our framework, one can efficiently obtain optimal solutions to challenging instances of this problem.

Beck, A. T., Ribeiro, L. D., Valdebenito, M., & Jensen, H. (2022). Risk-Based Design of Regular Plane Frames Subject to Damage by Abnormal Events: A Conceptual Study. J. Struct. Eng., 148(1), 04021229.
Abstract: Constructed facilities should be robust with respect to the loss of load-bearing elements due to abnormal events. Yet, strengthening structures to withstand such damage has a significant impact on construction costs. Strengthening costs should be justified by the threat and should result in smaller expected costs of progressive collapse. In regular frame structures, beams and columns compete for the strengthening budget. In this paper, we present a risk-based formulation to address the optimal design of regular plane frames under element loss conditions. We address the threat probabilities for which strengthening is more cost-effective than usual design, for different frame configurations, and study the impacts of strengthening extent and cost. The risk-based optimization reveals optimum points of compromise between competing failure modes: local bending of beams, local crushing of columns, and global pancake collapse, for frames of different aspect ratios. The conceptual study is based on a simple analytical model for progressive collapse, but it provides relevant insight for the design and strengthening of real structures.

Bergsten, G. J., Pascucci, I., Hardegree-Ullman, K. K., Fernandes, R. B., Christiansen, J. L., & Mulders, G. D. (2023). No Evidence for More Earth-sized Planets in the Habitable Zone of Kepler's M versus FGK Stars. Astron. J., 166(6), 234.
Abstract: Reliable detections of Earth-sized planets in the habitable zone remain elusive in the Kepler sample, even for M dwarfs. The Kepler sample was once thought to contain a considerable number of M-dwarf stars (Teff < 4000 K), which hosted enough Earth-sized ([0.5, 1.5] R⊕) planets to estimate their occurrence rate (η⊕) in the habitable zone. However, updated stellar properties from Gaia have shifted many Kepler stars to earlier spectral type classifications, with most stars (and their planets) now measured to be larger and hotter than previously believed. Today, only one partially reliable Earth-sized candidate remains in the optimistic habitable zone, and zero in the conservative zone. Here we performed a new investigation of Kepler's Earth-sized planets orbiting M-dwarf stars, using occurrence rate models with considerations of updated parameters and candidate reliability. Extrapolating our models to low instellations, we found an occurrence rate of η⊕ = 8.58 (+17.94/−8.22)% for the conservative habitable zone (and 14.22 (+24.96/−12.71)% for the optimistic one), consistent with previous works when considering the large uncertainties. Comparing these estimates to those from similarly comprehensive studies of Sun-like stars, we found that the current Kepler sample does not offer evidence to support an increase in η⊕ from FGK to M stars. While the Kepler sample is too sparse to resolve an occurrence trend between early and mid-to-late M dwarfs for Earth-sized planets, studies including larger planets and/or data from the K2 and TESS missions are well suited to this task.

Bergsten, G. J., Pascucci, I., Mulders, G. D., Fernandes, R. B., & Koskinen, T. T. (2022). The Demographics of Kepler's Earths and Super-Earths into the Habitable Zone. Astron. J., 164(5), 190.
Abstract: Understanding the occurrence of Earth-sized planets in the habitable zone of Sun-like stars is essential to the search for Earth analogs. Yet a lack of reliable Kepler detections for such planets has forced many estimates to be derived from the close-in (2 < P_orb < 100 days) population, whose radii may have evolved differently under the effect of atmospheric mass-loss mechanisms. In this work, we compute the intrinsic occurrence rates of close-in super-Earths (∼1–2 R⊕) and sub-Neptunes (∼2–3.5 R⊕) for FGK stars (0.56–1.63 M☉) as a function of orbital period and find evidence of two regimes: one where super-Earths are more abundant at short orbital periods, and one where sub-Neptunes are more abundant at longer orbital periods. We fit a parametric model in five equally populated stellar mass bins and find that the orbital period of transition between these two regimes scales with stellar mass as P_trans ∝ M_*^(1.7 ± 0.2). These results suggest a population of former sub-Neptunes contaminating the population of gigayear-old close-in super-Earths, indicative of a population shaped by atmospheric loss. Using our model to constrain the long-period population of intrinsically rocky planets, we estimate an occurrence rate of Γ⊕ = 15 (+6/−4)% for Earth-sized habitable zone planets, and predict that sub-Neptunes may be roughly twice as common as super-Earths in the habitable zone (when normalized over the natural log-orbital period and radius range used). Finally, we discuss our results in the context of future missions searching for habitable zone planets.

Celis, P., de la Cruz, R., Fuentes, C., & Gomez, H. W. (2021). Survival and Reliability Analysis with an Epsilon-Positive Family of Distributions with Applications. Symmetry, 13(5), 908.
Abstract: We introduce a new class of distributions called the epsilon-positive family, which can be viewed as a generalization of distributions with positive support. The construction of the epsilon-positive family is motivated by the ideas behind the generation of skew distributions using symmetric kernels. This new class of distributions has as special cases the exponential, Weibull, log-normal, log-logistic and gamma distributions, and it provides an alternative for analyzing reliability and survival data. An interesting feature of the epsilon-positive family is that it can be viewed as a finite scale mixture of positive distributions, facilitating the derivation and implementation of EM-type algorithms to obtain maximum likelihood estimates (MLE) with (un)censored data. We illustrate the flexibility of this family for analyzing censored and uncensored data using two real examples. One of them was previously discussed in the literature; the second one consists of a new application to model recidivism data of a group of inmates released from the Chilean prisons during 2007. The results show that this new family of distributions fits the data better than some common alternatives such as the exponential distribution.

Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Parallel adaptive Bayesian quadrature for rare event estimation. Reliab. Eng. Syst. Saf., 225, 108621.
Abstract: Various numerical methods have been extensively studied and used for reliability analysis over the past several decades. However, how to understand the effect of numerical uncertainty (i.e., numerical error due to the discretization of the performance function) on the failure probability is still a challenging issue. The active learning probabilistic integration (ALPI) method offers a principled approach to quantify, propagate and reduce the numerical uncertainty via computation within a Bayesian framework, which has not been fully investigated in the context of probabilistic reliability analysis. In this study, a novel method termed 'Parallel Adaptive Bayesian Quadrature' (PABQ) is proposed on the theoretical basis of ALPI, and is aimed at broadening its scope of application. First, the Monte Carlo method used in ALPI is replaced with an importance ball sampling technique so as to reduce the sample size needed for rare failure event estimation. Second, a multi-point selection criterion is proposed to enable parallel distributed processing. Four numerical examples are studied to demonstrate the effectiveness and efficiency of the proposed method. It is shown that PABQ can effectively assess small failure probabilities (e.g., as low as 10⁻⁷) with a minimum number of iterations by taking advantage of parallel computing.

Jimenez, D., Barrera, J., & Cancela, H. (2023). Communication Network Reliability Under Geographically Correlated Failures Using Probabilistic Seismic Hazard Analysis. IEEE Access, 11, 31341–31354.
Abstract: The reliability of networks exposed to large-scale disasters has attracted the research community's attention and become a critical concern in network studies during the last decade. Earthquakes are high on the list of events with the most significant impact on communication networks and, at the same time, among the least predictable. This study uses the Probabilistic Seismic Hazard Analysis method to estimate the state of network elements after an earthquake. The approach considers a seismic source model and ground prediction equations to assess the intensity measure for each element according to its location. In the simulation, nodes fail according to the buildings' fragility curves; similarly, links fail according to a failure rate that depends on the intensity measure and the cable's characteristics. We use the source-terminal and the diameter-constrained reliability metrics. The approach goes beyond the graph representation of the network and incorporates the terrain characteristics and the components' robustness into the network performance analysis at an affordable computational cost. We study the method on a network in a seismic region with almost 9000 km of optical fiber. We observed that for source-terminal pairs less than 500 km apart the improvements are marginal, while for those more than 1000 km apart, reliability improves by nearly 30% in the enhanced designs. We also showed that these results depend heavily on the robustness/fragility of the infrastructure, showing that performance measures based only on the network topology are not enough to evaluate new designs.
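The source-terminal reliability metric referenced in this abstract can be illustrated with a minimal Monte Carlo sketch. This is a generic estimator under independent link failures, not the paper's PSHA-driven failure model; the example graph, node names, and working probabilities are invented for illustration:

```python
import random
from collections import defaultdict

def st_reliability_mc(edges, s, t, n_samples, rng):
    """Estimate source-terminal reliability by sampling independent
    link failures and checking s-t connectivity with a DFS."""
    hits = 0
    for _ in range(n_samples):
        # Keep each link independently with its working probability.
        up = defaultdict(list)
        for u, v, p in edges:
            if rng.random() < p:
                up[u].append(v)
                up[v].append(u)
        # Depth-first search from s over the surviving links.
        stack, seen = [s], {s}
        while stack:
            node = stack.pop()
            if node == t:
                hits += 1
                break
            for nxt in up[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return hits / n_samples

# Two disjoint two-hop paths from "s" to "t", each link working with
# probability 0.9; the exact s-t reliability is 1 - (1 - 0.9**2)**2 = 0.9639.
edges = [("s", "a", 0.9), ("a", "t", 0.9), ("s", "b", 0.9), ("b", "t", 0.9)]
print(st_reliability_mc(edges, "s", "t", 50000, random.Random(1)))
```

In the paper, the per-link failure probabilities would instead come from the intensity measure and fragility curves; the connectivity check itself is unchanged.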

Matus, O., Barrera, J., Moreno, E., & Rubino, G. (2019). On the Marshall–Olkin Copula Model for Network Reliability Under Dependent Failures. IEEE Trans. Reliab., 68(2), 451–461.
Abstract: The Marshall–Olkin (MO) copula model has emerged as the standard tool for capturing dependence between components in failure analysis in reliability. In this model, shocks arise at exponential random times and affect one or several components, inducing a natural correlation in the failure process. However, because the number of parameters of the model grows exponentially with the number of components, MO suffers from the “curse of dimensionality.” MO models are usually intended to be applied to design a network before its construction; therefore, it is natural to assume that only partial information about failure behavior can be gathered, mostly from similar existing networks. To construct such an MO model, we propose an optimization approach that defines the shock parameters in the MO copula so as to match marginal failure probabilities and the correlations between these failures. To deal with the exponential number of parameters of this problem, we use a column-generation technique. We also discuss additional criteria that can be incorporated to obtain a suitable model. Our computational experiments show that the resulting MO model produces a close estimation of the network reliability, especially when the correlation between component failures is significant.
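The shock mechanism described above can be simulated directly; the following is a toy sketch of one draw from an MO model (not the paper's column-generation fitting procedure; the shock subsets and rates are invented):

```python
import random

def sample_mo_lifetimes(shocks, n_components, rng):
    """One draw from a Marshall-Olkin shock model: each shock hits a
    subset of components at an Exponential(rate) arrival time, and a
    component fails at the first shock that hits it."""
    lifetimes = [float("inf")] * n_components
    for subset, rate in shocks:
        t = rng.expovariate(rate)  # arrival time of this shock
        for c in subset:
            lifetimes[c] = min(lifetimes[c], t)
    return lifetimes

# Two components with individual shocks (rate 1.0 each) plus a common
# shock (rate 0.5) that correlates their failure times.
shocks = [({0}, 1.0), ({1}, 1.0), ({0, 1}, 0.5)]
rng = random.Random(0)
draws = [sample_mo_lifetimes(shocks, 2, rng) for _ in range(20000)]
# Marginally each lifetime is Exponential(1.0 + 0.5), so its mean is ~2/3.
print(sum(d[0] for d in draws) / len(draws))
```

The common shock is what produces the positive dependence between component failures; with many components, the number of possible shock subsets (and hence rates to specify) grows exponentially, which is the dimensionality problem the paper's optimization approach addresses.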

Ramos, D., Moreno, S., Canessa, E., Chaigneau, S. E., & Marchant, N. (2023). ACPLT: An algorithm for computer-assisted coding of semantic property listing data. Behav. Res. Methods, Early Access.
Abstract: In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research studies to gain insights into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to manually code the data, which can be time-consuming and prone to errors, leading to reliability concerns. Our algorithm addresses these challenges by automatically assigning human-created codes to feature listing data, achieving quantitatively good agreement with human coders. Our preliminary results suggest that our algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. Additionally, this tool is an important step toward developing a fully automated coding algorithm, which we are currently devising.

Yuan, X. K., Faes, M. G. R., Liu, S. L., Valdebenito, M. A., & Beer, M. (2021). Efficient imprecise reliability analysis using the Augmented Space Integral. Reliab. Eng. Syst. Saf., 210, 107477.
Abstract: This paper presents an efficient approach to compute the bounds on the reliability of a structure subjected to uncertain parameters described by means of imprecise probabilities. These imprecise probabilities arise from epistemic uncertainty in the definition of the hyperparameters of a set of random variables that describe aleatory uncertainty in some of the structure's properties. Typically, such calculation involves the solution of a so-called double-loop problem, where a crisp reliability problem is repeatedly solved to determine which realization of the epistemic uncertainties yields the worst or best case with respect to structural safety. The approach in this paper aims at decoupling this double loop by virtue of the Augmented Space Integral. The core idea of the method is to infer a functional relationship between the epistemically uncertain hyperparameters and the probability of failure. Then, this functional relationship can be used to determine the best and worst case behavior with respect to the probability of failure. Three case studies are included to illustrate the effectiveness and efficiency of the developed methods.

Yuan, X. K., Liu, S. L., Valdebenito, M. A., Faes, M. G. R., Jerez, D. J., Jensen, H. A., et al. (2021). Decoupled reliability-based optimization using Markov chain Monte Carlo in augmented space. Adv. Eng. Softw., 157, 103020.
Abstract: An efficient framework is proposed for reliability-based design optimization (RBDO) of structural systems. The RBDO problem is expressed in terms of the minimization of the failure probability with respect to design variables which correspond to distribution parameters of random variables, e.g., the mean or standard deviation. Generally, this problem is quite demanding from a computational viewpoint, as repeated reliability analyses are involved. Hence, in this contribution, an efficient framework for solving a class of RBDO problems without even a single reliability analysis is proposed. It makes full use of an established functional relationship between the probability of failure and the distribution design parameters, which is termed the failure probability function (FPF). By introducing an instrumental variability associated with the distribution design parameters, the target FPF is found to be proportional to a posterior distribution of the design parameters conditional on the occurrence of failure in an augmented space. This posterior distribution is derived and expressed as an integral, which can be estimated through simulation. An advanced Markov chain algorithm is adopted to efficiently generate samples that follow the aforementioned posterior distribution. Also, an algorithm that reuses information is proposed in combination with sequential approximate optimization to improve efficiency. Numerical examples illustrate the performance of the proposed framework.

Yuan, X. K., Liu, S. L., Valdebenito, M. A., Gu, J., & Beer, M. (2021). Efficient procedure for failure probability function estimation in augmented space. Struct. Saf., 92, 102104.
Abstract: An efficient procedure is proposed to estimate the failure probability function (FPF) with respect to design variables, which correspond to distribution parameters of basic structural random variables. The proposed procedure is based on the concept of an augmented reliability problem, which treats the design variables as uncertain by assigning a prior distribution, transforming the FPF into an expression that includes the posterior distribution of those design variables. The novel contribution of this work consists of expressing this target posterior distribution as an integral, allowing it to be estimated by means of sampling, with no distribution fitting needed, leading to an efficient estimation of the FPF. The proposed procedure is implemented within three different simulation strategies: Monte Carlo simulation, importance sampling and subset simulation; for each of these cases, expressions for the coefficient of variation of the FPF estimate are derived. Numerical examples illustrate the performance of the proposed approaches.
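The augmented-space idea behind the FPF can be sketched for a toy limit state: make the design variable uncertain, simulate failures jointly, and recover the failure probability as a function of the design variable from a single simulation run. This is a naive binned conditional-frequency estimate, not the posterior-density estimator derived in the paper; the Gaussian load, uniform prior, and capacity value are invented:

```python
import random

def estimate_fpf(n_samples, bins, rng):
    """Toy augmented-space run: the design variable theta (mean of a
    Gaussian load) gets a uniform prior on [0, 3], failures are
    simulated jointly, and p_F(theta) is read off per theta-bin."""
    lo, hi, capacity = 0.0, 3.0, 3.0
    fails = [0] * bins
    totals = [0] * bins
    for _ in range(n_samples):
        theta = rng.uniform(lo, hi)   # instrumental prior on the design variable
        load = rng.gauss(theta, 1.0)  # aleatory variable: load ~ N(theta, 1)
        k = min(int((theta - lo) / (hi - lo) * bins), bins - 1)
        totals[k] += 1
        if load > capacity:           # failure event: load exceeds capacity
            fails[k] += 1
    # p_F in each bin = failure frequency conditional on theta in that bin.
    return [f / t if t else 0.0 for f, t in zip(fails, totals)]

# One pass over 200,000 joint samples yields p_F over the whole design
# range; e.g. near theta = 2.85 the true value is Phi(2.85 - 3) ~ 0.44.
fpf = estimate_fpf(200_000, 10, random.Random(2))
print(fpf)
```

A single pass covers the whole design range at once, which is the decoupling benefit; the paper replaces the crude binning with a sampling-based integral expression and derives coefficients of variation for the resulting estimate.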

Zhou, C. C., Zhang, H. L., Valdebenito, M. A., & Zhao, H. D. (2022). A general hierarchical ensemble-learning framework for structural reliability analysis. Reliab. Eng. Syst. Saf., 225, 108605.
Abstract: Existing ensemble-learning methods for reliability analysis are usually developed by combining ensemble learning with a learning function. A commonly used strategy is to construct the initial training set and the test set in advance. The training set is used to train the initial ensemble model, while the test set is adopted to allocate weight factors and check the convergence criterion. Reliability analysis focuses more on the local prediction accuracy near the limit state surface than on the global prediction accuracy in the entire space. However, samples in the initial training set and the test set are generally randomly generated, which can result in the learning function failing to find the truly “best” update samples and in a suboptimal or even unreasonable allocation of weight factors. These two points have a detrimental impact on the overall performance of the ensemble model. Thus, we propose a general hierarchical ensemble-learning framework (ELF) for reliability analysis, which consists of two-layer models and three different phases. A novel method called CESM-ELF is proposed by embedding the classical ensemble of surrogate models (CESM) in the proposed ELF. Four examples are investigated to show that CESM-ELF outperforms CESM in prediction accuracy and is more efficient in some cases.
