|
Barrera, J., & Lagos, G. (2020). Limit distributions of the upper order statistics for the Lévy-frailty Marshall-Olkin distribution. Extremes, 23, 603–628.
Abstract: The Marshall-Olkin (MO) distribution is considered a key model in reliability theory and in risk analysis, where it is used to model the lifetimes of dependent components or entities of a system and dependency is induced by “shocks” that hit one or more components at a time. Of particular interest is the Lévy-frailty subfamily of the Marshall-Olkin (LFMO) distribution, since it has few parameters and because the nontrivial dependency structure is driven by an underlying Lévy subordinator process. The main contribution of this work is that we derive the precise asymptotic behavior of the upper order statistics of the LFMO distribution. More specifically, we consider a sequence of n univariate random variables jointly distributed as a multivariate LFMO distribution and analyze the order statistics of the sequence as n grows. Our main result states that if the underlying Lévy subordinator is in the normal domain of attraction of a stable distribution with index of stability alpha, then, after certain logarithmic centering and scaling, the upper order statistics converge in distribution to a stable distribution if alpha > 1, or to a simple transformation of it if alpha <= 1. Our result can also give easily computable confidence intervals for the last failure times, provided that a proper convergence analysis is carried out first.
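As context for the limiting result, here is a minimal simulation sketch of the Lévy-frailty construction, assuming the usual representation in which the i-th lifetime is the first time the subordinator exceeds an independent unit-exponential threshold; the gamma subordinator, grid resolution, and sample size are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def lfmo_lifetimes(n, t_max=20.0, dt=1e-3, proc_rate=1.0):
    """Approximate LFMO lifetimes tau_i = inf{t : Lambda_t >= E_i}, with Lambda
    a gamma subordinator simulated on a time grid and E_i i.i.d. Exp(1)."""
    steps = int(t_max / dt)
    increments = rng.gamma(proc_rate * dt, 1.0, size=steps)  # subordinator increments
    levels = np.cumsum(increments)                            # Lambda at grid times
    thresholds = rng.exponential(1.0, size=n)                 # unit-exponential frailty thresholds
    idx = np.searchsorted(levels, thresholds)                 # first grid index with Lambda >= E_i
    times = (idx + 1) * dt
    times[idx >= steps] = np.inf                              # threshold not reached on the horizon
    return times

# Upper order statistics of one sample of n jointly LFMO-distributed lifetimes
tau = lfmo_lifetimes(n=1000)
print(np.sort(tau)[-5:])   # five largest failure times
```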
|
|
|
Barrera, J., Moreno, E., Munoz, G., & Romero, P. (2022). Exact reliability optimization for series-parallel graphs using convex envelopes. Networks, 80(2), 235–248.
Abstract: Given its wide spectrum of applications, the classical problem of all-terminal network reliability evaluation remains a highly relevant problem in network design. The associated optimization problem, finding a network with the best possible reliability under multiple constraints, presents an even more complex challenge, which has been addressed in the scientific literature but usually under strong assumptions on failure probabilities and/or the network topology. In this work, we propose a novel reliability optimization framework for network design with failure probabilities that are independent but not necessarily identical. We leverage the linear-time evaluation procedure for network reliability in series-parallel graphs of Satyanarayana and Wood (1985) to formulate the reliability optimization problem as a mixed-integer nonlinear optimization problem. To solve this nonconvex problem, we use classical convex envelopes of bilinear functions, introduce custom cutting planes, and propose a new family of convex envelopes for expressions that appear in the evaluation of network reliability. Furthermore, we exploit the refinements produced by spatial branch-and-bound to locally strengthen our convex relaxations. Our experiments show that, using our framework, one can efficiently obtain optimal solutions to challenging instances of this problem.
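The efficient evaluation that the optimization model builds on amounts to repeatedly composing reliabilities over series and parallel reductions. The sketch below shows the two composition rules for two-terminal reliability with independent, not necessarily identical, operation probabilities; the expression-tree representation is an illustrative device, not the paper's formulation.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Edge:
    p: float              # probability the edge operates

@dataclass
class Series:
    left: "Node"
    right: "Node"

@dataclass
class Parallel:
    left: "Node"
    right: "Node"

Node = Union[Edge, Series, Parallel]

def reliability(node: Node) -> float:
    """Two-terminal reliability of a series-parallel composition with independent failures."""
    if isinstance(node, Edge):
        return node.p
    if isinstance(node, Series):       # both subsystems must operate
        return reliability(node.left) * reliability(node.right)
    # Parallel: the composition fails only if both subsystems fail
    return 1.0 - (1.0 - reliability(node.left)) * (1.0 - reliability(node.right))

# Example: two edges in series, placed in parallel with a single edge
net = Parallel(Series(Edge(0.9), Edge(0.8)), Edge(0.95))
print(reliability(net))   # 1 - (1 - 0.72) * (1 - 0.95) = 0.986
```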
|
|
|
Beck, A. T., Ribeiro, L. D., Valdebenito, M., & Jensen, H. (2022). Risk-Based Design of Regular Plane Frames Subject to Damage by Abnormal Events: A Conceptual Study. J. Struct. Eng., 148(1), 04021229.
Abstract: Constructed facilities should be robust with respect to the loss of load-bearing elements due to abnormal events. Yet, strengthening structures to withstand such damage has a significant impact on construction costs. Strengthening costs should be justified by the threat and should result in smaller expected costs of progressive collapse. In regular frame structures, beams and columns compete for the strengthening budget. In this paper, we present a risk-based formulation to address the optimal design of regular plane frames under element loss conditions. We identify the threat probabilities for which strengthening is more cost-effective than conventional design, for different frame configurations, and study the impacts of strengthening extent and cost. The risk-based optimization reveals optimum points of compromise between competing failure modes: local bending of beams, local crushing of columns, and global pancake collapse, for frames of different aspect ratios. The conceptual study is based on a simple analytical model for progressive collapse, but it provides relevant insight for the design and strengthening of real structures.
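The cost-benefit trade-off described above can be summarized with a generic risk-based design objective; the decomposition below is a standard formulation shown for illustration, with notation (d for the design variables, p_t for the threat probability) chosen here rather than taken from the paper.

```latex
\min_{d}\; C_{\mathrm{total}}(d)
  \;=\; C_{\mathrm{construction}}(d)
  \;+\; p_{t}\sum_{i} P\!\left(\mathrm{collapse}_i \mid \mathrm{damage},\, d\right)\, C_{\mathrm{collapse},\,i}
```

Here the index i runs over the competing failure modes (beam bending, column crushing, pancake collapse), so strengthening pays off when the reduction in expected collapse cost exceeds the added construction cost.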
|
|
|
Bergsten, G. J., Pascucci, I., Hardegree-Ullman, K. K., Fernandes, R. B., Christiansen, J. L., & Mulders, G. D. (2023). No Evidence for More Earth-sized Planets in the Habitable Zone of Kepler's M versus FGK Stars. Astron. J., 166(6), 234.
Abstract: Reliable detections of Earth-sized planets in the habitable zone remain elusive in the Kepler sample, even for M dwarfs. The Kepler sample was once thought to contain a considerable number of M-dwarf stars (T_eff < 4000 K), which hosted enough Earth-sized (0.5-1.5 R⊕) planets to estimate their occurrence rate (η⊕) in the habitable zone. However, updated stellar properties from Gaia have shifted many Kepler stars to earlier spectral type classifications, with most stars (and their planets) now measured to be larger and hotter than previously believed. Today, only one partially reliable Earth-sized candidate remains in the optimistic habitable zone, and zero in the conservative zone. Here we performed a new investigation of Kepler's Earth-sized planets orbiting M-dwarf stars, using occurrence rate models with considerations of updated parameters and candidate reliability. Extrapolating our models to low instellations, we found an occurrence rate of η⊕ = 8.58 (+17.94, -8.22)% for the conservative habitable zone (and 14.22 (+24.96, -12.71)% for the optimistic one), consistent with previous works when considering the large uncertainties. Comparing these estimates to those from similarly comprehensive studies of Sun-like stars, we found that the current Kepler sample does not offer evidence to support an increase in η⊕ from FGK to M stars. While the Kepler sample is too sparse to resolve an occurrence trend between early and mid-to-late M dwarfs for Earth-sized planets, studies including larger planets and/or data from the K2 and TESS missions are well suited to this task.
|
|
|
Bergsten, G. J., Pascucci, I., Mulders, G. D., Fernandes, R. B., & Koskinen, T. T. (2022). The Demographics of Kepler's Earths and Super-Earths into the Habitable Zone. Astron. J., 164(5), 190.
Abstract: Understanding the occurrence of Earth-sized planets in the habitable zone of Sun-like stars is essential to the search for Earth analogs. Yet a lack of reliable Kepler detections for such planets has forced many estimates to be derived from the close-in (2 < P_orb < 100 days) population, whose radii may have evolved differently under the effect of atmospheric mass-loss mechanisms. In this work, we compute the intrinsic occurrence rates of close-in super-Earths (~1-2 R⊕) and sub-Neptunes (~2-3.5 R⊕) for FGK stars (0.56-1.63 M☉) as a function of orbital period and find evidence of two regimes: one where super-Earths are more abundant at short orbital periods, and one where sub-Neptunes are more abundant at longer orbital periods. We fit a parametric model in five equally populated stellar mass bins and find that the orbital period of transition between these two regimes scales with stellar mass as P_trans ∝ M_*^(1.7 ± 0.2). These results suggest a population of former sub-Neptunes contaminating the population of gigayear-old close-in super-Earths, indicative of a population shaped by atmospheric loss. Using our model to constrain the long-period population of intrinsically rocky planets, we estimate an occurrence rate of Γ⊕ = 15 (+6, -4)% for Earth-sized habitable zone planets, and predict that sub-Neptunes may be roughly twice as common as super-Earths in the habitable zone (when normalized over the natural log-orbital period and radius range used). Finally, we discuss our results in the context of future missions searching for habitable zone planets.
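The mass scaling of the transition period reported above can be written compactly as follows; the normalization constant P_0 is a placeholder introduced here, not a value from the paper.

```latex
P_{\mathrm{trans}}(M_*) \;=\; P_0 \left( \frac{M_*}{M_\odot} \right)^{1.7 \pm 0.2}
```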
|
|
|
Celis, P., de la Cruz, R., Fuentes, C., & Gomez, H. W. (2021). Survival and Reliability Analysis with an Epsilon-Positive Family of Distributions with Applications. Symmetry, 13(5), 908.
Abstract: We introduce a new class of distributions called the epsilon-positive family, which can be viewed as a generalization of the distributions with positive support. The construction of the epsilon-positive family is motivated by the ideas behind the generation of skew distributions using symmetric kernels. This new class of distributions has as special cases the exponential, Weibull, log-normal, log-logistic and gamma distributions, and it provides an alternative for analyzing reliability and survival data. An interesting feature of the epsilon-positive family is that it can be viewed as a finite scale mixture of positive distributions, facilitating the derivation and implementation of EM-type algorithms to obtain maximum likelihood estimates (MLE) with (un)censored data. We illustrate the flexibility of this family to analyze censored and uncensored data using two real examples. One of them was previously discussed in the literature; the second one consists of a new application to model recidivism data of a group of inmates released from Chilean prisons during 2007. The results show that this new family of distributions fits the data better than some common alternatives such as the exponential distribution.
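To illustrate the finite scale-mixture idea that makes EM-type estimation convenient, here is a generic two-component scale mixture built on a Weibull kernel; the mixing weights and scale factors are illustrative and should not be read as the paper's exact epsilon-positive parameterization.

```python
import numpy as np
from scipy import stats

def scale_mixture_pdf(x, eps, kernel=stats.weibull_min(c=1.5)):
    """Density of a two-component scale mixture of a positive kernel:
    with probability 1/2 the scale is (1 + eps), otherwise (1 - eps)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (kernel.pdf(x / (1 + eps)) / (1 + eps)
                  + kernel.pdf(x / (1 - eps)) / (1 - eps))

def scale_mixture_rvs(size, eps, kernel=stats.weibull_min(c=1.5), seed=0):
    """Sample by drawing from the kernel and then applying a random scale factor."""
    rng = np.random.default_rng(seed)
    base = kernel.rvs(size=size, random_state=rng)
    scales = rng.choice([1 + eps, 1 - eps], size=size)
    return base * scales

x = scale_mixture_rvs(5000, eps=0.3)
print(x.mean(), scale_mixture_pdf([0.5, 1.0, 2.0], eps=0.3))
```

Because the mixture has finitely many components, the latent "which scale was used" indicator is exactly the kind of missing data an EM algorithm can exploit.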
|
|
|
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Parallel adaptive Bayesian quadrature for rare event estimation. Reliab. Eng. Syst. Saf., 225, 108621.
Abstract: Various numerical methods have been extensively studied and used for reliability analysis over the past several decades. However, how to understand the effect of numerical uncertainty (i.e., numerical error due to the discretization of the performance function) on the failure probability is still a challenging issue. The active learning probabilistic integration (ALPI) method offers a principled approach to quantify, propagate and reduce the numerical uncertainty via computation within a Bayesian framework, which has not been fully investigated in the context of probabilistic reliability analysis. In this study, a novel method termed 'Parallel Adaptive Bayesian Quadrature' (PABQ) is proposed on the theoretical basis of ALPI, and is aimed at broadening its scope of application. First, the Monte Carlo method used in ALPI is replaced with an importance ball sampling technique so as to reduce the sample size that is needed for rare failure event estimation. Second, a multi-point selection criterion is proposed to enable parallel distributed processing. Four numerical examples are studied to demonstrate the effectiveness and efficiency of the proposed method. It is shown that PABQ can effectively assess small failure probabilities (e.g., as low as 10^-7) with a minimum number of iterations by taking advantage of parallel computing.
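To convey the flavor of adaptive, surrogate-based rare-event estimation, the sketch below fits a Gaussian-process surrogate to a performance function and adds points where the sign of the surrogate is most uncertain (a U-function-style criterion). This is a generic single-point active-learning loop, not the paper's Bayesian-quadrature posterior, its importance ball sampling, or its parallel multi-point criterion.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def g(x):
    """Performance function: failure when g(x) <= 0 (illustrative toy example)."""
    return 5.0 - x[:, 0] - x[:, 1]

pool = rng.normal(size=(20000, 2))            # Monte Carlo candidate pool
X = rng.normal(size=(12, 2))                  # initial design of experiments
y = g(X)

for _ in range(30):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(pool, return_std=True)
    u = np.abs(mu) / np.maximum(sd, 1e-12)    # small U = uncertain sign of g
    best = np.argmin(u)
    X = np.vstack([X, pool[best]])
    y = np.append(y, g(pool[best:best + 1]))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
pf = np.mean(gp.predict(pool) <= 0)           # surrogate-based failure probability estimate
print(pf)
```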
|
|
|
Jimenez, D., Barrera, J., & Cancela, H. (2023). Communication Network Reliability Under Geographically Correlated Failures Using Probabilistic Seismic Hazard Analysis. IEEE Access, 11, 31341–31354.
Abstract: The reliability of networks exposed to large-scale disasters has attracted the research community's attention and has become a critical concern in network studies over the last decade. Earthquakes are among the events with the most significant impact on communication networks and, at the same time, among the least predictable. This study uses the Probabilistic Seismic Hazard Analysis method to estimate the state of network elements after an earthquake. The approach considers a seismic source model and ground motion prediction equations to assess the intensity measure for each element according to its location. In the simulation, nodes fail according to the buildings' fragility curves. Similarly, links fail according to a failure rate that depends on the intensity measure and the cable's characteristics. We use the source-terminal and the diameter-constrained reliability metrics. The approach goes beyond the graph representation of the network and incorporates the terrain characteristics and the components' robustness into the network performance analysis at an affordable computational cost. We study the method on a network in a seismic region with almost 9000 km of optical fiber. We observed that for source-terminal pairs that are less than 500 km apart the improvements are marginal, while for those more than 1000 km apart reliability improves by nearly 30% in the enhanced designs. We also showed that these results depend heavily on the robustness/fragility of the infrastructure, indicating that performance measures based only on the network topology are not enough to evaluate new designs.
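A minimal Monte Carlo sketch of the simulation loop described above follows; the toy topology, the exponential intensity attenuation, the lognormal node fragility, and the per-kilometer link failure model are placeholder choices that illustrate the structure, not the seismic source model or ground motion prediction equations used in the paper.

```python
import numpy as np
import networkx as nx
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical network: node positions (km) and fiber links
positions = {0: (0, 0), 1: (300, 0), 2: (600, 100), 3: (900, 0)}
links = [(0, 1), (1, 2), (2, 3), (0, 2)]

def intensity(epicenter, site, magnitude):
    """Placeholder attenuation: intensity decays with distance from the epicenter."""
    d = np.hypot(site[0] - epicenter[0], site[1] - epicenter[1])
    return magnitude * np.exp(-d / 400.0)

def simulate_once(src=0, dst=3):
    epicenter = (rng.uniform(0, 900), rng.uniform(-100, 100))
    magnitude = rng.uniform(5.0, 8.0)
    g = nx.Graph()
    for n, pos in positions.items():
        im = intensity(epicenter, pos, magnitude)
        # Node survives unless the intensity exceeds its (lognormal) fragility threshold
        if im < stats.lognorm(s=0.5, scale=4.0).rvs(random_state=rng):
            g.add_node(n)
    for u, v in links:
        if u in g and v in g:
            length = np.hypot(positions[u][0] - positions[v][0],
                              positions[u][1] - positions[v][1])
            im = intensity(epicenter, positions[u], magnitude)
            rate = 1e-4 * im                      # cable breaks per km, scaled by intensity
            if rng.random() < np.exp(-rate * length):
                g.add_edge(u, v)
    return src in g and dst in g and nx.has_path(g, src, dst)

# Source-terminal reliability estimated over repeated earthquake scenarios
print(np.mean([simulate_once() for _ in range(2000)]))
```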
|
|
|
Matus, O., Barrera, J., Moreno, E., & Rubino, G. (2019). On the Marshall-Olkin Copula Model for Network Reliability Under Dependent Failures. IEEE Trans. Reliab., 68(2), 451–461.
Abstract: The Marshall-Olkin (MO) copula model has emerged as the standard tool for capturing dependence between components in failure analysis in reliability. In this model, shocks arise at exponential random times and affect one or several components, inducing a natural correlation in the failure process. However, because the number of parameters of the model grows exponentially with the number of components, MO suffers from the “curse of dimensionality.” MO models are usually intended to be applied to design a network before its construction; therefore, it is natural to assume that only partial information about failure behavior can be gathered, mostly from similar existing networks. To construct such an MO model, we propose an optimization approach to define the shock parameters in the MO copula, in order to match marginal failure probabilities and correlations between these failures. To deal with the exponential number of parameters of this problem, we use a column-generation technique. We also discuss additional criteria that can be incorporated to obtain a suitable model. Our computational experiments show that the resulting MO model produces a close estimation of the network reliability, especially when the correlation between component failures is significant.
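For concreteness, here is a minimal simulation of the shock-based MO construction: each shock hits a subset of components at an exponentially distributed time, and a component fails at the earliest shock whose subset contains it. The three-component system, the shock subsets, and the rates are illustrative placeholders, not parameters produced by the paper's column-generation approach.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MO model on 3 components: shock subsets and their rates
shocks = {frozenset({0}): 0.5, frozenset({1}): 0.5, frozenset({2}): 0.5,
          frozenset({0, 1}): 0.2, frozenset({1, 2}): 0.2, frozenset({0, 1, 2}): 0.05}

def mo_lifetimes():
    """Component lifetime = time of the first shock whose subset contains it."""
    times = np.full(3, np.inf)
    for subset, rate in shocks.items():
        t = rng.exponential(1.0 / rate)
        for c in subset:
            times[c] = min(times[c], t)
    return times

# Reliability of a series system of the 3 components at mission time t = 1
t_mission = 1.0
samples = np.array([mo_lifetimes() for _ in range(50000)])
print(np.mean(samples.min(axis=1) > t_mission))
```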
|
|
|
Ramos, D., Moreno, S., Canessa, E., Chaigneau, S. E., & Marchant, N. (2023). AC-PLT: An algorithm for computer-assisted coding of semantic property listing data. Behav. Res. Methods, Early Access.
Abstract: In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research studies to gain insights into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to manually code the data, which can be time-consuming and prone to errors, leading to reliability concerns. Our algorithm addresses these challenges by automatically assigning human-created codes to feature listing data, achieving quantitatively good agreement with human coders. Our preliminary results suggest that our algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. Additionally, this tool is an important step toward developing a fully automated coding algorithm, which we are currently devising.
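The core matching step can be illustrated by embedding free-text feature responses and assigning each to its nearest existing code by cosine similarity. The toy bag-of-words embedding and the similarity threshold below are placeholders that only make the sketch runnable; the actual AC-PLT algorithm's representation and decision rule may differ.

```python
import numpy as np

def bow_embed(texts, vocab):
    """Toy bag-of-words embedding; a real implementation would use a
    pretrained sentence-embedding model instead."""
    M = np.zeros((len(texts), len(vocab)))
    index = {w: i for i, w in enumerate(vocab)}
    for r, t in enumerate(texts):
        for w in t.lower().split():
            if w in index:
                M[r, index[w]] += 1.0
    return M

def assign_codes(responses, code_labels, threshold=0.3):
    """Assign each free-text response to the closest existing code by cosine
    similarity, or flag it for manual review if nothing is close enough."""
    vocab = sorted({w for t in responses + code_labels for w in t.lower().split()})
    R = bow_embed(responses, vocab)
    C = bow_embed(code_labels, vocab)
    R = R / np.maximum(np.linalg.norm(R, axis=1, keepdims=True), 1e-12)
    C = C / np.maximum(np.linalg.norm(C, axis=1, keepdims=True), 1e-12)
    sims = R @ C.T
    best = sims.argmax(axis=1)
    return [code_labels[j] if sims[i, j] >= threshold else "REVIEW"
            for i, j in enumerate(best)]

print(assign_codes(["has four legs", "it barks loudly"],
                   ["has legs", "makes sounds", "is an animal"]))
```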
|
|
|
Yuan, X. K., Faes, M. G. R., Liu, S. L., Valdebenito, M. A., & Beer, M. (2021). Efficient imprecise reliability analysis using the Augmented Space Integral. Reliab. Eng. Syst. Saf., 210, 107477.
Abstract: This paper presents an efficient approach to compute the bounds on the reliability of a structure subjected to uncertain parameters described by means of imprecise probabilities. These imprecise probabilities arise from epistemic uncertainty in the definition of the hyper-parameters of a set of random variables that describe aleatory uncertainty in some of the structure's properties. Typically, such a calculation involves the solution of a so-called double-loop problem, where a crisp reliability problem is repeatedly solved to determine which realization of the epistemic uncertainties yields the worst or best case with respect to structural safety. The approach in this paper aims at decoupling this double loop by virtue of the Augmented Space Integral. The core idea of the method is to infer a functional relationship between the epistemically uncertain hyper-parameters and the probability of failure. Then, this functional relationship can be used to determine the best and worst case behavior with respect to the probability of failure. Three case studies are included to illustrate the effectiveness and efficiency of the developed methods.
|
|
|
Yuan, X. K., Liu, S. L., Valdebenito, M. A., Faes, M. G. R., Jerez, D. J., Jensen, H. A., et al. (2021). Decoupled reliability-based optimization using Markov chain Monte Carlo in augmented space. Adv. Eng. Softw., 157, 103020.
Abstract: An efficient framework is proposed for reliability-based design optimization (RBDO) of structural systems. The RBDO problem is expressed in terms of the minimization of the failure probability with respect to design variables which correspond to distribution parameters of random variables, e.g. mean or standard deviation. Generally, this problem is quite demanding from a computational viewpoint, as repeated reliability analyses are involved. Hence, in this contribution, an efficient framework for solving a class of RBDO problems without even a single reliability analysis is proposed. It makes full use of an established functional relationship between the probability of failure and the distribution design parameters, which is termed the failure probability function (FPF). By introducing an instrumental variability associated with the distribution design parameters, the target FPF is found to be proportional to a posterior distribution of the design parameters conditional on the occurrence of failure in an augmented space. This posterior distribution is derived and expressed as an integral, which can be estimated through simulation. An advanced Markov chain algorithm is adopted to efficiently generate samples that follow the aforementioned posterior distribution. Also, an algorithm that re-uses information is proposed in combination with sequential approximate optimization to improve efficiency. Numerical examples illustrate the performance of the proposed framework.
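The proportionality that underpins this decoupling follows from Bayes' theorem in the augmented space: with an instrumental prior density p(theta) on the design parameters and F the failure event, one has the identity below (standard augmented-space relation; the notation is chosen here rather than taken from the paper).

```latex
P_F(\theta) \;=\; \Pr(F \mid \theta)
  \;=\; \frac{\Pr(F)\, p(\theta \mid F)}{p(\theta)}
  \;\propto\; \frac{p(\theta \mid F)}{p(\theta)}
```

Hence sampling the posterior p(theta | F), for instance with Markov chain Monte Carlo, traces the failure probability function up to a normalizing constant, which is what allows the RBDO problem to be solved without nested reliability analyses.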
|
|
|
Yuan, X. K., Liu, S. L., Valdebenito, M. A., Gu, J., & Beer, M. (2021). Efficient procedure for failure probability function estimation in augmented space. Struct. Saf., 92, 102104.
Abstract: An efficient procedure is proposed to estimate the failure probability function (FPF) with respect to design variables, which correspond to distribution parameters of basic structural random variables. The proposed procedure is based on the concept of an augmented reliability problem, which assumes the design variables as uncertain by assigning a prior distribution, transforming the FPF into an expression that includes the posterior distribution of those design variables. The novel contribution of this work consists of expressing this target posterior distribution as an integral, allowing it to be estimated by means of sampling; no distribution fitting is needed, leading to an efficient estimation of the FPF. The proposed procedure is implemented within three different simulation strategies: Monte Carlo simulation, importance sampling and subset simulation; for each of these cases, expressions for the coefficient of variation of the FPF estimate are derived. Numerical examples illustrate the performance of the proposed approaches.
|
|
|
Zhou, C. C., Zhang, H. L., Valdebenito, M. A., & Zhao, H. D. (2022). A general hierarchical ensemble-learning framework for structural reliability analysis. Reliab. Eng. Syst. Saf., 225, 108605.
Abstract: Existing ensemble-learning methods for reliability analysis are usually developed by combining ensemble learning with a learning function. A commonly used strategy is to construct the initial training set and the test set in advance. The training set is used to train the initial ensemble model, while the test set is adopted to allocate weight factors and check the convergence criterion. Reliability analysis focuses more on the local prediction accuracy near the limit state surface than on the global prediction accuracy in the entire space. However, samples in the initial training set and the test set are generally randomly generated, so the learning function may fail to find the truly “best” update samples, and the allocation of weight factors may be suboptimal or even unreasonable. These two points have a detrimental impact on the overall performance of the ensemble model. Thus, we propose a general hierarchical ensemble-learning framework (ELF) for reliability analysis, which consists of two-layer models and three different phases. A novel method called CESM-ELF is proposed by embedding the classical ensemble of surrogate models (CESM) in the proposed ELF. Four examples are investigated to show that CESM-ELF outperforms CESM in prediction accuracy and is more efficient in some cases.
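The weight-allocation step that the framework refines can be illustrated with classical inverse-error weighting of surrogate predictions on a test set; the two toy surrogates and the weighting rule below are illustrative only and do not reproduce the CESM-ELF procedure itself.

```python
import numpy as np

def ensemble_predict(surrogates, X_test, y_test, X_new):
    """Weight each surrogate by the inverse of its mean-squared error on the
    test set, then combine predictions as a weighted average."""
    errors = np.array([np.mean((m(X_test) - y_test) ** 2) for m in surrogates])
    weights = 1.0 / np.maximum(errors, 1e-12)
    weights /= weights.sum()
    preds = np.column_stack([m(X_new) for m in surrogates])
    return preds @ weights

# Two toy surrogates of an (unknown) limit-state function g(x) = x**2 - 2
surrogates = [lambda x: x**2 - 1.9,          # slightly biased quadratic
              lambda x: 2.0 * x - 3.0]       # linear approximation
X_test = np.linspace(0.0, 2.0, 21)
y_test = X_test**2 - 2.0
print(ensemble_predict(surrogates, X_test, y_test, np.array([1.4, 1.5])))
```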
|
|