Araujo, J., Ducoffe, G., Nisse, N., & Suchan, K. (2018). On interval number in cycle convexity. Discrete Math. Theor. Comput. Sci., 20(1), 35 pp.
Abstract: Recently, Araujo et al. [Manuscript in preparation, 2017] introduced the notion of Cycle Convexity of graphs. In their seminal work, they studied the graph convexity parameter called hull number for this new graph convexity, and they presented some of its applications in Knot theory. Roughly, the tunnel number of a knot embedded in a plane is upper bounded by the hull number of a corresponding planar 4-regular graph in cycle convexity. In this paper, we go further in the study of this new graph convexity and study the interval number of a graph in cycle convexity. This parameter is, alongside the hull number, one of the most studied parameters in the literature on graph convexities. Precisely, given a graph G, its interval number in cycle convexity, denoted by in_cc(G), is the minimum cardinality of a set S ⊆ V(G) such that every vertex w ∈ V(G) \ S has two distinct neighbors u, v ∈ S such that u and v lie in the same connected component of G[S], i.e., the subgraph of G induced by the vertices in S. In this work, we first provide bounds on in_cc(G) and its relations to other graph convexity parameters, and explore its behaviour on grids. Then, we present some hardness results by showing that deciding whether in_cc(G) ≤ k is NP-complete, even if G is a split graph or a bounded-degree planar graph, and that the problem is W[2]-hard in bipartite graphs when k is the parameter. As a consequence, we obtain that in_cc(G) cannot be approximated up to a constant factor in the classes of split graphs and bipartite graphs (unless P = NP). On the positive side, we present polynomial-time algorithms to compute in_cc(G) for outerplanar graphs, cobipartite graphs and interval graphs. We also present fixed-parameter tractable (FPT) algorithms to compute it for (q, q-4)-graphs when q is the parameter and for general graphs G when parameterized either by the treewidth or the neighborhood diversity of G. Some of our hardness results and positive results are not known to hold for related graph convexities and domination problems. We hope that the design of our new reductions and polynomial-time algorithms can be helpful in order to advance the study of related graph problems.
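
Note: a minimal Python sketch (not code from the paper) of the defining condition above. It checks whether a candidate set S satisfies the cycle-convexity interval property; in_cc(G) could then be obtained by brute force over subsets. The adjacency-dictionary representation is an implementation choice for illustration.

```python
from collections import deque

def is_cc_interval_set(adj, S):
    """adj: dict vertex -> set of neighbors; S: candidate vertex set."""
    S = set(S)
    comp, cid = {}, 0               # vertex in S -> component id of G[S]
    for s in S:
        if s in comp:
            continue
        comp[s] = cid
        queue = deque([s])
        while queue:                # BFS inside the induced subgraph G[S]
            u = queue.popleft()
            for v in adj[u] & S:
                if v not in comp:
                    comp[v] = cid
                    queue.append(v)
        cid += 1
    for w in set(adj) - S:          # each outside vertex needs two distinct
        counts = {}                 # S-neighbors in one component of G[S]
        for u in adj[w] & S:
            counts[comp[u]] = counts.get(comp[u], 0) + 1
        if not any(c >= 2 for c in counts.values()):
            return False
    return True

# C4 (a 4-cycle): {0, 1} fails, {0, 1, 2} works, so in_cc(C4) <= 3.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(is_cc_interval_set(c4, {0, 1}), is_cc_interval_set(c4, {0, 1, 2}))
```
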
Chang, Q., Zhou, C. C., Valdebenito, M. A., Liu, H. W., & Yue, Z. F. (2022). A novel sensitivity index for analyzing the response of numerical models with interval inputs. Comput. Methods Appl. Mech. Eng., 400, 115509.
Abstract: This study proposes a novel sensitivity index to provide essential insights into numerical models whose inputs are characterized by intervals. Based on the interval model and its normalized form, interval processes are introduced to define a new sensitivity index. The index can represent the individual or joint influence of the interval inputs on the output of the model under consideration. A double-loop strategy, based on global metamodeling and optimization, is established to calculate the index. Subsequently, the proposed index is theoretically compared with two other existing indices, and it is experimentally applied to three numerical examples and a practical engineering problem of a honeycomb sandwich radome. The results indicate that the proposed index is an effective tool for interval sensitivity analysis.
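
Note: the abstract does not reproduce the index's closed form. As a hedged illustration of the double-loop idea only, the sketch below bounds a toy model's response over an input hyper-rectangle by optimization (inner loop) and reports how much the output interval shrinks when one input is pinched to its midpoint (outer loop) — a crude stand-in for an interval sensitivity measure, not the paper's index.

```python
# Hedged sketch of a double-loop interval study; the "pinching" measure
# below is an illustrative stand-in, NOT the index defined in the paper.
import numpy as np
from scipy.optimize import differential_evolution

def output_interval(model, bounds):
    """Inner loop: bound the response over the input hyper-rectangle."""
    lo = differential_evolution(model, bounds, seed=0).fun
    hi = -differential_evolution(lambda x: -model(x), bounds, seed=0).fun
    return lo, hi

def pinched_sensitivity(model, bounds, i, eps=1e-9):
    """Outer loop: shrinkage of the output interval when input i is
    pinched to its midpoint (larger shrinkage = more influential)."""
    full_lo, full_hi = output_interval(model, bounds)
    mid = 0.5 * (bounds[i][0] + bounds[i][1])
    pinched = list(bounds)
    pinched[i] = (mid - eps, mid + eps)
    lo, hi = output_interval(model, pinched)
    return 1.0 - (hi - lo) / (full_hi - full_lo)

model = lambda x: x[0] ** 2 + 2.0 * x[0] * x[1]     # toy interval model
bounds = [(-1.0, 1.0), (0.5, 1.5)]
print([round(pinched_sensitivity(model, bounds, i), 3) for i in range(2)])
```
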
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Interval uncertainty propagation by a parallel Bayesian global optimization method. Appl. Math. Model., 108, 220–235.
Abstract: This paper is concerned with approximating the scalar response of a complex computational model subjected to multiple interval input variables. Such a task is formulated as finding both the global minimum and maximum of a computationally expensive black-box function over a prescribed hyper-rectangle. On this basis, a novel non-intrusive method, called "triple-engine parallel Bayesian global optimization", is proposed. The method begins by assuming a Gaussian process prior (which can also be interpreted as a surrogate model) over the response function. The main contribution lies in developing a novel infill sampling criterion, i.e., a triple-engine pseudo expected improvement strategy, to identify multiple promising points for minimization and/or maximization based on the past observations at each iteration. By doing so, these identified points can be evaluated on the real response function in parallel. Another potential benefit is that both the lower and upper bounds of the model response can be obtained with a single run of the developed method. Four numerical examples with varying complexity are investigated to demonstrate the proposed method against some existing techniques, and the results indicate that significant computational savings can be achieved by making full use of prior knowledge and parallel computing.
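
Note: the triple-engine pseudo expected improvement criterion is not specified in the abstract. The sketch below illustrates the underlying idea with plain expected improvement (EI) on a Gaussian-process surrogate, proposing one infill point per bound each iteration; in the paper's scheme, such candidates are evaluated in parallel. The toy response function is an assumption.

```python
# Sketch: Bayesian global optimization for BOTH response bounds using a
# GP surrogate and plain expected improvement (EI); the paper's
# triple-engine pseudo-EI criterion is not reproduced here.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ei(mu, sigma, best, maximize=False):
    """Expected improvement of the surrogate over the incumbent `best`."""
    imp = (mu - best) if maximize else (best - mu)
    z = np.where(sigma > 0, imp / sigma, 0.0)
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3.0 * x) + 0.5 * x      # toy "expensive" black box
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (5, 1))
y = f(X).ravel()
grid = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)
for _ in range(15):
    gp = GaussianProcessRegressor(RBF(), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    # One infill point per bound; both could be run in parallel.
    x_min = grid[np.argmax(ei(mu, sd, y.min()))]
    x_max = grid[np.argmax(ei(mu, sd, y.max(), maximize=True))]
    X = np.vstack([X, x_min, x_max])
    y = np.append(y, [f(x_min)[0], f(x_max)[0]])
print("estimated response interval:", (y.min().round(3), y.max().round(3)))
```
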
de la Cruz, R., Salinas, H. S., & Meza, C. (2022). Reliability Estimation for Stress-Strength Model Based on Unit-Half-Normal Distribution. Symmetry, 14(4), 837.
Abstract: Many lifetime distribution models have successfully served as population models for risk analysis and reliability mechanisms. We propose a novel estimation procedure for stress-strength reliability in the case of two independent unit-half-normal distributions, which can fit asymmetrical data with either positive or negative skew and different shape parameters. We obtain the maximum likelihood estimator of the reliability, its asymptotic distribution, and exact and asymptotic confidence intervals. In addition, confidence intervals for the model parameters are constructed by using bootstrap techniques. We study the performance of the estimators based on Monte Carlo simulations, the mean squared error, average bias and length, and coverage probabilities. Finally, we apply the proposed reliability model in an analysis of burr measurements on iron sheets.
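
Note: a hedged Monte Carlo illustration of the stress-strength reliability R = P(X < Y). It assumes the unit-half-normal variate arises as H/(1+H) with H half-normal — a construction assumed here, not quoted from the paper, which derives maximum likelihood and interval estimators rather than simulating R directly.

```python
# Hedged Monte Carlo check of R = P(X < Y) for two independent
# unit-half-normal variates, assuming the construction U = H / (1 + H)
# with H half-normal (this transformation is an assumption here).
import numpy as np

def runit_half_normal(sigma, size, rng):
    h = np.abs(rng.normal(0.0, sigma, size))    # half-normal sample
    return h / (1.0 + h)                        # map to the unit interval

rng = np.random.default_rng(42)
n = 1_000_000
x = runit_half_normal(0.8, n, rng)   # stress
y = runit_half_normal(1.5, n, rng)   # strength
r_hat = np.mean(x < y)
se = np.sqrt(r_hat * (1 - r_hat) / n)
print(f"R ~ {r_hat:.4f} +/- {1.96 * se:.4f}  (95% normal-approx CI)")
```
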
Faes, M. G. R., Valdebenito, M. A., Yuan, X. K., Wei, P. F., & Beer, M. (2021). Augmented reliability analysis for estimating imprecise first excursion probabilities in stochastic linear dynamics. Adv. Eng. Softw., 155, 102993.
Abstract: Imprecise probability allows quantifying the level of safety of a system taking into account the effect of both aleatory and epistemic uncertainty. The practical estimation of an imprecise probability is usually quite demanding from a numerical viewpoint, as it is necessary to propagate separately both types of uncertainty, leading in practical cases to a nested implementation in the so-called double loop approach. In view of this issue, this contribution presents an alternative approach that avoids the double loop by replacing the imprecise probability problem by an augmented, purely aleatory reliability analysis. Then, with the help of Bayes' theorem, it is possible to recover an expression for the failure probability as an explicit function of the imprecise parameters from the augmented reliability problem, which ultimately allows calculating the imprecise probability. The implementation of the proposed framework is investigated within the context of imprecise first excursion probability estimation of uncertain linear structures subject to imprecisely defined stochastic quantities and crisp stochastic loads. The associated augmented reliability problem is solved within the context of Directional Importance Sampling, leading to an improved accuracy at reduced numerical costs. The application of the proposed approach is investigated by means of two examples. The results obtained indicate that the proposed approach can be highly efficient and accurate.
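
Note: a toy sketch of the decoupling idea described above — treat the epistemic parameter theta as an auxiliary random variable, run one plain Monte Carlo reliability analysis in the augmented space, and recover P_f(theta) pointwise via Bayes' theorem. The limit state, distributions, and histogram-based density estimate are assumptions for illustration; the paper uses Directional Importance Sampling rather than crude Monte Carlo.

```python
# Hedged sketch of the augmented-reliability idea: with theta augmented
# as a random variable, Bayes' theorem gives
#   P_f(theta) = P(F) * p(theta | F) / p(theta),
# so one sampling run yields the failure probability for every theta.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
theta = rng.uniform(2.0, 4.0, n)    # epistemic mean, interval [2, 4]
x = rng.normal(theta, 1.0)          # aleatory variable X ~ N(theta, 1)
fail = x > 6.0                      # toy limit state: failure if X > 6
p_f_total = fail.mean()             # P(F) in the augmented space

# p(theta | F) by histogram; p(theta) is uniform, density 1/2 on [2, 4].
edges = np.linspace(2.0, 4.0, 21)
hist, _ = np.histogram(theta[fail], bins=edges, density=True)
p_f_theta = p_f_total * hist / 0.5  # pointwise Bayes inversion
print("bounds on P_f over theta:", p_f_theta.min(), p_f_theta.max())
```
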
Goles, E., Slapnicar, I., & Lardies, M. A. (2021). Universal Evolutionary Model for Periodical Species. Complexity, 2021, 2976351.
Abstract: Real-world examples of periodical species range from cicadas, whose life cycles are large prime numbers, like 13 or 17, to bamboos, whose periods are large multiples of small primes, like 40 or even 120. The periodicity is caused by the interaction of species, be it a predator-prey relationship, symbiosis, commensalism, or the competitive exclusion principle. We propose a simple mathematical model which explains and models all those principles, including the extremal cases listed above. This rather universal, qualitative model is based on the concept of a local fitness function, where a randomly chosen new period is selected if the value of the global fitness function of the species increases. Arithmetically speaking, the different interactions are related to only four principles: given a pair of integer periods, either (1) their greatest common divisor is one, (2) one of the periods is prime, (3) both periods are equal, or (4) one period is an integer multiple of the other.
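
Note: a toy simulation (an assumption, not the paper's exact model) of the selection mechanism described above: a prey species draws a random new period and keeps it only when a gcd-based fitness — fewer co-emergences with the predator — strictly increases, which drives the two periods toward coprimality, i.e., principle (1).

```python
# Toy illustration of period evolution by fitness-improving mutations.
# The gcd-based fitness below is an assumed stand-in for the paper's
# local/global fitness functions.
import math
import random

def fitness(prey_p, predator_p):
    # Co-emergence happens every lcm(prey_p, predator_p) years, i.e.
    # every predator_p / gcd prey generations; larger is safer for prey.
    return predator_p // math.gcd(prey_p, predator_p)

random.seed(7)
predator_p, prey_p = 6, 4
for _ in range(1000):
    candidate = random.randint(2, 20)     # randomly chosen new period
    if fitness(candidate, predator_p) > fitness(prey_p, predator_p):
        prey_p = candidate                # keep only strict improvements
print("evolved prey period:", prey_p)     # ends coprime with 6
```
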
Leite, D., Skrjanc, I., Blazic, S., Zdesar, A., & Gomide, F. (2023). Interval incremental learning of interval data streams and application to vehicle tracking. Inf. Sci., 630, 1–22.
Abstract: This paper presents a method called Interval Incremental Learning (IIL) to capture spatial and temporal patterns in uncertain data streams. The patterns are represented by information granules and a granular rule base with the purpose of developing explainable, human-centered computational models of virtual and physical systems. Fundamentally, interval data are either included in wider and more meaningful information granules recursively, or used for structural adaptation of the rule base. An Uncertainty-Weighted Recursive-Least-Squares (UW-RLS) method is proposed to update the affine local functions associated with the rules. Online recursive procedures that build interval-based models from scratch and guarantee balanced information granularity are described. The procedures assure stable and understandable rule-based modeling. In general, the model can play the role of a predictor, a controller, or a classifier, with online sample-by-sample structural adaptation and parameter estimation done concurrently. The IIL method is aligned with issues and needs of the Internet of Things, Big Data processing, and eXplainable Artificial Intelligence. An application example concerning real-time land-vehicle localization and tracking in an uncertain environment illustrates the usefulness of the method. We also provide the Driving Through Manhattan interval dataset to foster future investigation.
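
Note: the UW-RLS update is not spelled out in the abstract. The sketch below is ordinary weighted recursive least squares in which each interval sample is reduced to its midpoint and weighted by the inverse of its width — an assumed weighting, shown only to illustrate how uncertainty-weighted recursive estimation of affine local models can work.

```python
# Hedged sketch: weighted recursive least squares on interval data, with
# each sample's weight taken as the inverse of its interval width.
# This weighting is an assumption, not necessarily the paper's UW-RLS.
import numpy as np

class WeightedRLS:
    def __init__(self, dim, p0=1e3):
        self.theta = np.zeros(dim)      # affine-model parameters
        self.P = p0 * np.eye(dim)       # inverse-covariance-like matrix

    def update(self, x, y, w=1.0):
        Px = self.P @ x
        k = Px / (1.0 / w + x @ Px)             # weighted gain vector
        self.theta += k * (y - x @ self.theta)  # prediction-error step
        self.P -= np.outer(k, Px)

rls = WeightedRLS(dim=2)
rng = np.random.default_rng(3)
for _ in range(200):
    x = np.array([1.0, rng.uniform(0.0, 1.0)])        # [bias, input]
    lo, hi = 2 * x[1] + 1 - 0.1, 2 * x[1] + 1 + 0.1   # interval output
    mid, width = (lo + hi) / 2, hi - lo
    rls.update(x, mid, w=1.0 / max(width, 1e-6))
print("estimated affine map:", rls.theta)             # ~ [1.0, 2.0]
```
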
Leiva, V., Tejo, M., Guiraud, P., Schmachtenberg, O., Orio, P., & Marmolejo-Ramos, F. (2015). Modeling neural activity with cumulative damage distributions. Biol. Cybern., 109(4-5), 421–433.
Abstract: Neurons transmit information as action potentials or spikes. Due to the inherent randomness of the inter-spike intervals (ISIs), probabilistic models are often used for their description. Cumulative damage (CD) distributions are a family of probabilistic models that has been widely considered for describing time-related cumulative processes. This family allows us to consider certain deterministic principles for modeling ISIs from a probabilistic viewpoint and to link its parameters to values with biological interpretation. The CD family includes the Birnbaum-Saunders and inverse Gaussian distributions, which possess distinctive properties and theoretical arguments useful for ISI description. We expand the use of CD distributions to the modeling of neural spiking behavior, mainly by testing the suitability of the Birnbaum-Saunders distribution, which has not been studied in the setting of neural activity. We validate this expansion with original experimental and simulated electrophysiological data.
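
Note: SciPy ships the Birnbaum-Saunders distribution under its reliability-engineering name `fatiguelife`, so a basic ISI fit in the spirit of the paper can be sketched as below; the synthetic data are an illustration, not the paper's recordings.

```python
# Sketch: fitting a Birnbaum-Saunders model to inter-spike intervals.
# scipy.stats.fatiguelife IS the Birnbaum-Saunders distribution; the
# synthetic ISI sample below is an assumption for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
isi = stats.fatiguelife.rvs(0.6, scale=0.05, size=1000, random_state=rng)

# Fix location at 0 so the two free parameters keep their interpretation.
shape, loc, scale = stats.fatiguelife.fit(isi, floc=0.0)
ks = stats.kstest(isi, "fatiguelife", args=(shape, loc, scale))
print(f"alpha~{shape:.3f}, beta~{scale:.3f}, KS p-value={ks.pvalue:.3f}")
```
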
Osorio-Valenzuela, L., Pereira, J., Quezada, F., & Vasquez, O. C. (2019). Minimizing the number of machines with limited workload capacity for scheduling jobs with interval constraints. Appl. Math. Model., 74, 512–527.
Abstract: In this paper, we consider a parallel machine scheduling problem in which machines have a limited workload capacity and jobs have deadlines and release dates. The problem is motivated by the operation of energy storage management systems for microgrids under emergency conditions and generalizes some problems that have already been studied in the literature for their theoretical value. In this work, we propose heuristic and exact algorithms to solve the problem. The heuristics are adaptations of classical bin packing heuristics in which additional conditions on the feasibility of a solution are imposed, whereas the exact method is a branch-and-price approach. The results show that the branch-and-price approach is able to optimally solve random instances with up to 250 jobs within a time limit of one hour, while the heuristic procedures provide near-optimal solutions within reduced running times. Finally, we also provide additional complexity results for a special case of the problem.
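
Note: a hedged sketch of the bin-packing-style heuristic described above: first-fit over machines, accepting a job only if the machine's workload capacity holds and a deadline-feasibility check passes. The preemptive EDF test used here is an assumption — the paper's feasibility conditions (and its preemptive or nonpreemptive setting) may differ.

```python
# Hedged sketch: first-fit machine minimization with a workload-capacity
# check plus a preemptive earliest-deadline-first (EDF) feasibility test.
# The EDF test is an assumed stand-in for the paper's conditions.
import heapq

def edf_feasible(jobs):
    """jobs: list of (proc_time, release, deadline); preemptive EDF test."""
    jobs = sorted(jobs, key=lambda j: j[1])       # by release date
    heap, t, i = [], 0, 0                         # heap: (deadline, remaining)
    while i < len(jobs) or heap:
        if not heap:
            t = max(t, jobs[i][1])                # idle until next release
        while i < len(jobs) and jobs[i][1] <= t:
            heapq.heappush(heap, (jobs[i][2], jobs[i][0]))
            i += 1
        d, rem = heapq.heappop(heap)              # most urgent job
        next_r = jobs[i][1] if i < len(jobs) else float("inf")
        run = min(rem, next_r - t)                # run until done or arrival
        t += run
        if run < rem:
            heapq.heappush(heap, (d, rem - run))  # preempted, resume later
        elif t > d:
            return False                          # deadline missed
    return True

def min_machines(jobs, capacity):
    """First-fit over machines with capacity and EDF feasibility checks."""
    machines = []                                 # each: [total_load, jobs]
    for job in sorted(jobs, key=lambda j: -j[0]): # longest job first
        for m in machines:
            if m[0] + job[0] <= capacity and edf_feasible(m[1] + [job]):
                m[0] += job[0]
                m[1].append(job)
                break
        else:
            machines.append([job[0], [job]])
    return len(machines)

jobs = [(3, 0, 5), (2, 1, 4), (4, 0, 10), (2, 6, 9)]
print(min_machines(jobs, capacity=8))             # -> 2 machines
```
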
Yuan, X. K., Faes, M. G. R., Liu, S. L., Valdebenito, M. A., & Beer, M. (2021). Efficient imprecise reliability analysis using the Augmented Space Integral. Reliab. Eng. Syst. Saf., 210, 107477.
Abstract: This paper presents an efficient approach to compute the bounds on the reliability of a structure subjected to uncertain parameters described by means of imprecise probabilities. These imprecise probabilities arise from epistemic uncertainty in the definition of the hyper-parameters of a set of random variables that describe aleatory uncertainty in some of the structure's properties. Typically, such a calculation involves the solution of a so-called double-loop problem, where a crisp reliability problem is repeatedly solved to determine which realization of the epistemic uncertainties yields the worst or best case with respect to structural safety. The approach in this paper aims at decoupling this double loop by virtue of the Augmented Space Integral. The core idea of the method is to infer a functional relationship between the epistemically uncertain hyper-parameters and the probability of failure. Then, this functional relationship can be used to determine the best- and worst-case behavior with respect to the probability of failure. Three case studies are included to illustrate the effectiveness and efficiency of the developed methods.
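
Note: a toy sketch of the decoupling idea described above — sample the epistemic hyper-parameter from an auxiliary density, run a single sampling stage, then fit a smooth surrogate for the functional relationship P_f(theta) and extremize it over the epistemic range. Logistic regression is a stand-in surrogate here, not the functional form used in the paper, and the limit state is an assumption.

```python
# Hedged sketch: recover P_f(theta) from ONE augmented sampling run by
# regressing the failure indicator on theta; the surrogate choice and
# limit state are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200_000
theta = rng.uniform(0.5, 1.5, n)     # epistemic std of the load
x = rng.normal(0.0, theta)           # aleatory load, X ~ N(0, theta^2)
fail = (x > 3.0).astype(int)         # toy limit state: failure if X > 3

# Smooth surrogate for P_f(theta), then extremize over the interval.
surrogate = LogisticRegression().fit(theta.reshape(-1, 1), fail)
grid = np.linspace(0.5, 1.5, 101).reshape(-1, 1)
p_f = surrogate.predict_proba(grid)[:, 1]
print("P_f bounds over theta:", p_f.min(), p_f.max())
```
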