Armstrong, M., Valencia, J., Lagos, G., & Emery, X. (2022). Constructing Branching Trees of Geostatistical Simulations. Math. Geosci., 54, 711–743.
Abstract: This paper proposes the use of multi-stage stochastic programming with recourse for optimised strategic open-pit mine planning. The key innovations are, firstly, that a branching tree of geostatistical simulations is developed to take account of uncertainty in ore grades, and secondly, scenario reduction techniques are applied to keep the trees to a manageable size. Our example shows that different mine plans would be optimal for the downside case when the deposit turns out to be of lower grade than expected compared to when it is of higher grade than expected. Our approach further provides th
Aylwin, R., Jerez-Hanckes, C., Schwab, C., & Zech, J. (2020). Domain Uncertainty Quantification in Computational Electromagnetics. SIAM-ASA J. Uncertain. Quantif., 8(1), 301–341.
Abstract: We study the numerical approximation of time-harmonic, electromagnetic fields inside a lossy cavity of uncertain geometry. Key assumptions are a possibly high-dimensional parametrization of the uncertain geometry along with a suitable transformation to a fixed, nominal domain. This uncertainty parametrization results in families of countably parametric, Maxwell-like cavity problems that are posed in a single domain, with inhomogeneous coefficients that possess finite, possibly low spatial regularity, but exhibit holomorphic parametric dependence in the differential operator. Our computational scheme is composed of a sparse grid interpolation in the high-dimensional parameter domain and an H(curl)-conforming edge element discretization of the parametric problem in the nominal domain. As a stepping-stone in the analysis, we derive a novel Strang-type lemma for Maxwell-like problems in the nominal domain, which is of independent interest. Moreover, we accommodate arbitrarily small Sobolev regularity of the electric field and also cover uncertain isotropic constitutive or material laws. The shape holomorphy and edge-element consistency error analysis for the nominal problem are shown to imply convergence rates for multilevel Monte Carlo and for quasi-Monte Carlo integration, as well as sparse grid approximations, in uncertainty quantification for computational electromagnetics. They also imply expression rate estimates for deep ReLU networks of shape-to-solution maps in this setting. Finally, our computational experiments confirm the presented theoretical results.
Azar, M., Carrasco, R. A., & Mondschein, S. (2022). Dealing with Uncertain Surgery Times in Operating Room Scheduling. Eur. J. Oper. Res., 299(1), 377–394.
Abstract: The operating theater is one of the most expensive units in the hospital, representing up to 40% of the total expenses. Because of its importance, the operating room scheduling problem has been addressed from many different perspectives since the early 1960s. One of the main difficulties that has reduced the applicability of the current results is the high variability in surgery duration, making schedule recommendations hard to implement. In this work, we propose a time-indexed scheduling formulation to solve the operational problem. Our main contribution is that we propose the use of chance constraints related to the surgery duration's probability distribution for each surgeon to improve the scheduling performance. We show how to implement these chance constraints as linear ones in our time-indexed formulation, enhancing the performance of the resulting schedules significantly. Through data analysis of real historical instances, we develop specific constraints that improve the schedule, reducing the need for overtime without affecting the utilization significantly. Furthermore, these constraints give the operating room manager the possibility of balancing overtime and utilization through a tuning parameter in our formulation. Finally, through simulations and the use of real instances, we report the performance for four different metrics, showing the importance of using historical data to strike the right balance between utilization and overtime.
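The chance-constraint idea can be illustrated with a toy sketch (the lognormal duration data and all numbers are invented for illustration; the paper's actual time-indexed formulation is richer): sizing each surgery's slot by a quantile of that surgeon's historical durations turns the probabilistic constraint P(duration <= slot) >= alpha into a deterministic linear one.

```python
import numpy as np

def quantile_duration(samples, level=0.9):
    """Empirical duration quantile: a deterministic stand-in for the
    chance constraint P(duration <= slot) >= level."""
    return float(np.quantile(samples, level))

def schedule_start_times(duration_samples_per_surgery, level=0.9):
    """Sequence surgeries back-to-back, sizing each slot by the chosen
    quantile of that surgery's historical durations."""
    starts, t = [], 0.0
    for samples in duration_samples_per_surgery:
        starts.append(t)
        t += quantile_duration(samples, level)
    return starts, t  # start times and the resulting makespan bound

rng = np.random.default_rng(0)
histories = [rng.lognormal(mean=4.0, sigma=0.3, size=500) for _ in range(3)]
starts, makespan = schedule_start_times(histories, level=0.9)
```

Raising `level` lengthens the slots, trading utilization for less overtime, which is the tuning knob the abstract describes.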
Bergen, M., & Munoz, F. D. (2018). Quantifying the effects of uncertain climate and environmental policies on investments and carbon emissions: A case study of Chile. Energy Econ., 75, 261–273.
Abstract: In this article we quantify the effect of uncertainty of climate and environmental policies on transmission and generation investments, as well as on CO2 emissions in Chile. We use a two-stage stochastic planning model with recourse or corrective investment options to find optimal portfolios of infrastructure both under perfect information and under uncertainty. Under a series of assumptions, this model is equivalent to the equilibrium of a much more complicated bi-level market model, where a transmission planner chooses investments first and generation firms invest afterwards. We find that optimal investment strategies present important differences depending on the policy scenario. By changing our assumption of how agents will react to this uncertainty we compute bounds on the cost that this uncertainty imposes on the system, which we estimate ranges between 3.2% and 5.7% of the minimum expected system cost of $57.6B, depending on whether or not agents consider uncertainty when choosing investments. We also find that, if agents choose investments using a stochastic planning model, uncertain climate policies can result in nearly 18% more CO2 emissions than the equilibrium levels observed under perfect information. Our results highlight the importance of credible and stable long-term regulations for investors in the electric power industry if the goal is to achieve climate and environmental targets in the most cost-effective manner and to minimize the risk of asset stranding.
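The two-stage recourse structure can be sketched in a few lines (all demand scenarios and costs below are invented; the paper's model covers transmission and generation portfolios, not a single capacity variable):

```python
import numpy as np

# Toy two-stage capacity-planning model with recourse: choose first-stage
# capacity x, then pay for any shortfall once the scenario is revealed.
scenarios = np.array([80.0, 100.0, 130.0])   # possible demand outcomes
probs     = np.array([0.3, 0.5, 0.2])        # scenario probabilities
c_invest, c_shortfall = 1.0, 4.0             # per-unit costs (invented)

def expected_cost(x):
    shortfall = np.maximum(scenarios - x, 0.0)        # recourse decision
    return c_invest * x + probs @ (c_shortfall * shortfall)

# Brute-force first-stage optimization over a grid of candidate capacities.
candidates = np.arange(0.0, 201.0)
x_star = candidates[np.argmin([expected_cost(x) for x in candidates])]
```

The optimum hedges between scenarios rather than planning for any single one, which is the essential difference from a perfect-information plan.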
Chadwick, C., Gironas, J., Gonzalez-Leiva, F., & Aedo, S. (2023). Bias adjustment to preserve changes in variability: the unbiased mapping of GCM changes. Hydrol. Sci. J., Early Access.
Abstract: Standard quantile mapping (QM) performs well, as a bias adjustment method, in removing historical climate biases, but it can significantly alter a global climate model (GCM) signal. Methods that do incorporate GCM changes commonly consider mean changes only. Quantile delta mapping (QDM) is an exception, as it explicitly preserves relative changes in the quantiles, but it might present biases in preserving GCM changes in standard deviation. In this work we propose the unbiased quantile mapping (UQM) method, which by construction preserves GCM changes of the mean and the standard deviation. Synthetic experiments and four Chilean locations are used to compare the performance of UQM against QDM, QM, detrended quantile mapping, and scaled distribution mapping. All the methods outperform QM, but a tradeoff exists between preserving the GCM relative changes in the quantiles (QDM is recommended in this case) and preserving changes in the GCM moments (UQM is recommended in this case).
Keywords: climate change; uncertainty; GCM; bias adjustment; quantile mapping
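A minimal empirical quantile-mapping sketch on synthetic gamma data (plain QM only; UQM and QDM add change-preserving terms not shown here):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each model value to the observed
    distribution via its rank in the historical model run."""
    ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_hist, ranks)

rng = np.random.default_rng(1)
obs   = rng.gamma(2.0, 3.0, 2000)          # "observed" climate
model = rng.gamma(2.0, 3.0, 2000) + 2.0    # biased model (+2 offset)
adjusted = quantile_map(model, obs, model)
```

Applied to the historical period itself, the mapping removes the offset; the paper's point is what happens to the *change signal* when the same transfer function is applied to a future run.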
Chang, Q., Zhou, C. C., Valdebenito, M. A., Liu, H. W., & Yue, Z. F. (2022). A novel sensitivity index for analyzing the response of numerical models with interval inputs. Comput. Methods in Appl. Mech. Eng., 400, 115509.
Abstract: This study proposes a novel sensitivity index to provide essential insights into numerical models whose inputs are characterized by intervals. Based on the interval model and its normalized form, the interval processes are introduced to define a new sensitivity index. The index can represent the individual or joint influence of the interval inputs on the output of a considered model. A double-loop strategy, based on global metamodeling and optimization, is established to calculate the index. Subsequently, the proposed index is theoretically compared with two other existing indices, and it is experimentally applied to three numerical examples and a practical engineering problem of a honeycomb sandwich radome. The results indicate that the proposed index is an effective tool for interval sensitivity analysis.
Keywords: Sensitivity analysis; Interval; Uncertainty; Model; Global metamodeling
Dang, C., Valdebenito, M. A., Faes, M. G. R., Wei, P. F., & Beer, M. (2022). Structural reliability analysis: A Bayesian perspective. Struct. Saf., 99, 102259.
Abstract: Numerical methods play a dominant role in structural reliability analysis, and the goal has long been to produce a failure probability estimate with a desired level of accuracy using a minimum number of performance function evaluations. In the present study, we attempt to offer a Bayesian perspective on the failure probability integral estimation, as opposed to the classical frequentist perspective. For this purpose, a principled Bayesian Failure Probability Inference (BFPI) framework is first developed, which allows one to quantify, propagate and reduce the numerical uncertainty behind the failure probability due to discretization error. In particular, the posterior variance of the failure probability is derived in a semi-analytical form, and the Gaussianity of the posterior failure probability distribution is investigated numerically. Then, a Parallel Adaptive-Bayesian Failure Probability Learning (PA-BFPL) method is proposed within the Bayesian framework. In the PA-BFPL method, a variance-amplified importance sampling technique is presented to evaluate the posterior mean and variance of the failure probability, and an adaptive parallel active learning strategy is proposed to identify multiple updating points at each iteration. Thus, a novel advantage of PA-BFPL is that both prior knowledge and parallel computing can be used to make inference about the failure probability. Four numerical examples are investigated, indicating the potential benefits of advocating a Bayesian approach to failure probability estimation.
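For contrast with the Bayesian view, the classical frequentist baseline is a plain Monte Carlo estimate with its binomial standard error (toy performance function, not from the paper):

```python
import numpy as np

def g(x):
    """Illustrative performance function: failure occurs when g < 0."""
    return 3.0 - x

rng = np.random.default_rng(2)
samples = rng.standard_normal(100_000)
fails = g(samples) < 0                      # indicator of failure
pf_hat = fails.mean()                       # plain Monte Carlo estimate
se_hat = np.sqrt(pf_hat * (1 - pf_hat) / len(samples))  # frequentist s.e.
```

Every one of the 100,000 samples costs a performance-function call, which is exactly the budget problem the Bayesian active-learning machinery is designed to avoid.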
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Interval uncertainty propagation by a parallel Bayesian global optimization method. Appl. Math. Model., 108, 220–235.
Abstract: This paper is concerned with approximating the scalar response of a complex computational model subjected to multiple input interval variables. Such a task is formulated as finding both the global minimum and maximum of a computationally expensive black-box function over a prescribed hyper-rectangle. On this basis, a novel non-intrusive method, called `triple-engine parallel Bayesian global optimization', is proposed. The method begins by assuming a Gaussian process prior (which can also be interpreted as a surrogate model) over the response function. The main contribution lies in developing a novel infill sampling criterion, i.e., a triple-engine pseudo expected improvement strategy, to identify multiple promising points for minimization and/or maximization based on the past observations at each iteration. By doing so, these identified points can be evaluated on the real response function in parallel. In addition, another potential benefit is that both the lower and upper bounds of the model response can be obtained with a single run of the developed method. Four numerical examples with varying complexity are investigated to demonstrate the proposed method against some existing techniques, and results indicate that significant computational savings can be achieved by making full use of prior knowledge and parallel computing.
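The interval-propagation task itself can be stated as a brute-force sketch (toy response function; the paper's Bayesian optimization replaces this exhaustive grid with far fewer model evaluations):

```python
import numpy as np

def response(x1, x2):
    """Stand-in for the expensive black-box model."""
    return np.sin(x1) + 0.5 * x2**2

# Brute-force bounding of the response over the input hyper-rectangle
# [0, pi] x [-1, 2]: evaluate on a dense grid and take min/max.
g1 = np.linspace(0.0, np.pi, 201)
g2 = np.linspace(-1.0, 2.0, 201)
X1, X2 = np.meshgrid(g1, g2)
vals = response(X1, X2)
lower, upper = float(vals.min()), float(vals.max())
```

The grid needs 201 x 201 model runs here; a Gaussian-process-guided search finds the same two bounds with orders of magnitude fewer calls, which is the point of the paper.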
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Parallel adaptive Bayesian quadrature for rare event estimation. Reliab. Eng. Syst. Saf., 225, 108621.
Abstract: Various numerical methods have been extensively studied and used for reliability analysis over the past several decades. However, how to understand the effect of numerical uncertainty (i.e., numerical error due to the discretization of the performance function) on the failure probability is still a challenging issue. The active learning probabilistic integration (ALPI) method offers a principled approach to quantify, propagate and reduce the numerical uncertainty via computation within a Bayesian framework, which has not been fully investigated in the context of probabilistic reliability analysis. In this study, a novel method termed `Parallel Adaptive Bayesian Quadrature' (PABQ) is proposed on the theoretical basis of ALPI, and is aimed at broadening its scope of application. First, the Monte Carlo method used in ALPI is replaced with an importance ball sampling technique so as to reduce the sample size that is needed for rare failure event estimation. Second, a multi-point selection criterion is proposed to enable parallel distributed processing. Four numerical examples are studied to demonstrate the effectiveness and efficiency of the proposed method. It is shown that PABQ can effectively assess small failure probabilities (e.g., as low as 10^-7) with a minimum number of iterations by taking advantage of parallel computing.
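A small importance-sampling sketch shows why rare-event estimation needs more than plain Monte Carlo (toy limit-state function; the paper's importance ball sampling is a different, more general construction):

```python
import numpy as np

def g(x):
    """Failure when g < 0, i.e. x > 4.5: a rare event under N(0, 1)."""
    return 4.5 - x

rng = np.random.default_rng(3)
mu = 4.5                                  # shift the proposal to the failure region
y = rng.standard_normal(20_000) + mu      # samples from N(mu, 1)
w = np.exp(0.5 * mu**2 - mu * y)          # density ratio phi(y) / phi(y - mu)
pf_is = float(np.mean((g(y) < 0) * w))    # importance-sampling estimate
```

The true probability is about 3.4e-6; plain Monte Carlo with 20,000 samples would almost surely return zero, while the weighted estimate recovers the right order of magnitude.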
Dölz, J., Harbrecht, H., Jerez-Hanckes, C., & Multerer M. (2022). Isogeometric multilevel quadrature for forward and inverse random acoustic scattering. Comput. Methods in Appl. Mech. Eng., 388, 114242.
Abstract: We study the numerical solution of forward and inverse time-harmonic acoustic scattering problems by randomly shaped obstacles in three-dimensional space using a fast isogeometric boundary element method. Within the isogeometric framework, realizations of the random scatterer can efficiently be computed by simply updating the NURBS mappings which represent the scatterer. This way, we end up with a random deformation field. In particular, we show that it suffices to know the deformation field's expectation and covariance at the scatterer's boundary to model the surface's Karhunen-Loève expansion. Leveraging the isogeometric framework, we employ multilevel quadrature methods to approximate quantities of interest such as the scattered wave's expectation and variance. By computing the wave's Cauchy data at an artificial, fixed interface enclosing the random obstacle, we can also directly infer quantities of interest in free space. Adopting the Bayesian paradigm, we finally compute the expected shape and variance of the scatterer from noisy measurements of the scattered wave at the artificial interface. Numerical results for the forward and inverse problems validate the proposed approach.
Escapil-Inchauspe, P., & Jerez-Hanckes, C. (2020). Helmholtz Scattering by Random Domains: First-Order Sparse Boundary Elements Approximation. SIAM J. Sci. Comput., 42(5), A2561–A2592.
Abstract: We consider the numerical solution of time-harmonic acoustic scattering by obstacles with uncertain geometries for Dirichlet, Neumann, impedance, and transmission boundary conditions. In particular, we aim to quantify diffracted fields originated by small stochastic perturbations of a given relatively smooth nominal shape. Using first-order shape Taylor expansions, we derive tensor deterministic first-kind boundary integral equations for the statistical moments of the scattering problems considered. These are then approximated by sparse tensor Galerkin discretizations via the combination technique [M. Griebel, M. Schneider, and C. Zenger, A combination technique for the solution of sparse grid problems, in Iterative Methods in Linear Algebra, P. de Groen and P. Beauwens, eds., Elsevier, Amsterdam, 1992, pp. 263-281; H. Harbrecht, M. Peters, and M. Siebenmorgen, J. Comput. Phys., 252 (2013), pp. 128-141]. We supply extensive numerical experiments confirming the predicted error convergence rates with polylogarithmic growth in the number of degrees of freedom and accuracy in approximation of the moments. Moreover, we discuss implementation details such as preconditioning to finally point out further research avenues.
Fina, M., Lauff, C., Faes, M. G. R., Valdebenito, M. A., Wagner, W., & Freitag, S. (2023). Bounding imprecise failure probabilities in structural mechanics based on maximum standard deviation. Struct. Saf., 101, 102293.
Abstract: This paper proposes a framework to calculate the bounds on the failure probability of linear structural systems whose performance is affected by both random variables and interval variables. This kind of problem is known to be very challenging, as it demands coping with aleatoric and epistemic uncertainty explicitly. Inspired by the framework of the operator norm theorem, it is proposed to consider the maximum standard deviation of the structural response as a proxy for detecting the crisp values of the interval parameters that yield the bounds of the failure probability. The scope of application of the proposed approach comprises linear structural systems whose properties may be affected by both aleatoric and epistemic uncertainty and that are subjected to (possibly imprecise) Gaussian loading. Numerical examples indicate that the application of such a proxy leads to substantial numerical advantages when compared to a traditional double-loop approach for coping with imprecise failure probabilities. In fact, the proposed framework makes it possible to decouple the propagation of aleatoric and epistemic uncertainty.
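The double-loop treatment that the paper improves upon can be sketched directly (toy limit state with an imprecise load standard deviation; all values invented):

```python
import numpy as np

def pf_given_theta(theta, n=50_000, seed=4):
    """Inner loop: Monte Carlo failure probability for a fixed value of
    the epistemic (interval) parameter theta, the load std. deviation."""
    rng = np.random.default_rng(seed)      # common random numbers
    x = rng.normal(0.0, theta, n)          # aleatoric load realizations
    return float(np.mean(x > 2.5))         # failure: response exceeds 2.5

# Outer loop: sweep the interval [0.8, 1.2] of the epistemic parameter
# and take the extreme failure probabilities as the sought bounds.
thetas = np.linspace(0.8, 1.2, 9)
pfs = [pf_given_theta(t) for t in thetas]
pf_lower, pf_upper = min(pfs), max(pfs)
```

In this monotone toy case the maximum standard deviation indeed yields the upper bound, which is the proxy behavior the paper exploits to avoid the sweep altogether.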
Fuenzalida, C., Jerez-Hanckes, C., & McClarren, R. G. (2019). Uncertainty Quantification For Multigroup Diffusion Equations Using Sparse Tensor Approximations. SIAM J. Sci. Comput., 41(3), B545–B575.
Abstract: We develop a novel method to compute first and second order statistical moments of the neutron kinetic density inside a nuclear system by solving the energy-dependent neutron diffusion equation. Randomness comes from the lack of precise knowledge of external sources as well as of the interaction parameters, known as cross sections. Thus, the density is itself a random variable. As Monte Carlo simulations entail intense computational work, we are interested in deterministic approaches to quantify uncertainties. By assuming as given the first and second statistical moments of the excitation terms, a sparse tensor finite element approximation of the first two statistical moments of the dependent variables for each energy group can be efficiently computed in one run. Numerical experiments validate our derived convergence rates and point to further research avenues.
Fustos-Toribio, I., Manque-Roa, N., Vasquez Antipan, D., Hermosilla Sotomayor, M., & Gonzalez, V. L. (2022). Rainfall-induced landslide early warning system based on corrected mesoscale numerical models: an application for the southern Andes. Nat. Hazards Earth Syst. Sci., 22(6), 2169–2183.
Abstract: Rainfall-induced landslides (RILs) are a recurrent hazard in the southern Andes. RILs cause loss of life and damage to critical infrastructure. Rainfall-induced landslide early warning systems (RILEWSs) can reduce and mitigate economic and social damages related to RIL events. The southern Andes do not yet have an operational-scale RILEWS. In this contribution, we present a pre-operational RILEWS based on the Weather Research and Forecasting (WRF) model and geomorphological features coupled to logistic models in the southern Andes. The models have been forced using precipitation simulations. We correct the precipitation derived from WRF using 12 weather stations through a bias correction approach. The models were trained using 57 well-characterized RILs and validated by ROC analysis. We show that WRF has strong limitations in representing the spatial variability of the precipitation, so accurate precipitation estimates require a bias correction in the study zone. Using the bias-corrected precipitation and slope, the models demonstrate a high predictive capacity (area under the curve, AUC, of 0.80). We conclude that our proposal could be suitable at an operational level under certain conditions: a reliable RIL database and operational weather networks that allow real-time correction of the mesoscale model in the implemented zone are needed. The RILEWS could become a support to decision-makers during extreme-precipitation events related to climate change in the south of the Andes.
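The ROC/AUC validation used here can be reproduced with the rank-sum (Mann-Whitney) identity (synthetic scores; using 57 positives merely mirrors the RIL count in the abstract):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score of a random positive > score of a random negative)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

rng = np.random.default_rng(8)
labels = np.concatenate([np.ones(57), np.zeros(200)]).astype(int)
scores = np.where(labels == 1, rng.normal(1.2, 1.0, 257), rng.normal(0.0, 1.0, 257))
a = auc(scores, labels)
```

An AUC near 0.8, as reported in the abstract, corresponds to this kind of partial but useful separation between event and non-event scores.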
Gordon, M. A., Vargas, F. J., & Peters, A. A. (2021). Comparison of Simple Strategies for Vehicular Platooning With Lossy Communication. IEEE Access, 9, 103996–104010.
Abstract: This paper studies vehicle platooning with communication channels subject to random data loss. We focus on homogeneous discrete-time platoons in a predecessor-following topology with a constant time headway policy. We assume that each agent in the platoon sends its current position to the immediate follower through a lossy channel modeled as a Bernoulli process. To reduce the negative effects of data loss over the string stability and performance of the platoon, we use simple strategies that modify the measurement, error, and control signals of the feedback control loop, in each vehicle, when a dropout occurs. Such strategies are based on holding the previous value, dropping to zero, or replacing with a prediction based on a simple linear extrapolation. We performed a simulation-based comparison among a set of different strategies, and found that some strategies are favorable in terms of performance, while some others present improvements for string stabilization. These results strongly suggest that proper design of compensation schemes for the communications of interconnected multi-agent systems plays an important role in their performance and their scalability properties.
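The hold-versus-zero compensation strategies compare readily in a toy simulation (a single leader-follower pair with proportional tracking, much simpler than the paper's platoon model with time headway):

```python
import numpy as np

def run_platoon(strategy, p_drop=0.3, steps=200, seed=5):
    """Follower tracks the leader's transmitted position; on a dropout it
    either holds the last received value or substitutes zero."""
    rng = np.random.default_rng(seed)
    leader = np.cumsum(np.full(steps, 1.0))   # leader moves at unit speed
    est, pos, errs = 0.0, 0.0, []
    for k in range(steps):
        if rng.random() > p_drop:             # packet received
            est = leader[k]
        elif strategy == "zero":              # dropout: "drop to zero"
            est = 0.0
        # strategy == "hold": dropout keeps the previous estimate
        pos += 0.5 * (est - pos)              # simple proportional tracking
        errs.append(abs(leader[k] - pos))
    return float(np.mean(errs))

err_hold = run_platoon("hold")
err_zero = run_platoon("zero")
```

With the same dropout pattern, holding the last value keeps the tracking error bounded by the dropout streak length, while resetting to zero repeatedly throws the follower back, which is why the choice of compensation scheme matters for performance.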
Guevara, E., Babonneau, F., Homem-de-Mello, T., & Moret, S. (2020). A machine learning and distributionally robust optimization framework for strategic energy planning under uncertainty. Appl. Energy, 271, 18 pp.
Abstract: This paper investigates how the choice of stochastic approaches and distribution assumptions impacts strategic investment decisions in energy planning problems. We formulate a two-stage stochastic programming model assuming different distributions for the input parameters and show that there is significant discrepancy among the associated stochastic solutions and other robust solutions published in the literature. To remedy this sensitivity issue, we propose a combined machine learning and distributionally robust optimization (DRO) approach which produces more robust and stable strategic investment decisions with respect to uncertainty assumptions. DRO is applied to deal with ambiguous probability distributions, and machine learning is used to restrict the DRO model to a subset of important uncertain parameters, ensuring computational tractability. Finally, we perform an out-of-sample simulation process to evaluate the performance of the solutions. The Swiss energy system is used as a case study throughout the paper to validate the approach.
Homem-de-Mello, T., Kong, Q. X., & Godoy-Barba, R. (2022). A Simulation Optimization Approach for the Appointment Scheduling Problem with Decision-Dependent Uncertainties. INFORMS J. Comput., Early Access.
Abstract: The appointment scheduling problem (ASP) studies how to manage patient arrivals to a healthcare system to improve system performance. An important challenge occurs when some patients may not show up for an appointment. Although the ASP is well studied in the literature, the vast majority of the existing work does not consider the well-observed phenomenon that patient no-show is influenced by the appointment time, the usual decision variable in the ASP. This paper studies the ASP with random service time (exogenous uncertainty) with known distribution and patient decision-dependent no-show behavior (endogenous uncertainty). This problem belongs to the class of stochastic optimization with decision-dependent uncertainties. Such problems are notoriously difficult as they are typically nonconvex. We propose a stochastic projected gradient path (SPGP) method to solve the problem, which requires the development of a gradient estimator of the objective function, a nontrivial task, as the literature on gradient-based optimization algorithms for problems with decision-dependent uncertainty is very scarce and unsuitable for our model. Our method can solve the ASP under arbitrarily smooth show-up probability functions. We present solutions under different patterns of no-show behavior and demonstrate that breaking the assumption of constant show-up probability substantially changes the scheduling solutions. We conduct numerical experiments in a variety of settings to compare our results with those obtained with a distributionally robust optimization method developed in the literature. The cost reduction obtained with our method, which we call the value of distribution information, can be interpreted as how much the system performance can be improved by knowing the distribution of the service times, compared to not knowing it. We observe that the value of distribution information is up to 31% of the baseline cost, and that such value is higher when the system is crowded and/or the waiting time cost is relatively high.
Summary of Contribution: This paper studies an appointment scheduling problem under time-dependent patient no-show behavior, a situation commonly observed in practice. The problem belongs to the class of stochastic optimization problems with decision-dependent uncertainties in the operations research literature. Such problems are notoriously difficult to solve as a result of the lack of convexity, and the present case requires different techniques because of the presence of continuous distributions for the service times. A stochastic projected gradient path method, which includes the development of specialized techniques to estimate the gradient of the objective function, is proposed to solve the problem. For problems with a similar structure, the algorithm can be applied once the gradient estimator of the objective function is obtained. Extensive numerical studies are presented to demonstrate the quality of the solutions, the importance of modeling time-dependent no-shows in appointment scheduling, and the value of using distribution information about the service times.
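The projected-gradient-path idea can be sketched with a generic simultaneous-perturbation gradient estimator standing in for the paper's specialized one (toy smooth objective on a box; everything here is illustrative):

```python
import numpy as np

def project(x, lo, hi):
    """Projection onto the box [lo, hi] (the feasible region here)."""
    return np.clip(x, lo, hi)

def sp_gradient(f, x, eps=1e-2, seed=0):
    """Simultaneous-perturbation stochastic gradient estimate: two function
    evaluations give an unbiased estimate of the gradient direction."""
    rng = np.random.default_rng(seed)
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + eps * delta) - f(x - eps * delta)) / (2 * eps) * delta

# Projected stochastic gradient path on a toy smooth objective.
f = lambda x: float(np.sum((x - 0.7) ** 2))
x = np.zeros(3)
for k in range(200):
    x = project(x - 0.1 * sp_gradient(f, x, seed=k), 0.0, 1.0)
```

Each iteration takes a step along an estimated descent direction and projects back onto the feasible set; the hard part in the paper is building an estimator for an objective whose distribution itself depends on the decision.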
Lagos, G., Espinoza, D., Moreno, E., & Vielma, J. P. (2015). Restricted risk measures and robust optimization. Eur. J. Oper. Res., 241(3), 771–782.
Abstract: In this paper we consider characterizations of the robust uncertainty sets associated with coherent and distortion risk measures. In this context we show that if we are willing to enforce the coherent or distortion axioms only on random variables that are affine or linear functions of the vector of random parameters, we may consider some new variants of the uncertainty sets determined by the classical characterizations. We also show that in the finite probability case these variants are simple transformations of the classical sets. Finally, we present results of computational experiments that suggest that the risk measures associated with these new uncertainty sets can help mitigate estimation errors of the Conditional Value-at-Risk.
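The Conditional Value-at-Risk mentioned here has a one-line empirical estimator (illustrative standard-normal losses, not data from the paper):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: mean loss in the (1 - alpha)
    tail beyond the alpha-quantile (the Value-at-Risk)."""
    var = np.quantile(losses, alpha)
    return float(losses[losses >= var].mean())

rng = np.random.default_rng(6)
losses = rng.standard_normal(100_000)          # illustrative loss samples
var95 = float(np.quantile(losses, 0.95))       # Value-at-Risk at 95%
cvar95 = cvar(losses, 0.95)                    # CVaR >= VaR by construction
```

Because the tail mean depends on few samples, the empirical CVaR is noisy, which is the estimation-error issue the restricted uncertainty sets aim to mitigate.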
Navarro, A., Favereau, M., Lorca, A., Olivares, D., & Negrete-Pincetic, M. (2024). Medium-term stochastic hydrothermal scheduling with short-term operational effects for large-scale power and water networks. Appl. Energy, 358, 122554.
Abstract: The high integration of variable renewable sources in electric power systems entails a series of challenges inherent to their intrinsic variability. A critical challenge is to correctly value the water available in reservoirs in hydrothermal systems, considering the flexibility that it provides. In this context, this paper proposes a medium-term multistage stochastic optimization model for the hydrothermal scheduling problem, solved with the stochastic dual dynamic programming algorithm. The proposed model includes operational constraints and simplified mathematical expressions of relevant operational effects that allow a more informed assessment of the water value by considering, among others, the flexibility necessary for the operation of the system. In addition, the hydrological uncertainty in the model is represented by a vector autoregressive process, which captures spatio-temporal correlations between the different hydro inflows. A calibration method for the simplified mathematical expressions of operational effects is also proposed, which allows a detailed short-term operational model to be correctly linked to the proposed medium-term linear model. Through extensive experiments for the Chilean power system, the results show that the difference between the expected operating costs of the proposed medium-term model and the costs obtained through a detailed short-term operational model was only 0.1%, in contrast to the 9.3% difference obtained when a simpler base model is employed. This shows the effectiveness of the proposed approach. Further, this difference is also reflected in the estimation of the water value, which is critical in water shortage situations.
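A vector autoregressive inflow model of the kind described can be sketched as follows (VAR(1) with invented coefficients, not calibrated to the Chilean system):

```python
import numpy as np

# Toy VAR(1) process for two spatially correlated hydro inflows:
# x_t = mean + A (x_{t-1} - mean) + noise_t
A = np.array([[0.6, 0.1],
              [0.2, 0.5]])            # lag-1 coefficient matrix (stable)
mean = np.array([100.0, 80.0])        # long-run inflow levels

rng = np.random.default_rng(7)
x = mean.copy()
path = []
for _ in range(500):
    noise = rng.multivariate_normal([0.0, 0.0], [[25.0, 10.0], [10.0, 25.0]])
    x = mean + A @ (x - mean) + noise
    path.append(x.copy())
path = np.array(path)
corr = float(np.corrcoef(path[:, 0], path[:, 1])[0, 1])
```

Both the off-diagonal entries of A and the correlated noise contribute to the spatio-temporal dependence between the two inflow series, which is what the scenario tree in the scheduling model needs to reproduce.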
Ni, P. H., Jerez, D. J., Fragkoulis, V. C., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Operator Norm-Based Statistical Linearization to Bound the First Excursion Probability of Nonlinear Structures Subjected to Imprecise Stochastic Loading. ASCE-ASME J. Risk Uncertain. Eng. Syst. A-Civ. Eng., 8(1), 04021086.
Abstract: This paper presents a highly efficient approach for bounding the responses and probability of failure of nonlinear models subjected to imprecisely defined stochastic Gaussian loads. Typically, such computations involve solving a nested double-loop problem, where the propagation of the aleatory uncertainty has to be performed for each realization of the epistemic parameters. Apart from near-trivial cases, such computation is generally intractable without resorting to surrogate modeling schemes, especially in the context of performing nonlinear dynamical simulations. The recently introduced operator norm framework allows for breaking this double loop by determining those values of the epistemic uncertain parameters that produce bounds on the probability of failure a priori. However, the method in its current form is only applicable to linear models due to the adopted assumptions in the derivation of the involved operator norms. In this paper, the operator norm framework is extended and generalized by resorting to the statistical linearization methodology to