Araneda, A. A., & Villena, M. J. (2021). Computing the CEV option pricing formula using the semiclassical approximation of path integral. J. Comput. Appl. Math., 388, 113244.
Abstract: The CEV model allows volatility to change with the underlying price, capturing a basic empirical regularity highly relevant for option pricing: the volatility smile. Nevertheless, the standard CEV solution, based on the noncentral chi-square approach, still incurs high computational times. In this paper, the CEV option pricing formula is computed using the semiclassical approximation of Feynman's path integral. Our simulations show that, for pricing European call options, the method is efficient and accurate compared to the standard CEV solution.
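For context, the model being priced is the CEV diffusion dS = rS dt + sigma S^beta dW. The following is a minimal Monte Carlo sketch of a European call under these dynamics via Euler discretization; it is illustrative only (all parameter values are hypothetical) and is not the paper's path-integral method nor the closed-form noncentral chi-square solution:

```python
import numpy as np

def cev_call_mc(s0, k, r, sigma, beta, t, n_steps=200, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call under the CEV model
    dS = r*S*dt + sigma*S**beta*dW, using a plain Euler scheme."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        s += r * s * dt + sigma * np.maximum(s, 0.0) ** beta * dw
        s = np.maximum(s, 0.0)  # absorb paths at zero (CEV boundary behavior)
    payoff = np.maximum(s - k, 0.0)
    return float(np.exp(-r * t) * payoff.mean())
```

With beta = 1 the model reduces to geometric Brownian motion, so the estimate can be sanity-checked against the Black-Scholes formula.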

Aylwin, R., & Jerez-Hanckes, C. (2023). Finite-Element Domain Approximation for Maxwell Variational Problems on Curved Domains. SIAM J. Numer. Anal., 61(3), 1139–1171.
Abstract: We consider the problem of domain approximation in finite element methods for the Maxwell equations on curved domains, i.e., when affine or polynomial meshes fail to cover the domain of interest exactly. In such cases, one is forced to approximate the domain by a sequence of polyhedral domains arising from inexact meshes. We deduce conditions on the quality of these approximations that ensure rates of error convergence between the discrete solutions in the approximate domains and the continuous solution in the original domain.

Barrera, J., Cancela, H., & Moreno, E. (2015). Topological optimization of reliable networks under dependent failures. Oper. Res. Lett., 43(2), 132–136.
Abstract: We address the design problem of a reliable network. Previous work assumes that link failures are independent. We discuss the impact of dropping this assumption. We show that under a common-cause failure model, dependencies between failures can affect the optimal design. We also provide an integer-programming formulation to solve this problem. Furthermore, we discuss how the dependence between the links that participate in the solution and those that do not can be handled. Other dependency models are discussed as well.
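A tiny numeric illustration of why the independence assumption matters: under a common-cause shock model (each link fails due to its own shock or a shared one), a two-link parallel system fails far more often than an independent model matched to the same marginal link failure probability would predict. The shock structure and probabilities below are hypothetical, not the paper's model:

```python
def parallel_failure_probs(p_ind, p_common):
    """Failure probability of a two-link parallel system when each link fails
    due to its own shock (prob p_ind) or a shared common-cause shock
    (prob p_common), compared against an independent model with the same
    marginal link failure probability."""
    q = 1 - (1 - p_ind) * (1 - p_common)               # marginal link failure prob
    dependent = p_common + (1 - p_common) * p_ind ** 2  # both links down
    independent = q ** 2                                # naive independence
    return dependent, independent
```

For p_ind = p_common = 0.05 the dependent system failure probability (0.052375) exceeds the independent prediction (about 0.0095) by a factor of five, which is exactly the kind of gap that can change the optimal topology.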

Barrera, J., Homem-de-Mello, T., Moreno, E., Pagnoncelli, B. K., & Canessa, G. (2016). Chance-constrained problems and rare events: an importance sampling approach. Math. Program., 157(1), 153–189.
Abstract: We study chance-constrained problems in which the constraints involve the probability of a rare event. We discuss the relevance of such problems and show that the existing sampling-based algorithms cannot be applied directly in this case, since they require an impractical number of samples to yield reasonable solutions. We argue that importance sampling (IS) techniques, combined with a Sample Average Approximation (SAA) approach, can be effectively used in such situations, provided that variance can be reduced uniformly with respect to the decision variables. We give sufficient conditions to obtain such uniform variance reduction and prove asymptotic convergence of the combined SAA-IS approach. As often happens with IS techniques, the practical performance of the proposed approach relies on exploiting the structure of the problem under study; in our case, we work with a telecommunications problem with Bernoulli input distributions and show how variance can be reduced uniformly over a suitable approximation of the feasibility set by choosing proper parameters for the IS distributions. Although some of the results are specific to this problem, we are able to draw general insights that can be useful for other classes of problems. We present numerical results to illustrate our findings.
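The core IS ingredient for Bernoulli inputs is exponential tilting: sample each component under a larger success probability and reweight by the likelihood ratio. The sketch below estimates a rare binomial tail this way; the specific numbers and the choice of tilted probability are illustrative, not taken from the paper:

```python
import numpy as np

def is_rare_tail(n=100, p=0.1, threshold=30, q=0.3, n_samples=50_000, seed=0):
    """Importance-sampling estimate of P(S >= threshold) for S ~ Binomial(n, p),
    sampling Bernoulli components under a tilted probability q and reweighting
    by the likelihood ratio (computed in log form for numerical stability)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, n)) < q      # samples under the IS distribution
    s = x.sum(axis=1)
    # likelihood ratio: (p/q)^s * ((1-p)/(1-q))^(n-s)
    log_w = s * np.log(p / q) + (n - s) * np.log((1 - p) / (1 - q))
    return float(np.mean((s >= threshold) * np.exp(log_w)))
```

Crude Monte Carlo would need on the order of 1/P samples (here roughly 10^8) to see the event even once; the tilted estimator concentrates samples near the threshold instead.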

Caamaño-Carrillo, C., Bevilacqua, M., López, C., & Morales-Oñate, V. (2024). Nearest neighbors weighted composite likelihood based on pairs for (non-)Gaussian massive spatial data with an application to Tukey-hh random fields estimation. Comput. Stat. Data Anal., 191, 107887.
Abstract: A highly scalable method for (non-)Gaussian random field estimation is proposed. In particular, a novel (a)symmetric weight function based on nearest neighbors for the method of maximum weighted composite likelihood based on pairs (WCLP) is studied. The new weight function allows estimating massive (up to millions of observations) spatial datasets and improves the statistical efficiency of the WCLP method with symmetric weights based on distances, as shown in the numerical examples. As an application of the proposed method, the estimation of a novel non-Gaussian random field, named the Tukey-hh random field, that has flexible marginal distributions (possibly skewed and/or heavy-tailed) is considered. In an extensive simulation study, the statistical efficiency of the proposed nearest-neighbors WCLP method with respect to the WCLP method using weights based on distances is explored when estimating the parameters of the Tukey-hh random field. In the Gaussian case, the proposed method is compared with the Vecchia approximation from computational and statistical viewpoints. Finally, the effectiveness of the proposed methodology is illustrated by estimating a large dataset of mean temperatures in South America. The proposed methodology has been implemented in an open-source package for the R statistical environment.
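To make the WCLP idea concrete, here is a minimal sketch of a pairwise Gaussian composite likelihood in which the weight function simply selects each point's m nearest neighbors (a 0/1 weight). The exponential correlation model and all parameter values are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def nn_pairwise_nll(z, coords, phi, m=5):
    """Negative log pairwise (composite) likelihood of a zero-mean,
    unit-variance Gaussian field with exponential correlation
    rho = exp(-d/phi), restricted to each point's m nearest neighbors."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    nll = 0.0
    for i in range(len(z)):
        nbrs = np.argsort(d[i])[1:m + 1]       # skip self (distance 0)
        rho = np.exp(-d[i, nbrs] / phi)
        # bivariate normal quadratic form for each retained pair
        q = (z[i] ** 2 - 2 * rho * z[i] * z[nbrs] + z[nbrs] ** 2) / (1 - rho ** 2)
        nll += np.sum(np.log(2 * np.pi) + 0.5 * np.log(1 - rho ** 2) + 0.5 * q)
    return nll
```

Restricting pairs to nearest neighbors keeps the cost linear in the number of retained pairs, which is what makes the approach viable for massive datasets.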

Carrasco, R. A., Iyengar, G., & Stein, C. (2018). Resource cost aware scheduling. Eur. J. Oper. Res., 269(2), 621–632.
Abstract: We are interested in the scheduling problem where several different resources determine the speed at which a job runs and we pay depending on the amount of each resource used. This work extends the resource-dependent job processing time problem and energy-aware scheduling problems. We develop a new constant-factor approximation algorithm for resource cost aware scheduling problems: the objective is to minimize the sum of the total cost of resources and the total weighted completion time in the single-machine nonpreemptive setting, allowing for arbitrary precedence constraints and release dates. Our algorithm handles general job-dependent resource cost functions. We also analyze the practical performance of our algorithm, showing that it is significantly superior to the theoretical bounds and in fact very close to optimal. The analysis is done using simulations and real instances, which are made publicly available for future benchmarks. We also present additional heuristic improvements and study their performance in other settings.
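The weighted completion time objective underlying this problem has a classical exact solution in the unconstrained single-machine case: Smith's rule (WSPT), sketched below. This is only the baseline building block, not the paper's resource-cost-aware approximation algorithm:

```python
from itertools import permutations

def wspt_schedule(jobs):
    """Order jobs (p, w) by Smith's rule: descending weight-to-processing-time
    ratio w/p. This minimizes total weighted completion time on one machine
    with no release dates or precedence constraints."""
    order = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
    t, total = 0, 0
    for p, w in order:
        t += p            # completion time of this job
        total += w * t    # accumulate weighted completion time
    return order, total
```

On small instances the rule can be verified against brute-force enumeration of all job orders.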

Escapil-Inchauspe, P., & Jerez-Hanckes, C. (2021). Bi-parametric operator preconditioning. Comput. Math. Appl., 102, 220–232.
Abstract: We extend the operator preconditioning framework of Hiptmair (2006) [10] to Petrov-Galerkin methods while accounting for parameter-dependent perturbations of both variational forms and their preconditioners, as occurs when performing numerical approximations. By considering different perturbation parameters for the original form and its preconditioner, our bi-parametric abstract setting leads to robust and controlled schemes. For Hilbert spaces, we derive exhaustive linear and superlinear convergence estimates for iterative solvers, such as h-independent convergence bounds when preconditioning with low-accuracy or, equivalently, highly compressed approximations.

Espinoza, D., & Moreno, E. (2014). A primal-dual aggregation algorithm for minimizing conditional value-at-risk in linear programs. Comput. Optim. Appl., 59(3), 617–638.
Abstract: Recent years have seen growing interest in coherent risk measures, especially in Conditional Value-at-Risk (CVaR). Since CVaR is a convex function, it is suitable as an objective for optimization problems when we desire to minimize risk. If the underlying distribution has discrete support, this problem can be formulated as a linear programming (LP) problem. Over more general distributions, recent techniques, such as the sample average approximation method, allow one to approximate the solution by solving a series of sampled problems, although this approach may require a large number of samples when the risk measure concentrates on the tail of the underlying distribution. In this paper we propose an automatic primal-dual aggregation scheme to exactly solve these specially structured LPs with a very large number of scenarios. The algorithm aggregates scenarios and constraints in order to solve a smaller problem, which is automatically disaggregated using the information of its dual variables. We compare this algorithm with other common approaches found in the related literature, such as an improved formulation of the full problem, cut-generation schemes, and other problem-specific approaches available in commercial software. Extensive computational experiments are performed on portfolio and general LP instances.
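The LP structure being aggregated is the classical Rockafellar-Uryasev formulation of CVaR minimization over discrete scenarios, sketched below for a small long-only portfolio (the data are synthetic and this plain LP, with one constraint per scenario, is exactly the full-size formulation the paper's aggregation scheme is designed to shrink):

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, alpha=0.95):
    """Rockafellar-Uryasev LP: minimize CVaR_alpha of the portfolio loss
    -returns @ x over long-only weights x summing to one. Decision variables
    are (x, t, u): t is the VaR auxiliary level, u_i the excess loss of
    scenario i above t."""
    n_scen, n_assets = returns.shape
    k = 1.0 / ((1 - alpha) * n_scen)
    c = np.concatenate([np.zeros(n_assets), [1.0], np.full(n_scen, k)])
    # u_i >= -r_i @ x - t   <=>   -r_i @ x - t - u_i <= 0
    A_ub = np.hstack([-returns, -np.ones((n_scen, 1)), -np.eye(n_scen)])
    b_ub = np.zeros(n_scen)
    A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)])[None, :]
    bounds = [(0, 1)] * n_assets + [(None, None)] + [(0, None)] * n_scen
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n_assets], res.fun
```

With N scenarios the LP has N + 1 auxiliary variables and N scenario constraints, which is precisely why aggregation pays off when N is very large. When (1 - alpha)N is an integer, the optimal objective equals the average of the (1 - alpha)N worst scenario losses of the optimal portfolio.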

Fuenzalida, C., Jerez-Hanckes, C., & McClarren, R. G. (2019). Uncertainty Quantification for Multigroup Diffusion Equations Using Sparse Tensor Approximations. SIAM J. Sci. Comput., 41(3), B545–B575.
Abstract: We develop a novel method to compute first- and second-order statistical moments of the neutron kinetic density inside a nuclear system by solving the energy-dependent neutron diffusion equation. Randomness comes from the lack of precise knowledge of external sources as well as of the interaction parameters, known as cross sections. Thus, the density is itself a random variable. As Monte Carlo simulations entail intense computational work, we are interested in deterministic approaches to quantify uncertainties. By assuming the first and second statistical moments of the excitation terms as given, a sparse tensor finite element approximation of the first two statistical moments of the dependent variables for each energy group can be efficiently computed in one run. Numerical experiments validate the derived convergence rates and point to further research avenues.

Munoz, F. D., & Mills, A. D. (2015). Endogenous Assessment of the Capacity Value of Solar PV in Generation Investment Planning Studies. IEEE Trans. Sustain. Energy, 6(4), 1574–1585.
Abstract: There exist several different reliability- and approximation-based methods to determine the contribution of solar resources toward resource adequacy. However, most of these approaches require knowing in advance the installed capacities of both conventional and solar generators. This is a complication, since generator capacities are actually decision variables in capacity planning studies. In this paper, we study the effect of time resolution and solar PV penetration using a planning model that accounts for the full distribution of generator outages and solar resource variability. We also describe a modification of a standard deterministic planning model that enforces a resource adequacy target through a reserve margin constraint. Our numerical experiments show that at least 50 days' worth of data are necessary to approximate the results of the full-resolution model with a maximum error of 2.5% on costs and capacity. We also show that the amount of displaced conventional generation capacity decreases rapidly as the penetration of solar PV increases. We find that using an exogenously defined, constant capacity value based on time-series data can yield relatively accurate results for small penetration levels. For higher penetration levels, the modified deterministic planning model better captures avoided costs and the decreasing value of solar PV.
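One common exogenous, time-series-based approximation of the kind the abstract contrasts with the endogenous treatment is to take the average PV output, as a fraction of installed capacity, over the highest net-load hours. The sketch below is a simplified stand-in with synthetic hourly profiles, not the paper's planning model:

```python
import numpy as np

def top_hours_capacity_value(load, pv_output, pv_capacity, k=100):
    """Approximate the capacity value of PV as its mean output (fraction of
    installed capacity) during the k hours of highest net load (load minus
    PV output). A simple exogenous time-series approximation."""
    net_load = load - pv_output
    top = np.argsort(net_load)[-k:]    # indices of the k highest net-load hours
    return float(pv_output[top].mean() / pv_capacity)
```

Because high-PV hours lower the net load, this estimate itself falls as penetration grows, which is one reason a fixed exogenous capacity value degrades at high penetration levels.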

Wanke, P., Ewbank, H., Leiva, V., & Rojas, F. (2016). Inventory management for new products with triangularly distributed demand and lead-time. Comput. Oper. Res., 69, 97–108.
Abstract: This paper proposes a computational methodology to deal with the inventory management of new products by using the triangular distribution for both demand per unit time and lead-time. The distribution of demand during lead-time (or lead-time demand) corresponds to the sum of demands per unit time, which is difficult to obtain. We consider the triangular distribution because it is useful when a distribution is unknown due to data unavailability or difficulties in collecting them. We provide an approach to estimate the probability density function of the unknown lead-time demand distribution and use it to establish the suitable inventory model for new products by optimizing the associated costs. We evaluate the performance of the proposed methodology with simulated and real-world demand data. This methodology may be a decision support tool for managers dealing with the measurement of demand uncertainty in new products.
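The lead-time demand in this setting is a random sum: demand per period is triangular and the number of periods (the lead-time) is triangular as well. A simulation-based stand-in for the paper's density-approximation approach is to sample that random sum directly and read off a service-level quantile as the reorder point; all parameter values below are hypothetical:

```python
import numpy as np

def leadtime_demand_quantile(d_min, d_mode, d_max, l_min, l_mode, l_max,
                             service=0.95, n_sims=20_000, seed=0):
    """Simulate lead-time demand when per-period demand ~ Triangular(d_min,
    d_mode, d_max) and lead-time ~ Triangular(l_min, l_mode, l_max) rounded
    to whole periods; return the service-level quantile of total demand."""
    rng = np.random.default_rng(seed)
    lead = np.maximum(np.rint(rng.triangular(l_min, l_mode, l_max, n_sims)),
                      1).astype(int)
    # total demand over each simulated lead-time
    total = np.array([rng.triangular(d_min, d_mode, d_max, L).sum()
                      for L in lead])
    return float(np.quantile(total, service))
```

The resulting quantile can serve as a reorder point for a chosen service level; higher service levels yield strictly larger reorder points.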
