
Alvarez-Miranda, E., & Pereira, J. (2017). Designing and constructing networks under uncertainty in the construction stage: Definition and exact algorithmic approach. Comput. Oper. Res., 81, 178–191.
Abstract: The present work proposes a novel Network Optimization problem whose core is to combine both network design and network construction scheduling under uncertainty into a single two-stage robust optimization model. The first-stage decisions correspond to those of a classical network design problem, while the second-stage decisions correspond to those of a network construction scheduling problem (NCS) under uncertainty. The resulting problem, which we will refer to as the Two-Stage Robust Network Design and Construction Problem (2SRNDC), aims at providing a modeling framework in which the design decision depends not only on the design costs (e.g., distances) but also on the corresponding construction plan (e.g., time to provide service to customers). We provide motivations, mixed-integer programming formulations, and an exact algorithm for the 2SRNDC. Experimental results on a large set of instances show the effectiveness of the model in providing robust solutions, and the capability of the proposed algorithm to provide good solutions in reasonable running times. (C) 2017 Elsevier Ltd. All rights reserved.



Alvarez-Miranda, E., Campos-Valdes, C., Quiroga, M. M., Moreno-Faguett, M., & Pereira, J. (2020). A Multi-Criteria Pen for Drawing Fair Districts: When Democratic and Demographic Fairness Matter. Mathematics, 8(9), 27 pp.
Abstract: Electoral systems are modified by individuals who have incentives to bias the rules for their political advantage (i.e., gerrymandering). To prevent gerrymandering, legislative institutions can rely on mathematical tools to guarantee democratic fairness and territorial contiguity. These tools have been successfully used in the past; however, there is a need to accommodate additional meanings of the term fairness within the electoral systems of modern democracies. In this paper, we present an optimization framework that considers multiple criteria for drawing districts and assigning the number of representatives. Besides some typical districting criteria (malapportionment and contiguity), we introduce novel criteria for ensuring territorial equilibrium and incentives for candidates to deploy their representation efforts fairly during their campaign and period in office. We test the method, which we denote as Multi-criteria Pen, in a recent and a forthcoming reform of the Chilean electoral system. The results show the potential of our tool to improve the current territorial design and offer insights on the motivations, objectives, and deficiencies of both reform plans.



Averbakh, I., & Pereira, J. (2018). Lateness Minimization in Pairwise Connectivity Restoration Problems. INFORMS J. Comput., 30(3), 522–538.
Abstract: A network is given whose edges need to be constructed (or restored after a disaster). The lengths of edges represent the required construction/restoration times given available resources, and one unit of length of the network can be constructed per unit of time. All points of the network are accessible for construction at any time. For each pair of vertices, a due date is given. It is required to find a construction schedule that minimizes the maximum lateness of all pairs of vertices, where the lateness of a pair is the difference between the time when the pair becomes connected by an already constructed path and the pair's due date. We introduce the problem and analyze its structural properties, present a mixed-integer linear programming formulation, develop a number of lower bounds that are integrated in a branch-and-bound algorithm, and discuss results of computational experiments both for instances based on randomly generated networks and for instances based on 2010 Chilean earthquake data.
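The lateness objective described above is simple to state concretely. A minimal sketch (illustrative names, not code from the paper): given, for each vertex pair, the time at which a schedule first connects it and that pair's due date, the objective value is the maximum of connection time minus due date.

```python
def max_pair_lateness(connect_time, due_date):
    """Maximum lateness over vertex pairs: a pair's lateness is the
    time at which the pair first becomes connected minus its due date
    (negative values mean the pair was connected early)."""
    return max(connect_time[p] - due_date[p] for p in due_date)

# Pair "ab" is connected 3 units late, pair "ac" 1 unit early.
print(max_pair_lateness({"ab": 8, "ac": 4}, {"ab": 5, "ac": 5}))  # -> 3
```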



Barrera, J., Moreno, E., & Munoz, G. (2022). Convex envelopes for ray-concave functions. Optim. Lett., to appear.
Abstract: Convexification based on convex envelopes is ubiquitous in the nonlinear optimization literature. Thanks to considerable efforts of the optimization community for decades, we are able to compute the convex envelopes of a considerable number of functions that appear in practice, and thus obtain tight and tractable approximations to challenging problems. We contribute to this line of work by considering a family of functions that, to the best of our knowledge, has not been considered before in the literature. We call this family ray-concave functions. We show sufficient conditions that allow us to easily compute closed-form expressions for the convex envelope of ray-concave functions over arbitrary polytopes. With these tools, we are able to provide new perspectives to previously known convex envelopes and derive a previously unknown convex envelope for a function that arises in probability contexts.



Barrera, J., Moreno, E., Munoz, G., & Romero, P. (2022). Exact reliability optimization for series-parallel graphs using convex envelopes. Networks, to appear.
Abstract: Given its wide spectrum of applications, the classical problem of all-terminal network reliability evaluation remains a highly relevant problem in network design. The associated optimization problem (finding a network with the best possible reliability under multiple constraints) presents an even more complex challenge, which has been addressed in the scientific literature but usually under strong assumptions over failure probabilities and/or the network topology. In this work, we propose a novel reliability optimization framework for network design with failure probabilities that are independent but not necessarily identical. We leverage the linear-time evaluation procedure for network reliability in the series-parallel graphs of Satyanarayana and Wood (1985) to formulate the reliability optimization problem as a mixed-integer nonlinear optimization problem. To solve this nonconvex problem, we use classical convex envelopes of bilinear functions, introduce custom cutting planes, and propose a new family of convex envelopes for expressions that appear in the evaluation of network reliability. Furthermore, we exploit the refinements produced by spatial branch-and-bound to locally strengthen our convex relaxations. Our experiments show that, using our framework, one can efficiently obtain optimal solutions in challenging instances of this problem.
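The "classical convex envelopes of bilinear functions" mentioned above are the McCormick inequalities. A minimal sketch of how they bound a product w = x*y over a box (a generic illustration, not the paper's custom reliability envelopes):

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Classical McCormick envelope of the bilinear term w = x*y over
    the box [xl, xu] x [yl, yu]: returns the (lower, upper) bounds on w
    implied by the four linear McCormick inequalities at the point (x, y)."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# At any feasible point, the true product lies between the two bounds.
lo, up = mccormick_bounds(0.3, 0.7, 0.0, 1.0, 0.0, 1.0)
print(lo, 0.3 * 0.7, up)
```

In a relaxation, these four inequalities replace the nonconvex equation w = x*y with a convex polyhedral set; spatial branch-and-bound then tightens them by shrinking the box.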



Beck, A. T., Ribeiro, L. D., Valdebenito, M., & Jensen, H. (2022). Risk-Based Design of Regular Plane Frames Subject to Damage by Abnormal Events: A Conceptual Study. 148(1), 04021229.
Abstract: Constructed facilities should be robust with respect to the loss of load-bearing elements due to abnormal events. Yet, strengthening structures to withstand such damage has a significant impact on construction costs. Strengthening costs should be justified by the threat and should result in smaller expected costs of progressive collapse. In regular frame structures, beams and columns compete for the strengthening budget. In this paper, we present a risk-based formulation to address the optimal design of regular plane frames under element loss conditions. We address the threat probabilities for which strengthening has a better cost-benefit ratio than usual design, for different frame configurations, and study the impacts of strengthening extent and cost. The risk-based optimization reveals optimum points of compromise between competing failure modes (local bending of beams, local crushing of columns, and global pancake collapse) for frames of different aspect ratios. The conceptual study is based on a simple analytical model for progressive collapse, but it provides relevant insight for the design and strengthening of real structures.



Canessa, G., Moreno, E., & Pagnoncelli, B. K. (2021). The risk-averse ultimate pit problem. Optim. Eng., 22, 2655–2678.
Abstract: In this work, we consider a risk-averse ultimate pit problem where the grade of the mineral is uncertain. We derive conditions under which we can generate a set of nested pits by varying the risk level instead of using revenue factors. We propose two properties that we believe are desirable for the problem: risk nestedness, which means the pits generated for different risk aversion levels should be contained in one another, and additive consistency, which states that preferences in terms of order of extraction should not change if independent sectors of the mine are added as precedences. We show that only an entropic risk measure satisfies these properties and propose a two-stage stochastic programming formulation of the problem, including an efficient approximation scheme to solve it. We illustrate our approach in a small self-constructed example, and apply our approximation scheme to a real-world section of the Andina mine, in Chile.
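The entropic risk measure singled out in this abstract has a simple closed form for a discrete loss distribution: rho_theta(X) = (1/theta) log E[exp(theta X)], where larger theta means more risk aversion and theta -> 0 recovers the expectation. A small self-contained illustration (not the paper's mine-planning code):

```python
import math

def entropic_risk(losses, probs, theta):
    """Entropic risk of a discrete loss distribution:
    rho_theta(X) = (1/theta) * log E[exp(theta * X)].
    As theta -> 0 this converges to the plain expectation E[X]."""
    mgf = sum(p * math.exp(theta * x) for x, p in zip(losses, probs))
    return math.log(mgf) / theta

# A fair coin over losses {0, 10}: the expectation is 5, but the
# entropic risk exceeds 5 and grows with the risk-aversion level theta.
print(entropic_risk([0.0, 10.0], [0.5, 0.5], 0.1))
```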



Cho, A. D., Carrasco, R. A., & Ruz, G. A. (2022). Improving Prescriptive Maintenance by Incorporating Post-Prognostic Information Through Chance Constraints. IEEE Access, 10, 55924–55932.
Abstract: Maintenance is one of the critical areas in operations in which a careful balance between preventive costs and the effect of failures is required. Thanks to increasing data availability, decision-makers can now use models to better estimate, evaluate, and achieve this balance. This work presents a maintenance scheduling model that considers prognostic information provided by a predictive system. In particular, we developed a prescriptive maintenance system based on run-to-failure signal segmentation and a Long Short-Term Memory (LSTM) neural network. The LSTM network returns the prediction of the remaining useful life when a fault is present in a component. We incorporate such predictions and their inherent errors in a decision support system based on a stochastic optimization model, incorporating them via chance constraints. These constraints control the number of failed components and consider the physical distance between them to reduce sparsity and minimize the total maintenance cost. Through experimental results, we show that this approach can compute solutions for relatively large instances in reasonable computational time. Furthermore, the decision-maker can identify the correct operating point depending on the balance between costs and failure probability.
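A common way to handle a chance constraint of the form P(failure before maintenance) <= epsilon is a scenario approximation over sampled remaining-useful-life predictions. The sketch below is a generic illustration of that idea, not the paper's optimization model:

```python
def latest_maintenance_time(rul_samples, epsilon):
    """Scenario approximation of the chance constraint P(RUL < t) <= epsilon:
    the latest admissible maintenance time t is the epsilon-quantile of the
    sampled remaining-useful-life predictions (epsilon in [0, 1))."""
    ordered = sorted(rul_samples)
    k = int(epsilon * len(ordered))  # number of scenarios allowed to fail
    return ordered[k]

# With 10 RUL scenarios and epsilon = 0.2, at most 2 scenarios may fail
# before the chosen maintenance time.
samples = [10, 12, 15, 8, 20, 11, 9, 14, 13, 16]
print(latest_maintenance_time(samples, 0.2))  # -> 10
```

Maintaining at the returned time keeps the empirical failure probability within epsilon; pushing maintenance later would violate the constraint in more than an epsilon fraction of scenarios.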



Contreras, M., Pellicer, R., & Villena, M. (2017). Dynamic optimization and its relation to classical and quantum constrained systems. Physica A, 479, 12–25.
Abstract: We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop lambda-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Psi(x, t) = e^(iS(x,t)) in the quantum Schrodinger equation, a nonlinear partial differential equation is obtained for the S function. For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem. (C) 2017 Elsevier B.V. All rights reserved.



Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Interval uncertainty propagation by a parallel Bayesian global optimization method. Appl. Math. Model., 108, 220–235.
Abstract: This paper is concerned with approximating the scalar response of a complex computational model subjected to multiple input interval variables. This task is formulated as finding both the global minimum and maximum of a computationally expensive black-box function over a prescribed hyper-rectangle. On this basis, a novel non-intrusive method, called 'triple-engine parallel Bayesian global optimization', is proposed. The method begins by assuming a Gaussian process prior (which can also be interpreted as a surrogate model) over the response function. The main contribution lies in developing a novel infill sampling criterion, i.e., a triple-engine pseudo expected improvement strategy, to identify multiple promising points for minimization and/or maximization based on the past observations at each iteration. By doing so, these identified points can be evaluated on the real response function in parallel. Another potential benefit is that both the lower and upper bounds of the model response can be obtained with a single run of the developed method. Four numerical examples with varying complexity are investigated to demonstrate the proposed method against some existing techniques, and results indicate that significant computational savings can be achieved by making full use of prior knowledge and parallel computing.
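For context, the standard expected-improvement criterion that the paper's "pseudo expected improvement" strategy builds on has a closed form under a Gaussian posterior. A minimal sketch for minimization (the generic formula, not the authors' triple-engine variant):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard expected improvement for minimization at a candidate point
    whose Gaussian-process posterior is N(mu, sigma^2), given incumbent
    f_best: EI = (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu)/sigma."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    return (f_best - mu) * cdf + sigma * pdf

# A candidate predicted well below the incumbent has large EI; one
# predicted far above it has nearly zero EI.
print(expected_improvement(mu=0.5, sigma=0.2, f_best=1.0))
print(expected_improvement(mu=2.0, sigma=0.2, f_best=1.0))
```

Maximizing EI over the search box picks the next evaluation point; parallel variants such as pseudo-EI penalize points near those already selected so that several candidates can be evaluated at once.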



Guevara, E., Babonneau, F., Homem-de-Mello, T., & Moret, S. (2020). A machine learning and distributionally robust optimization framework for strategic energy planning under uncertainty. Appl. Energy, 271, 18 pp.
Abstract: This paper investigates how the choice of stochastic approaches and distribution assumptions impacts strategic investment decisions in energy planning problems. We formulate a two-stage stochastic programming model assuming different distributions for the input parameters and show that there is significant discrepancy among the associated stochastic solutions and other robust solutions published in the literature. To remedy this sensitivity issue, we propose a combined machine learning and distributionally robust optimization (DRO) approach which produces more robust and stable strategic investment decisions with respect to uncertainty assumptions. DRO is applied to deal with ambiguous probability distributions, and machine learning is used to restrict the DRO model to a subset of important uncertain parameters, ensuring computational tractability. Finally, we perform an out-of-sample simulation process to evaluate solution performance. The Swiss energy system is used as a case study throughout the paper to validate the approach.



Henriquez, P. A., & Ruz, G. A. (2019). Noise reduction for nearinfrared spectroscopy data using extreme learning machines. Eng. Appl. Artif. Intell., 79, 13–22.
Abstract: The near-infrared (NIR) spectra technique is an effective approach to predict chemical properties and is typically applied in the petrochemical, agricultural, medical, and environmental sectors. NIR spectra are usually of very high dimensions and contain huge amounts of information. Most of the information is irrelevant to the target problem and some is simply noise. Thus, it is not an easy task to discover the relationship between NIR spectra and the predictive variable. However, this kind of regression analysis is one of the main topics of machine learning, so machine learning techniques play a key role in NIR-based analytical approaches. Preprocessing of NIR spectral data has become an integral part of chemometrics modeling; the objective of the preprocessing is to remove physical phenomena (noise) in the spectra in order to improve the regression or classification model. In this work, we propose to reduce the noise using extreme learning machines, which have shown good predictive performance in regression applications as well as in large dataset classification tasks. For this, we use a novel algorithm called CPLELM, which has a parallel architecture based on one nonlinear layer in parallel with another nonlinear layer. Using the soft margin loss function concept, we incorporate two Lagrange multipliers with the objective of including the noise of spectral data. Six real-life datasets were analyzed to illustrate the performance of the developed models. The results for regression and classification problems confirm the advantages of using the proposed method in terms of root mean square error and accuracy.
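The core ELM idea, random fixed hidden weights plus a closed-form least-squares solve for the output layer, fits in a few lines. This is a generic single-hidden-layer ELM regressor for illustration, not the CPLELM architecture described above (assumes NumPy; data and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Basic extreme learning machine regressor: the hidden layer (W, b)
    is drawn at random and never trained; only the linear output weights
    beta are fitted, via a pseudo-inverse least-squares solve."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y    # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy 1-D sine curve; the training error should be small.
X = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0]) + 0.01 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print(rmse)
```

Because only beta is trained, fitting is a single linear-algebra solve rather than iterative backpropagation, which is what makes ELMs attractive for high-dimensional spectral data.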



Homem-de-Mello, T., Kong, Q. X., & Godoy-Barba, R. (2022). A Simulation Optimization Approach for the Appointment Scheduling Problem with Decision-Dependent Uncertainties. INFORMS J. Comput., Early Access.
Abstract: The appointment scheduling problem (ASP) studies how to manage patient arrivals to a healthcare system to improve system performance. An important challenge occurs when some patients may not show up for an appointment. Although the ASP is well studied in the literature, the vast majority of the existing work does not consider the well-observed phenomenon that patient no-show is influenced by the appointment time, the usual decision variable in the ASP. This paper studies the ASP with random service time (exogenous uncertainty) with known distribution and patient decision-dependent no-show behavior (endogenous uncertainty). This problem belongs to the class of stochastic optimization with decision-dependent uncertainties. Such problems are notoriously difficult as they are typically nonconvex. We propose a stochastic projected gradient path (SPGP) method to solve the problem, which requires the development of a gradient estimator of the objective function, a nontrivial task, as the literature on gradient-based optimization algorithms for problems with decision-dependent uncertainty is very scarce and unsuitable for our model. Our method can solve the ASP under arbitrarily smooth show-up probability functions. We present solutions under different patterns of no-show behavior and demonstrate that breaking the assumption of constant show-up probability substantially changes the scheduling solutions. We conduct numerical experiments in a variety of settings to compare our results with those obtained with a distributionally robust optimization method developed in the literature. The cost reduction obtained with our method, which we call the value of distribution information, can be interpreted as how much the system performance can be improved by knowing the distribution of the service times, compared to not knowing it. We observe that the value of distribution information is up to 31% of the baseline cost, and that this value is higher when the system is crowded and/or the waiting time cost is relatively high.
Summary of Contribution: This paper studies an appointment scheduling problem under time-dependent patient no-show behavior, a situation commonly observed in practice. The problem belongs to the class of stochastic optimization problems with decision-dependent uncertainties in the operations research literature. Such problems are notoriously difficult to solve as a result of the lack of convexity, and the present case requires different techniques because of the presence of continuous distributions for the service times. A stochastic projected gradient path method, which includes the development of specialized techniques to estimate the gradient of the objective function, is proposed to solve the problem. For problems with a similar structure, the algorithm can be applied once the gradient estimator of the objective function is obtained. Extensive numerical studies are presented to demonstrate the quality of the solutions, the importance of modeling time-dependent no-shows in appointment scheduling, and the value of using distribution information about the service times.



Inzunza, A., Munoz, F. D., & Moreno, R. (2021). Measuring the effects of environmental policies on electricity markets risk. Energy Econ., 102, 105470.
Abstract: This paper studies how environmental policies, such as renewable portfolio standards (RPS) and carbon taxes, might contribute to reducing risk exposure in the electricity generation sector. We illustrate this effect by first computing long-term market equilibria of the Chilean generation sector for the year 2035 using a risk-averse planning model, considering uncertainty of hydrological scenarios and fossil fuel prices as well as distinct levels of risk aversion, but assuming no environmental policies in place. We then compare these risk-averse equilibria to generation portfolios obtained by imposing several levels of RPS and carbon taxes in a market with risk-neutral firms, separately. Our results show that the implementation of both policies can provide incentives for investments in portfolios of generation technologies that limit the risk exposure of the system, particularly when high levels of RPS (35%) or high carbon taxes (35 $/ton CO2) are applied. However, we find that in the case of a hydrothermal system, the resulting market equilibria under RPS policies yield expected generation cost and risk levels (i.e., standard deviation of costs) that are more similar to the efficient portfolios determined using a risk-averse planning model than the ones we find under the carbon tax.



Kalahasthi, L., Holguin-Veras, J., & Yushimito, W. F. (2022). A freight origin-destination synthesis model with mode choice. Transp. Res. E-Logist. Transp. Rev., 157, 102595.
Abstract: This paper develops a novel procedure to conduct a Freight Origin-Destination Synthesis (FODS) that jointly estimates the trip distribution, mode choice, and the empty trips by truck and rail that provide the best match to the observed freight traffic counts. Four models are integrated: (1) a gravity model for trip distribution, (2) a binary logit model for mode choice, (3) a Noortman and Van Es' model for truck empty trips, and (4) a Noortman and Van Es' model for rail empty trips. The estimation process entails an iterative minimization of a nonconvex objective function, the summation of squared errors of the estimated truck and rail traffic counts, with respect to the five model parameters. Of the two methods tested to address the nonconvexity, an interior point method with a set of random starting points (MultiStart algorithm) outperformed the Ordinary Least Squares (OLS) inference technique. The potential of this methodology is examined using a hypothetical example of developing a nationwide freight demand model for Bangladesh. This research improves the existing FODS techniques that use readily available secondary data such as traffic counts and link costs, allowing transportation planners to evaluate policy outcomes without needing expensive freight data collection. This paper presents the results, model validation, limitations, and future scope for improvements.
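The first two integrated models, gravity-based trip distribution followed by a binary logit truck/rail split, can be illustrated compactly. All function names, parameters, and values below are hypothetical illustrations, not the paper's calibrated Bangladesh model:

```python
import math

def gravity_logit_flows(origins, destinations, cost_truck, cost_rail,
                        beta_dist=0.1, beta_mode=0.5):
    """Toy freight OD synthesis step: a gravity model distributes each
    origin's production O_i across destinations in proportion to
    D_j * exp(-beta_dist * c_ij), then a binary logit splits each OD flow
    between truck and rail according to relative mode costs."""
    flows = {}
    for i, O in origins.items():
        # deterrence uses the cheaper mode's cost as the OD impedance
        raw = {j: D * math.exp(-beta_dist * min(cost_truck[i, j], cost_rail[i, j]))
               for j, D in destinations.items()}
        norm = O / sum(raw.values())
        for j, v in raw.items():
            t = v * norm
            u_truck = -beta_mode * cost_truck[i, j]
            u_rail = -beta_mode * cost_rail[i, j]
            p_truck = math.exp(u_truck) / (math.exp(u_truck) + math.exp(u_rail))
            flows[i, j] = (t * p_truck, t * (1.0 - p_truck))
    return flows

# One origin, two destinations; truck is cheaper to X, rail cheaper to Y.
flows = gravity_logit_flows(
    {"A": 100.0}, {"X": 1.0, "Y": 1.0},
    cost_truck={("A", "X"): 2.0, ("A", "Y"): 5.0},
    cost_rail={("A", "X"): 4.0, ("A", "Y"): 3.0})
print(flows)
```

In a full FODS procedure, parameters such as beta_dist and beta_mode (together with the empty-trip parameters) would be tuned by minimizing the squared error between the flows implied by this model and the observed link traffic counts.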



Lagos, T., Armstrong, M., Homem-de-Mello, T., Lagos, G., & Saure, D. (2021). A framework for adaptive open-pit mining planning under geological uncertainty. Optim. Eng., 72, 102086.
Abstract: Mine planning optimization aims at maximizing the profit obtained from extracting valuable ore. Beyond its theoretical complexity (the open-pit mining problem with capacity constraints reduces to a knapsack problem with precedence constraints, which is NP-hard), practical instances of the problem usually involve a large to very large number of decision variables, typically of the order of millions for large mines. Additionally, any comprehensive approach to mine planning ought to consider the underlying geostatistical uncertainty, as only limited information obtained from drill hole samples of the mineral is initially available. In this regard, as blocks are extracted sequentially, information about the ore grades of blocks yet to be extracted changes based on the blocks that have already been mined. Thus, the problem lies in the class of multi-period large-scale stochastic optimization problems with decision-dependent information uncertainty. Such problems are exceedingly hard to solve, so approximations are required. This paper presents an adaptive optimization scheme for multi-period production scheduling in open-pit mining under geological uncertainty that allows us to solve practical instances of the problem. Our approach is based on a rolling-horizon adaptive optimization framework that learns from new information that becomes available as blocks are mined. By considering the evolution of geostatistical uncertainty, the proposed optimization framework produces an operational policy that reduces the risk of the production schedule. Our numerical tests with mines of moderate sizes show that our rolling-horizon adaptive policy gives consistently better results than a non-adaptive stochastic optimization formulation, for a range of realistic problem instances.



Ljubic, I., & Moreno, E. (2018). Outer approximation and submodular cuts for maximum capture facility location problems with random utilities. Eur. J. Oper. Res., 266(1), 46–56.
Abstract: We consider a family of competitive facility location problems in which a "newcomer" company enters the market and has to decide where to locate a set of new facilities so as to maximize its market share. The multinomial logit model is used to estimate the captured customer demand. We propose a first branch-and-cut approach for this family of difficult mixed-integer nonlinear problems. Our approach combines two types of cutting planes that exploit particular properties of the objective function: outer-approximation cuts and submodular cuts. The approach is computationally evaluated on three datasets from the recent literature. The obtained results show that our new branch-and-cut drastically outperforms state-of-the-art exact approaches, both in terms of computing times and in terms of the number of instances solved to optimality. (C) 2017 Elsevier B.V. All rights reserved.



Matus, O., Barrera, J., Moreno, E., & Rubino, G. (2019). On the Marshall-Olkin Copula Model for Network Reliability Under Dependent Failures. IEEE Trans. Reliab., 68(2), 451–461.
Abstract: The Marshall-Olkin (MO) copula model has emerged as the standard tool for capturing dependence between components in failure analysis in reliability. In this model, shocks arise at exponentially distributed random times and affect one or several components, inducing a natural correlation in the failure process. However, because the number of parameters of the model grows exponentially with the number of components, MO suffers from the "curse of dimensionality." MO models are usually intended to be applied to design a network before its construction; therefore, it is natural to assume that only partial information about failure behavior can be gathered, mostly from similar existing networks. To construct such an MO model, we propose an optimization approach to define the shock parameters in the MO copula, in order to match marginal failure probabilities and correlations between these failures. To deal with the exponential number of parameters of this problem, we use a column-generation technique. We also discuss additional criteria that can be incorporated to obtain a suitable model. Our computational experiments show that the resulting MO model produces a close estimation of the network reliability, especially when the correlation between component failures is significant.
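Sampling from an MO model makes the shock construction concrete: each shock has an exponential clock and a subset of components it knocks out, and a component fails at the earliest shock that hits it. A minimal simulation sketch (illustrative rates and subsets, not the paper's calibrated parameters):

```python
import random

def mo_failure_times(shocks, n_components, rng):
    """One Marshall-Olkin sample: each shock is a (rate, subset) pair that
    occurs at an exponential time and destroys every component in its
    subset; a component's failure time is the earliest shock hitting it."""
    times = [float("inf")] * n_components
    for rate, subset in shocks:
        t = rng.expovariate(rate)
        for c in subset:
            times[c] = min(times[c], t)
    return times

# Two components with individual shocks plus a common shock hitting both;
# the common shock induces positive correlation between failure times.
rng = random.Random(42)
shocks = [(1.0, [0]), (1.0, [1]), (0.5, [0, 1])]
sample = [mo_failure_times(shocks, 2, rng) for _ in range(5000)]
joint = sum(1 for t0, t1 in sample if t0 == t1) / len(sample)
print(joint)  # fraction of samples in which the common shock struck first
```

With these rates, both components fail simultaneously exactly when the common shock fires before either individual shock, which happens with probability 0.5 / (0.5 + 1.0 + 1.0) = 0.2; the simulated fraction should be close to that.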



Mejia, G., & Pereira, J. (2020). Multiobjective scheduling algorithm for flexible manufacturing systems with Petri nets. J. Manuf. Syst., 54, 272–284.
Abstract: In this work, we focus on general multiobjective scheduling problems that can be modeled using a Petri net framework. Due to their generality, Petri nets are a useful abstraction that captures multiple characteristics of real-life processes. To provide a general solution procedure for the abstraction, we propose three alternative approaches using an indirect scheme to represent the solution: (1) a genetic algorithm that combines two objectives through a weighted fitness function, (2) a non-dominated sorting genetic algorithm (NSGA-II) that explicitly addresses the multiobjective nature of the problem, and (3) a multiobjective local search approach that simultaneously explores multiple candidate solutions. These algorithms are tested in an extensive computational experiment showing the applicability of this general framework to obtain quality solutions.



Munoz, G., Espinoza, D., Goycoolea, M., Moreno, E., Queyranne, M., & Rivera Letelier, O. (2018). A study of the BienstockZuckerberg algorithm: applications in mining and resource constrained project scheduling. Comput. Optim. Appl., 69(2), 501–534.
Abstract: We study a Lagrangian decomposition algorithm recently proposed by Dan Bienstock and Mark Zuckerberg for solving the LP relaxation of a class of open-pit mine project scheduling problems. In this study we show that the Bienstock-Zuckerberg (BZ) algorithm can be used to solve LP relaxations corresponding to a much broader class of scheduling problems, including the well-known Resource Constrained Project Scheduling Problem (RCPSP), and multimodal variants of the RCPSP that consider batch processing of jobs. We present a new, intuitive proof of correctness for the BZ algorithm that works by casting the BZ algorithm as a column generation algorithm. This analysis allows us to draw parallels with the well-known Dantzig-Wolfe decomposition (DW) algorithm. We discuss practical computational techniques for speeding up the performance of the BZ and DW algorithms on project scheduling problems. Finally, we present computational experiments independently testing the effectiveness of the BZ and DW algorithms on different sets of publicly available test instances. Our computational experiments confirm that the BZ algorithm significantly outperforms the DW algorithm for the problems considered. Our computational experiments also show that the proposed speedup techniques can have a significant impact on the solve time. We provide some insights on what might explain this significant difference in performance.

