
Alvarez-Miranda, E., & Pereira, J. (2017). Designing and constructing networks under uncertainty in the construction stage: Definition and exact algorithmic approach. Comput. Oper. Res., 81, 178–191.
Abstract: The present work proposes a novel network optimization problem whose core is to combine both network design and network construction scheduling under uncertainty into a single two-stage robust optimization model. The first-stage decisions correspond to those of a classical network design problem, while the second-stage decisions correspond to those of a network construction scheduling problem (NCS) under uncertainty. The resulting problem, which we will refer to as the Two-Stage Robust Network Design and Construction Problem (2SRNDC), aims at providing a modeling framework in which the design decision depends not only on the design costs (e.g., distances) but also on the corresponding construction plan (e.g., time to provide service to customers). We provide motivations, mixed-integer programming formulations, and an exact algorithm for the 2SRNDC. Experimental results on a large set of instances show the effectiveness of the model in providing robust solutions, and the capability of the proposed algorithm to provide good solutions in reasonable running times.



Averbakh, I., & Pereira, J. (2018). Lateness Minimization in Pairwise Connectivity Restoration Problems. INFORMS J. Comput., 30(3), 522–538.
Abstract: A network is given whose edges need to be constructed (or restored after a disaster). The lengths of edges represent the required construction/restoration times given available resources, and one unit of length of the network can be constructed per unit of time. All points of the network are accessible for construction at any time. For each pair of vertices, a due date is given. It is required to find a construction schedule that minimizes the maximum lateness of all pairs of vertices, where the lateness of a pair is the difference between the time when the pair becomes connected by an already constructed path and the pair's due date. We introduce the problem and analyze its structural properties, present a mixed-integer linear programming formulation, develop a number of lower bounds that are integrated in a branch-and-bound algorithm, and discuss results of computational experiments both for instances based on randomly generated networks and for instances based on 2010 Chilean earthquake data.
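To make the pairwise lateness objective concrete, the following sketch evaluates one candidate construction schedule on a toy network. All names are hypothetical, and the serial edge-by-edge schedule is a simplification: the paper allows construction effort to start from any point of the network, so this only scores the restricted class of serial schedules.

```python
def max_lateness(edges, order, due):
    """Maximum lateness over all vertex pairs for a serial schedule that
    builds the edges in `order` back to back, one unit of length per unit
    of time. A pair's lateness is its connection time minus its due date."""
    parent = {}

    def find(v):  # union-find root with path compression
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    connect, t = {}, 0.0
    for (u, v) in order:
        t += edges[(u, v)]                      # edge (u, v) finishes now
        ru, rv = find(u), find(v)
        if ru != rv:
            comp_u = [x for x in parent if find(x) == ru]
            comp_v = [x for x in parent if find(x) == rv]
            # every pair straddling the two merged components connects at t
            for a in comp_u:
                for b in comp_v:
                    connect[frozenset((a, b))] = t
            parent[ru] = rv
    return max(connect[p] - d for p, d in due.items())

# a 3-vertex path: edge (1,2) of length 3 built first, then (2,3) of length 2
worst = max_lateness({(1, 2): 3, (2, 3): 2}, [(1, 2), (2, 3)],
                     {frozenset((1, 2)): 1, frozenset((2, 3)): 4,
                      frozenset((1, 3)): 4})
```

Here pair (1,2) connects at time 3 against a due date of 1, so the schedule's maximum lateness is 2; the branch-and-bound in the paper searches over schedules to minimize this quantity.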



Canessa, G., Moreno, E., & Pagnoncelli, B. K. (2020). The risk-averse ultimate pit problem. Optim. Eng., to appear, 24 pp.
Abstract: In this work, we consider a risk-averse ultimate pit problem where the grade of the mineral is uncertain. We derive conditions under which we can generate a set of nested pits by varying the risk level instead of using revenue factors. We propose two properties that we believe are desirable for the problem: risk nestedness, which means the pits generated for different risk-aversion levels should be contained in one another, and additive consistency, which states that preferences in terms of order of extraction should not change if independent sectors of the mine are added as precedences. We show that only an entropic risk measure satisfies these properties and propose a two-stage stochastic programming formulation of the problem, including an efficient approximation scheme to solve it. We illustrate our approach in a small self-constructed example, and apply our approximation scheme to a real-world section of the Andina mine, in Chile.
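The entropic risk measure that the abstract singles out is easy to illustrate. Below is a minimal sketch for a discrete loss distribution; the loss data and aversion levels are invented for illustration, and the sign convention (risk of a loss, larger gamma = more averse) is an assumption.

```python
import math

def entropic_risk(losses, gamma):
    """Entropic risk of an equally weighted discrete loss distribution:
    rho_gamma(L) = (1/gamma) * log E[exp(gamma * L)].
    It interpolates between the expected loss (gamma -> 0) and the
    worst-case loss (gamma -> infinity)."""
    m = max(losses)                     # shift by the max for numerical stability
    mgf = sum(math.exp(gamma * (l - m)) for l in losses) / len(losses)
    return m + math.log(mgf) / gamma

losses = [1.0, 2.0, 10.0]
low_aversion = entropic_risk(losses, 0.01)    # close to the mean loss
high_aversion = entropic_risk(losses, 10.0)   # close to the worst loss
```

Varying gamma here plays the role that revenue factors play in classical nested-pit generation: a monotone family of risk levels yields a monotone family of risk values, which is the intuition behind the risk-nestedness property.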



Contreras, M., Pellicer, R., & Villena, M. (2017). Dynamic optimization and its relation to classical and quantum constrained systems. Physica A, 479, 12–25.
Abstract: We study the structure of a simple dynamic optimization problem, consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a remnant of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop lambda-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Psi(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a nonlinear partial differential equation is obtained for the function S. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.



Guevara, E., Babonneau, F., Homem-de-Mello, T., & Moret, S. (2020). A machine learning and distributionally robust optimization framework for strategic energy planning under uncertainty. Appl. Energy, 271, 18 pp.
Abstract: This paper investigates how the choice of stochastic approach and distribution assumptions impacts strategic investment decisions in energy planning problems. We formulate a two-stage stochastic programming model assuming different distributions for the input parameters and show that there is significant discrepancy among the associated stochastic solutions and other robust solutions published in the literature. To remedy this sensitivity issue, we propose a combined machine learning and distributionally robust optimization (DRO) approach, which produces more robust and stable strategic investment decisions with respect to uncertainty assumptions. DRO is applied to deal with ambiguous probability distributions, and machine learning is used to restrict the DRO model to a subset of important uncertain parameters, ensuring computational tractability. Finally, we perform an out-of-sample simulation process to evaluate the performance of the solutions. The Swiss energy system is used as a case study throughout the paper to validate the approach.



Henriquez, P. A., & Ruz, G. A. (2019). Noise reduction for near-infrared spectroscopy data using extreme learning machines. Eng. Appl. Artif. Intell., 79, 13–22.
Abstract: The near-infrared (NIR) spectra technique is an effective approach to predicting chemical properties and is typically applied in the petrochemical, agricultural, medical, and environmental sectors. NIR spectra are usually of very high dimension and contain huge amounts of information. Most of the information is irrelevant to the target problem, and some is simply noise. Thus, it is not an easy task to discover the relationship between NIR spectra and the predictive variable. However, this kind of regression analysis is one of the main topics of machine learning, so machine learning techniques play a key role in NIR-based analytical approaches. Preprocessing of NIR spectral data has become an integral part of chemometrics modeling. The objective of the preprocessing is to remove physical phenomena (noise) in the spectra in order to improve the regression or classification model. In this work, we propose to reduce the noise using extreme learning machines, which have shown good predictive performance in regression applications as well as in large-dataset classification tasks. For this, we use a novel algorithm called CPLELM, whose architecture is based on one nonlinear layer working in parallel with another nonlinear layer. Using the soft-margin loss function concept, we incorporate two Lagrange multipliers with the objective of accounting for the noise of the spectral data. Six real-life datasets were analyzed to illustrate the performance of the developed models. The results for regression and classification problems confirm the advantages of using the proposed method in terms of root mean square error and accuracy.



Ljubic, I., & Moreno, E. (2018). Outer approximation and submodular cuts for maximum capture facility location problems with random utilities. Eur. J. Oper. Res., 266(1), 46–56.
Abstract: We consider a family of competitive facility location problems in which a “newcomer” company enters the market and has to decide where to locate a set of new facilities so as to maximize its market share. The multinomial logit model is used to estimate the captured customer demand. We propose a first branch-and-cut approach for this family of difficult mixed-integer nonlinear problems. Our approach combines two types of cutting planes that exploit particular properties of the objective function: outer-approximation cuts and submodular cuts. The approach is computationally evaluated on three datasets from the recent literature. The obtained results show that our new branch-and-cut drastically outperforms state-of-the-art exact approaches, both in terms of computing times and in terms of the number of instances solved to optimality.



Matus, O., Barrera, J., Moreno, E., & Rubino, G. (2019). On the Marshall-Olkin Copula Model for Network Reliability Under Dependent Failures. IEEE Trans. Reliab., 68(2), 451–461.
Abstract: The Marshall-Olkin (MO) copula model has emerged as the standard tool for capturing dependence between components in failure analysis in reliability. In this model, shocks arrive at exponential random times and affect one or several components, inducing a natural correlation in the failure process. However, because the number of parameters of the model grows exponentially with the number of components, MO suffers from the “curse of dimensionality.” MO models are usually intended to be applied to design a network before its construction; therefore, it is natural to assume that only partial information about failure behavior can be gathered, mostly from similar existing networks. To construct such an MO model, we propose an optimization approach that defines the shock parameters of the MO copula so as to match the marginal failure probabilities and the correlations between these failures. To deal with the exponential number of parameters of this problem, we use a column-generation technique. We also discuss additional criteria that can be incorporated to obtain a suitable model. Our computational experiments show that the resulting MO model produces a close estimation of the network reliability, especially when the correlation between component failures is significant.
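A minimal simulation sketch of the MO mechanism described above, for a toy system with two components and three shocks. All rates are invented for illustration; the paper's contribution is the inverse problem of choosing such rates to match observed marginals and correlations.

```python
import random

def mo_failure_times(shocks, rng):
    """One draw from a Marshall-Olkin model: each shock has a rate and
    hits a subset of components; a component fails at the earliest
    arrival among the shocks that affect it."""
    fail = {}
    for subset, rate in shocks:
        t = rng.expovariate(rate)
        for c in subset:
            fail[c] = min(fail.get(c, float("inf")), t)
    return fail

rng = random.Random(7)
# two individual shocks plus one common shock (rates are assumptions)
shocks = [(("a",), 1.0), (("b",), 1.0), (("a", "b"), 0.5)]
draws = [mo_failure_times(shocks, rng) for _ in range(20000)]
# the common shock makes simultaneous failure possible:
# P(T_a == T_b) = 0.5 / (1 + 1 + 0.5) = 0.2
same = sum(d["a"] == d["b"] for d in draws) / len(draws)
```

Each marginal failure time here is exponential with rate 1.5 (its own shock plus the common one), while the shared shock creates the positive dependence that independent-failure models miss.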



Mejia, G., & Pereira, J. (2020). Multiobjective scheduling algorithm for flexible manufacturing systems with Petri nets. J. Manuf. Syst., 54, 272–284.
Abstract: In this work, we focus on general multiobjective scheduling problems that can be modeled using a Petri net framework. Due to their generality, Petri nets are a useful abstraction that captures multiple characteristics of real-life processes. To provide a general solution procedure for the abstraction, we propose three alternative approaches using an indirect scheme to represent the solution: (1) a genetic algorithm that combines two objectives through a weighted fitness function, (2) a non-dominated sorting genetic algorithm (NSGA-II) that explicitly addresses the multiobjective nature of the problem, and (3) a multiobjective local search approach that simultaneously explores multiple candidate solutions. These algorithms are tested in an extensive computational experiment showing the applicability of this general framework to obtain quality solutions.
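The non-dominated sorting at the heart of NSGA-II rests on Pareto dominance, which can be sketched in a few lines. The objective vectors below are invented, and this is a generic illustration rather than the paper's implementation.

```python
def dominates(u, v):
    """u dominates v (minimization): no worse in every objective and
    strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Non-dominated points: the first front in NSGA-II's sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# objective vectors (e.g. makespan, total tardiness) of five candidate schedules
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
front = pareto_front(pts)
```

The weighted-fitness variant in approach (1) would instead score each point as `w1 * f1 + w2 * f2` and keep a single best solution, which is why it can miss parts of the front that the non-dominated sorting retains.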



Munoz, G., Espinoza, D., Goycoolea, M., Moreno, E., Queyranne, M., & Rivera Letelier, O. (2018). A study of the Bienstock-Zuckerberg algorithm: applications in mining and resource constrained project scheduling. Comput. Optim. Appl., 69(2), 501–534.
Abstract: We study a Lagrangian decomposition algorithm recently proposed by Dan Bienstock and Mark Zuckerberg for solving the LP relaxation of a class of open pit mine project scheduling problems. In this study we show that the Bienstock-Zuckerberg (BZ) algorithm can be used to solve LP relaxations corresponding to a much broader class of scheduling problems, including the well-known Resource Constrained Project Scheduling Problem (RCPSP) and multi-modal variants of the RCPSP that consider batch processing of jobs. We present a new, intuitive proof of correctness for the BZ algorithm that works by casting it as a column generation algorithm. This analysis allows us to draw parallels with the well-known Dantzig-Wolfe (DW) decomposition algorithm. We discuss practical computational techniques for speeding up the performance of the BZ and DW algorithms on project scheduling problems. Finally, we present computational experiments independently testing the effectiveness of the BZ and DW algorithms on different sets of publicly available test instances. Our computational experiments confirm that the BZ algorithm significantly outperforms the DW algorithm for the problems considered, and that the proposed speed-up techniques can have a significant impact on the solve time. We provide some insights into what might explain this difference in performance.



Ozdemir, O., Munoz, F. D., Ho, J. L., & Hobbs, B. F. (2016). Economic Analysis of Transmission Expansion Planning With Price-Responsive Demand and Quadratic Losses by Successive LP. IEEE Trans. Power Syst., 31(2), 1096–1107.
Abstract: The growth of demand response programs and renewable generation is changing the economics of transmission. Planners and regulators require tools to address the implications of possible technology, policy, and economic developments for the optimal configuration of transmission grids. We propose a model for economic evaluation and optimization of interregional transmission expansion, as well as the optimal response of generators' investments to locational incentives, that accounts for Kirchhoff's laws and three important nonlinearities. The first is consumer response to energy prices, modeled using elastic demand functions. The second is resistance losses. The third is the product of line susceptance and flows in the linearized DC load flow model. We develop a practical method combining Successive Linear Programming with Gauss-Seidel iteration to co-optimize AC and DC transmission and generation capacities in a linearized DC network while considering hundreds of hourly realizations of renewable supply and load. We test our approach for a European electricity market model including 33 countries. The examples indicate that demand response can be a valuable resource that can significantly affect the economics, location, and amounts of transmission and generation investments. Further, representing losses and Kirchhoff's laws is also important in transmission policy analyses.



Pereira, J. (2016). The robust (min-max regret) single machine scheduling with interval processing times and total weighted completion time objective. Comput. Oper. Res., 66, 141–152.
Abstract: Single machine scheduling is a classical optimization problem that depicts multiple real-life systems in which a single resource (the machine) represents the whole system or its bottleneck operation. In this paper we consider the problem under a weighted completion time performance metric in which the processing times of the tasks to perform (the jobs) are uncertain, but can only take values from closed intervals. The objective is then to find a solution that minimizes the maximum absolute regret for any possible realization of the processing times. We present an exact branch-and-bound method to solve the problem, and conduct a computational experiment to ascertain the possibilities and limitations of the proposed method. The results show that the algorithm is able to optimally solve instances of moderate size (25–40 jobs, depending on the characteristics of the instance).
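For intuition, here is a brute-force sketch of the regret evaluation on a tiny invented instance (this is not the paper's branch-and-bound). It checks only extreme scenarios, which suffices here because the schedule cost is linear in the processing times and the scenario optimum is a minimum of linear functions, so the regret is convex and is maximized at a vertex of the box of intervals.

```python
from itertools import permutations, product

def wct(seq, p, w):
    """Total weighted completion time of job sequence `seq` under times p."""
    t = cost = 0
    for j in seq:
        t += p[j]
        cost += w[j] * t
    return cost

def max_regret(seq, lo, hi, w):
    """Worst-case absolute regret of `seq` over the extreme scenarios,
    where every processing time sits at its lower or upper bound."""
    jobs = range(len(lo))
    worst = 0
    for scen in product(*zip(lo, hi)):
        # optimal cost for a known scenario, by enumeration (tiny instance)
        opt = min(wct(s, scen, w) for s in permutations(jobs))
        worst = max(worst, wct(seq, scen, w) - opt)
    return worst

# two jobs with overlapping processing-time intervals and equal weights
lo, hi, w = (1, 4), (6, 5), (1, 1)
r01 = max_regret((0, 1), lo, hi, w)   # schedule job 0 first
r10 = max_regret((1, 0), lo, hi, w)
```

Sequence (0, 1) has maximum regret 2 and sequence (1, 0) has 4, so the min-max regret solution of this toy instance is (0, 1); the paper's algorithm finds such a sequence without full enumeration.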



Pereira, J. (2018). The robust (min-max regret) assembly line worker assignment and balancing problem. Comput. Oper. Res., 93, 27–40.
Abstract: Line balancing aims to assign the assembly tasks to the stations that compose the assembly line. A recent body of literature has been devoted to the heterogeneity introduced into the assembly process by different workers. In such an environment, task times depend on the worker performing the operation, and the problem aims at assigning tasks and workers to stations in order to maximize the throughput of the line. In this work, we consider an interval data version of the assembly line worker assignment and balancing problem (ALWABP) in which it is assumed that lower and upper bounds for the task times are known, and the objective is to find an assignment of tasks and workers to the workstations such that the maximum absolute regret among all of the possible scenarios is minimized. The relationship with other interval data min-max regret (IDMR) problems is investigated, the inapplicability of previous approximation methods is studied, regret evaluation is considered, and exact and heuristic solution methods are proposed and analyzed. The results of the proposed methods are compared in a computational experiment, showing the applicability of the method and of the theoretical results to solve the problem at hand. Moreover, these results apply not only to the problem under study but also to a more general class of problems.



Pereira, J., & Alvarez-Miranda, E. (2018). An exact approach for the robust assembly line balancing problem. Omega-Int. J. Manage. Sci., 78, 85–98.
Abstract: This work studies an assembly line balancing problem with uncertainty in the task times. To deal with this uncertainty, a robust formulation that handles changes in the operation times is put forward. To solve the problem, several lower bounds, dominance rules, and an enumeration procedure are proposed. These methods are tested in a computational experiment using different instances derived from the literature and are then compared to similar previous approaches. The results of the experiment show that the method is able to solve larger instances in shorter running times. Furthermore, the cost of protecting a solution against uncertainty is also investigated. The results highlight that protecting an assembly line against moderate levels of uncertainty can be achieved at the expense of a small amount of additional resources (stations).



Reus, L., & Fabozzi, F. J. (to appear). Robust Solutions to the Life-Cycle Consumption Problem. Comput. Econ., 19 pp.
Abstract: This paper demonstrates how the well-known discrete life-cycle consumption problem (LCP) can be solved using the Robust Counterpart (RC) formulation technique, as defined in Ben-Tal and Nemirovski (Math. Oper. Res. 23(4):769–805, 1998). To do this, we propose a methodology that involves applying a change of variables over the original consumption before deriving the RC. These transformations allow deriving a closed-form solution to the inner problem, and thus solving the LCP without facing the curse of dimensionality and without needing to specify a prior distribution for the investment opportunity set. We generalize the methodology and illustrate how it can be used to solve other types of problems. The results show that our methodology enables solving long-term instances of the LCP (30 years). We also show that it provides an alternative consumption pattern to the one provided by a benchmark that uses a dynamic programming approach: rather than finding a consumption that maximizes the expected lifetime utility, our solution delivers higher utilities for worst-case scenarios of future returns.



Reus, L., & Mulvey, J. M. (2016). Dynamic allocations for currency futures under switching regimes signals. Eur. J. Oper. Res., 253(1), 85–93.
Abstract: Over the last decades, speculative investors in the FX market have profited from the well-known currency carry trade strategy (CT). However, during currency or global financial crashes, CT produces substantial losses. In this work we present a methodology that enhances CT performance significantly: for our final strategy, backtests show that the mean-semivolatility ratio can be more than doubled with respect to the benchmark CT. To achieve this, we first identify and classify CT returns according to their behavior in different regimes, using a Hidden Markov Model (HMM). The model helps to determine when to open and close positions, depending on whether the regime is favorable to CT or not. Finally, we employ a mean-semivariance allocation model to improve allocations when positions are opened.
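The regime-classification step can be illustrated with a forward filter for a two-state Gaussian HMM. All parameters and return figures below are assumptions made for illustration, not estimates from the paper.

```python
import numpy as np

def forward_filter(returns, A, mu, sigma, pi0):
    """Filtered regime probabilities P(regime_t | r_1..t) for a Gaussian
    hidden Markov model with transition matrix A, per-regime means mu,
    standard deviations sigma, and initial distribution pi0."""
    def dens(x):  # Gaussian densities of observation x under each regime
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    alpha = pi0 * dens(returns[0])
    alpha /= alpha.sum()
    out = [alpha]
    for x in returns[1:]:
        alpha = (alpha @ A) * dens(x)   # predict, then correct
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

A = np.array([[0.95, 0.05],            # sticky "favorable" regime
              [0.10, 0.90]])           # and "crash" regime (assumed)
mu, sigma = np.array([0.01, -0.02]), np.array([0.01, 0.04])
rets = np.array([0.012, 0.008, 0.011, -0.06, -0.05, -0.07])
probs = forward_filter(rets, A, mu, sigma, np.array([0.5, 0.5]))
```

After the three large losses, the filtered probability of the crash regime is close to one, which is the kind of signal a CT strategy could use to close its positions.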



Reus, L., Munoz, F. D., & Moreno, R. (2018). Retail consumers and risk in centralized energy auctions for indexed longterm contracts in Chile. Energy Policy, 114, 566–577.
Abstract: Centralized energy auctions for long-term contracts are commonly used mechanisms to ensure supply adequacy, to promote competition, and to protect retail customers from price spikes in Latin America. In Chile, the law mandates that all distribution companies must hold long-term contracts – which are awarded on a competitive centralized auction – to cover 100% of the projected demand from three to fifteen years into the future. These contracts can be indexed to a series of financial parameters, including fossil fuel prices at reference locations. Drawing from portfolio theory, we use a simple example to illustrate the difficulties of selecting, through the current clearing mechanism that focuses on average costs and individual characteristics of the offers, a portfolio of long-term energy contracts that could simultaneously minimize the expected future cost of energy and limit the risk exposure of retail customers. In particular, we show that if the objective of the regulator is to limit the risk to regulated consumers, it could be optimal to include contracts that would not be selected based on individual characteristics of the offers and a least-cost auction objective, but that could significantly reduce the price variance of the overall portfolio due to diversification effects between indexing parameters.
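The diversification effect described above can be reproduced with two lines of portfolio algebra. All prices and covariances below are invented for illustration; they merely show how a contract that loses on average cost can still lower portfolio risk.

```python
import numpy as np

# two hypothetical contract offers indexed to different fuels: expected
# prices in $/MWh and the covariance of their indexing parameters
mean = np.array([50.0, 52.0])
cov = np.array([[100.0, -60.0],
                [-60.0, 120.0]])       # negatively correlated indices

def portfolio(w):
    """Expected cost and price variance of a contract mix with weights w."""
    w = np.array(w)
    return w @ mean, w @ cov @ w

least_cost = portfolio([1.0, 0.0])     # clearing on average cost alone
mixed = portfolio([0.6, 0.4])          # adds the "expensive" contract
```

The mixed portfolio pays 50.8 instead of 50.0 $/MWh in expectation but cuts the price variance from 100 to 26.4, which is exactly the trade-off the authors argue the current least-cost clearing mechanism cannot capture.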



Rojas, F., & Leiva, V. (2016). Inventory management in food companies with statistically dependent demand. Acad. Rev. Latinoam. Adm., 29(4), 450–485.
Abstract: Purpose – The objective of this paper is to propose a methodology based on random demand inventory models and dependence structures for a set of raw materials, referred to as “components”, used by food services that produce food rations referred to as “menus”. Design/methodology/approach – The contribution margins of food services that produce menus are optimised using random dependent-demand inventory models. The statistical dependence between the demands for components and/or menus is incorporated into the model through the multivariate Gaussian (normal) distribution. The contribution margins are optimised by using probabilistic inventory models for each component and stochastic programming with a differential evolution algorithm. Findings – When compared to the non-optimised system previously used by the company, the (average) expected contribution margin increases by 18.32 per cent when using a continuous review inventory model for groceries and uniperiodic models for perishable components (optimised system). Research limitations/implications – The multivariate modelling can be improved by using (a) other non-Gaussian (marginal) univariate probability distributions, by means of the copula method, which considers more complex statistical dependence structures; (b) time-dependence, through autoregressive and moving average time-series structures; (c) random modelling of lead times; and (d) demands for components with values equal to zero, using zero-inflated or adjusted probability distributions. Practical implications – Professional management of the supply chain allows users to register data concerning component identification, demand, and stock levels, to be subsequently used with the proposed methodology, which must be implemented computationally. Originality/value – The proposed multivariate methodology makes it possible to describe demand dependence structures through inventory models applicable to the components used to produce menus in food services.
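The uniperiodic model for perishable components is, in essence, a newsvendor problem. Below is a minimal empirical-quantile sketch; the demand figures and prices are invented, zero salvage value is assumed, and the paper's actual models are richer (they capture dependence between components and use stochastic programming).

```python
def newsvendor_order(demand_samples, unit_cost, unit_price):
    """Single-period (uniperiodic) order quantity for a perishable
    component: the smallest quantity at which the empirical demand CDF
    reaches the critical ratio (unit_price - unit_cost) / unit_price."""
    ratio = (unit_price - unit_cost) / unit_price
    xs = sorted(demand_samples)
    for i, q in enumerate(xs):
        if (i + 1) / len(xs) >= ratio:
            return q
    return xs[-1]

# hypothetical daily demand observations for one menu component
demand = [8, 10, 12, 14, 16, 18, 20, 22, 24, 26]
q = newsvendor_order(demand, unit_cost=3.0, unit_price=10.0)
```

With a 70% critical ratio the rule orders the 0.7-quantile of demand (20 units here); treating each component independently like this is precisely the baseline the multivariate methodology improves upon.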



Silva-Oelker, G., Jerez-Hanckes, C., & Fay, R. (2019). High-temperature tungsten-hafnia optimized selective thermal emitters for thermophotovoltaic applications. J. Quant. Spectrosc. Radiat. Transf., 231, 61–68.
Abstract: Tungsten-hafnia (W-HfO2) selective thermal emitters with high hemispherical emittance for thermophotovoltaic (TPV) applications are explored through numerical simulations. Two structures are analyzed: a planar multilayer stack and a grating. In both cases, high thermal emittance with low directional sensitivity can be obtained through suitable design choices. The designs are obtained by optimizing the structures using a genetic algorithm and a suitable cost function, along with simulations of the structures' emittance using rigorous coupled-wave analysis. Calculations show that these optimized structures possess high hemispherical thermal emittance in the wavelength range that matches the optical response of GaSb photovoltaic cells. For each structure, both the output power of the TPV cell and the conversion efficiency are studied as a function of emitter temperature, and physical understanding of the optimized structures is developed.



Yushimito, W. F., Ban, X. G., & Holguin-Veras, J. (2014). A Two-Stage Optimization Model for Staggered Work Hours. J. Intell. Transport. Syst., 18(4), 410–425.
Abstract: Traditional or standard work schedules refer to the requirement that workers must be at work on the same days and during the same hours each day. This requirement constrains work-related trip arrivals and generates morning and afternoon peak hours due to the concentration of work days and/or work hours. Alternative work schedules seek to reschedule work activities away from this traditional requirement. The aim is to flatten the peak hours by spreading the demand (i.e., assigning it to the shoulders of the peak hour), lowering the peak demand. This would not only reduce societal costs but also help to minimize the physical requirements. In this article, a two-stage optimization model is presented to quantify the effects of staggered work hours under incentive policies. In the first stage, a variation of the generalized quadratic assignment problem is used to represent the firm's assignment of workers to different work starting times. This is the input to a nonlinear complementarity problem that captures the behavior of the users of the transportation network, who seek to overcome the constraints imposed by working schedules (arrival times). Two examples are provided to show how the model can be used to (a) quantify the effects and response of the firm to external incentives and (b) evaluate what type of arrangements in starting times should be made in order to achieve a social optimum.

