Alvarez-Miranda, E., & Pereira, J. (2017). Designing and constructing networks under uncertainty in the construction stage: Definition and exact algorithmic approach. Comput. Oper. Res., 81, 178–191.
Abstract: The present work proposes a novel Network Optimization problem whose core is to combine both network design and network construction scheduling under uncertainty into a single two-stage robust optimization model. The first-stage decisions correspond to those of a classical network design problem, while the second-stage decisions correspond to those of a network construction scheduling problem (NCS) under uncertainty. The resulting problem, which we will refer to as the Two-Stage Robust Network Design and Construction Problem (2SRNDC), aims at providing a modeling framework in which the design decision not only depends on the design costs (e.g., distances) but also on the corresponding construction plan (e.g., time to provide service to customers). We provide motivations, mixed integer programming formulations, and an exact algorithm for the 2SRNDC. Experimental results on a large set of instances show the effectiveness of the model in providing robust solutions, and the capability of the proposed algorithm to provide good solutions in reasonable running times.
|
Alvarez-Miranda, E., Campos-Valdes, C., Quiroga, M. M., Moreno-Faguett, M., & Pereira, J. (2020). A Multi-Criteria Pen for Drawing Fair Districts: When Democratic and Demographic Fairness Matter. Mathematics, 8(9), 27 pp.
Abstract: Electoral systems are modified by individuals who have incentives to bias the rules for their political advantage (i.e., gerrymandering). To prevent gerrymandering, legislative institutions can rely on mathematical tools to guarantee democratic fairness and territorial contiguity. These tools have been successfully used in the past; however, there is a need to accommodate additional meanings of the term fairness within the electoral systems of modern democracies. In this paper, we present an optimization framework that considers multiple criteria for drawing districts and assigning the number of representatives. Besides some typical districting criteria (malapportionment and contiguity), we introduce novel criteria for ensuring territorial equilibrium and incentives for candidates to deploy their representation efforts fairly during their campaign and period in office. We test the method, which we denote as the Multi-criteria Pen, on a recent and a forthcoming reform of the Chilean electoral system. The results show the potential of our tool to improve the current territorial design and offer insights on the motivations, objectives, and deficiencies of both reform plans.
|
Averbakh, I., & Pereira, J. (2018). Lateness Minimization in Pairwise Connectivity Restoration Problems. INFORMS J. Comput., 30(3), 522–538.
Abstract: A network is given whose edges need to be constructed (or restored after a disaster). The lengths of edges represent the required construction/restoration times given available resources, and one unit of length of the network can be constructed per unit of time. All points of the network are accessible for construction at any time. For each pair of vertices, a due date is given. It is required to find a construction schedule that minimizes the maximum lateness of all pairs of vertices, where the lateness of a pair is the difference between the time when the pair becomes connected by an already constructed path and the pair's due date. We introduce the problem and analyze its structural properties, present a mixed-integer linear programming formulation, develop a number of lower bounds that are integrated in a branch-and-bound algorithm, and discuss results of computational experiments both for instances based on randomly generated networks and for instances based on 2010 Chilean earthquake data.
|
Barrera, J., Moreno, E., & Munoz, G. (2022). Convex envelopes for ray-concave functions. Optim. Lett., 16, 2221–2240.
Abstract: Convexification based on convex envelopes is ubiquitous in the non-linear optimization literature. Thanks to considerable efforts of the optimization community for decades, we are able to compute the convex envelopes of a considerable number of functions that appear in practice, and thus obtain tight and tractable approximations to challenging problems. We contribute to this line of work by considering a family of functions that, to the best of our knowledge, has not been considered before in the literature. We call this family ray-concave functions. We show sufficient conditions that allow us to easily compute closed-form expressions for the convex envelope of ray-concave functions over arbitrary polytopes. With these tools, we are able to provide new perspectives to previously known convex envelopes and derive a previously unknown convex envelope for a function that arises in probability contexts.
Keywords: Convex envelopes; Nonlinear programming; Convex optimization
|
Barrera, J., Moreno, E., Munoz, G., & Romero, P. (2022). Exact reliability optimization for series-parallel graphs using convex envelopes. Networks, 80(2), 235–248.
Abstract: Given its wide spectrum of applications, the classical problem of all-terminal network reliability evaluation remains a highly relevant problem in network design. The associated optimization problem (finding a network with the best possible reliability under multiple constraints) presents an even more complex challenge, which has been addressed in the scientific literature but usually under strong assumptions on failure probabilities and/or the network topology. In this work, we propose a novel reliability optimization framework for network design with failure probabilities that are independent but not necessarily identical. We leverage the linear-time evaluation procedure for network reliability in the series-parallel graphs of Satyanarayana and Wood (1985) to formulate the reliability optimization problem as a mixed-integer nonlinear optimization problem. To solve this nonconvex problem, we use classical convex envelopes of bilinear functions, introduce custom cutting planes, and propose a new family of convex envelopes for expressions that appear in the evaluation of network reliability. Furthermore, we exploit the refinements produced by spatial branch-and-bound to locally strengthen our convex relaxations. Our experiments show that, using our framework, one can efficiently obtain optimal solutions to challenging instances of this problem.
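The "classical convex envelopes of bilinear functions" mentioned in this abstract are the McCormick envelopes. As a minimal illustrative sketch (a textbook construction, not code from the paper), the bounds they induce on a bilinear term w = x*y over a box can be computed as:

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """McCormick envelope of the bilinear term w = x * y over the box
    [xl, xu] x [yl, yu]: returns (lower, upper) bounds valid at (x, y)."""
    lo = max(xl * y + x * yl - xl * yl,
             xu * y + x * yu - xu * yu)  # convex underestimators
    hi = min(xu * y + x * yl - xu * yl,
             xl * y + x * yu - xl * yu)  # concave overestimators
    return lo, hi
```

For example, at x = y = 0.5 over the unit box the envelope brackets the true product 0.25 between 0 and 0.5; the relaxation tightens as spatial branch-and-bound shrinks the box, which is the local strengthening the abstract refers to.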
|
Beck, A. T., Ribeiro, L. D., Valdebenito, M., & Jensen, H. (2022). Risk-Based Design of Regular Plane Frames Subject to Damage by Abnormal Events: A Conceptual Study. J. Struct. Eng., 148(1), 04021229.
Abstract: Constructed facilities should be robust with respect to the loss of load-bearing elements due to abnormal events. Yet, strengthening structures to withstand such damage has a significant impact on construction costs. Strengthening costs should be justified by the threat and should result in smaller expected costs of progressive collapse. In regular frame structures, beams and columns compete for the strengthening budget. In this paper, we present a risk-based formulation to address the optimal design of regular plane frames under element loss conditions. We identify the threat probabilities for which strengthening offers a better cost-benefit balance than conventional design, for different frame configurations, and study the impacts of strengthening extent and cost. The risk-based optimization reveals optimum points of compromise between competing failure modes: local bending of beams, local crushing of columns, and global pancake collapse, for frames of different aspect ratios. The conceptual study is based on a simple analytical model for progressive collapse, but it provides relevant insight for the design and strengthening of real structures.
|
Behling, R., Lara, H., & Oviedo, H. (2023). Computing the completely positive factorization via alternating minimization. Numer. Linear Algebra Appl., Early Access.
Abstract: In this article, we propose a novel alternating minimization scheme for finding completely positive factorizations. In each iteration, our method splits the original factorization problem into two optimization subproblems, the first one being an orthogonal Procrustes problem, which is taken over the orthogonal group, and the second one over the set of entrywise positive matrices. We present both a convergence analysis of the method and favorable numerical results.
|
Canessa, G., Moreno, E., & Pagnoncelli, B. K. (2021). The risk-averse ultimate pit problem. Optim. Eng., 22, 2655–2678.
Abstract: In this work, we consider a risk-averse ultimate pit problem where the grade of the mineral is uncertain. We derive conditions under which we can generate a set of nested pits by varying the risk level instead of using revenue factors. We propose two properties that we believe are desirable for the problem: risk nestedness, which means the pits generated for different risk aversion levels should be contained in one another, and additive consistency, which states that preferences in terms of order of extraction should not change if independent sectors of the mine are added as precedences. We show that only an entropic risk measure satisfies these properties and propose a two-stage stochastic programming formulation of the problem, including an efficient approximation scheme to solve it. We illustrate our approach in a small self-constructed example, and apply our approximation scheme to a real-world section of the Andina mine, in Chile.
Keywords: Ultimate pit; Mining; Risk-averse optimization; Integer programming
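The entropic risk measure singled out by this abstract has a simple closed form on discrete distributions; a minimal sketch (the discretization and parameter names are ours, not the paper's):

```python
import math

def entropic_risk(losses, probs, gamma):
    """Entropic risk measure rho_gamma(X) = (1/gamma) * ln E[exp(gamma * X)]
    over a discrete loss distribution; gamma > 0 is the risk-aversion level."""
    expectation = sum(p * math.exp(gamma * x) for x, p in zip(losses, probs))
    return math.log(expectation) / gamma
```

As gamma approaches 0 the measure recovers the expected value, and it grows with gamma; varying this risk level is the mechanism the paper uses in place of revenue factors to generate nested pits.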
|
Cho, A. D., Carrasco, R. A., & Ruz, G. A. (2022). Improving Prescriptive Maintenance by Incorporating Post-Prognostic Information Through Chance Constraints. IEEE Access, 10, 55924–55932.
Abstract: Maintenance is one of the critical areas in operations in which a careful balance between preventive costs and the effect of failures is required. Thanks to the increasing data availability, decision-makers can now use models to better estimate, evaluate, and achieve this balance. This work presents a maintenance scheduling model which considers prognostic information provided by a predictive system. In particular, we developed a prescriptive maintenance system based on run-to-failure signal segmentation and a Long Short-Term Memory (LSTM) neural network. The LSTM network returns the prediction of the remaining useful life when a fault is present in a component. We incorporate such predictions and their inherent errors in a decision support system based on a stochastic optimization model, incorporating them via chance constraints. These constraints control the number of failed components and consider the physical distance between them to reduce sparsity and minimize the total maintenance cost. Through experimental results, we show that this approach can compute solutions for relatively large instances in reasonable computational time. Furthermore, the decision-maker can identify the correct operating point depending on the balance between costs and failure probability.
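Chance constraints of the kind described above are often enforced through a sample (scenario) approximation; a minimal sketch of the feasibility check (the scenario representation is our assumption, not the paper's model):

```python
def chance_constraint_holds(scenario_failures, limit, epsilon):
    """Sample-approximation check of the chance constraint
    P(number of failed components <= limit) >= 1 - epsilon,
    where scenario_failures[s] is the failure count in sampled scenario s."""
    satisfied = sum(1 for f in scenario_failures if f <= limit)
    return satisfied / len(scenario_failures) >= 1 - epsilon
```

Tightening epsilon trades maintenance cost against failure probability, which is the operating-point choice the abstract leaves to the decision-maker.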
|
Contreras, M., Pellicer, R., & Villena, M. (2017). Dynamic optimization and its relation to classical and quantum constrained systems. Physica A, 479, 12–25.
Abstract: We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop lambda-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Psi(x, t) = exp(iS(x, t)) in the quantum Schrodinger equation, a non-linear partial differential equation is obtained for the function S. For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
|
Dang, C., Wei, P. F., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Interval uncertainty propagation by a parallel Bayesian global optimization method. Appl. Math. Model., 108, 220–235.
Abstract: This paper is concerned with approximating the scalar response of a complex computational model subjected to multiple input interval variables. Such a task is formulated as finding both the global minimum and maximum of a computationally expensive black-box function over a prescribed hyper-rectangle. On this basis, a novel non-intrusive method, called `triple-engine parallel Bayesian global optimization', is proposed. The method begins by assuming a Gaussian process prior (which can also be interpreted as a surrogate model) over the response function. The main contribution lies in developing a novel infill sampling criterion, i.e., a triple-engine pseudo expected improvement strategy, to identify multiple promising points for minimization and/or maximization based on the past observations at each iteration. By doing so, these identified points can be evaluated on the real response function in parallel. Another potential benefit is that both the lower and upper bounds of the model response can be obtained with a single run of the developed method. Four numerical examples of varying complexity are investigated to demonstrate the proposed method against some existing techniques, and results indicate that significant computational savings can be achieved by making full use of prior knowledge and parallel computing.
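The paper's triple-engine pseudo expected improvement builds on the classical expected-improvement acquisition function. For reference, the classical criterion for minimization under a Gaussian posterior can be sketched as follows (this is the textbook formula, not the authors' variant):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Classical expected-improvement acquisition for minimization under a
    Gaussian posterior N(mu, sigma^2); f_best is the incumbent minimum."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return sigma * (z * cdf + pdf)
```

The paper's criterion additionally proposes multiple points per iteration, for both minimization and maximization, so that the expensive model can be evaluated in parallel.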
|
Escapil-Inchauspé, P., & Ruz, G. A. (2023). Hyper-parameter tuning of physics-informed neural networks: Application to Helmholtz problems. Neurocomputing, 561, 126826.
Abstract: We consider physics-informed neural networks (PINNs) (Raissi et al., 2019) for forward physical problems. In order to find an optimal PINN configuration, we introduce a hyper-parameter optimization (HPO) procedure via Gaussian process-based Bayesian optimization. We apply the HPO to the Helmholtz equation on bounded domains and conduct a thorough study, focusing on: (i) performance, (ii) the collocation points density r and (iii) the frequency kappa, confirming the applicability and necessity of the method. Numerical experiments are performed in two and three dimensions, including comparison to finite element methods.
|
González-Castillo, M., Navarrete, P., Tapia, T., Lorca, A., Olivares, D., & Negrete-Pincetic, M. (2023). Cleaning scheduling in photovoltaic solar farms with deterministic and stochastic optimization. Sustain. Energy, Grids Netw., 36, 101147.
Abstract: Soiling on solar panels reduces their ability to capture solar irradiance, thus reducing the modules' power output. To reduce losses due to soiling, the panels are cleaned. This cleaning represents a relevant share of the operation and maintenance cost of solar farms, and several cleaning technologies are available with different costs and durations. In this context, this paper proposes a method for scheduling cleanings on the dates where they generate the greatest utility in terms of income from energy sales and costs associated with cleaning. To this end, two optimization models that deliver a schedule of dates with the best income-cost balance are proposed and compared: a deterministic Mixed Integer Linear Program and a stochastic Markov Decision Process. Numerical results show that both models outperform the baseline case by approximately 4.6%. A simulator was built and both models were compared to the baseline case over 10,000 rainfall and irradiance scenarios. The stochastic model performed best in all scenarios, showing that modeling rainfall increases profitability in the face of uncertainty.
|
Guevara, E., Babonneau, F., Homem-de-Mello, T., & Moret, S. (2020). A machine learning and distributionally robust optimization framework for strategic energy planning under uncertainty. Appl. Energy, 271, 18 pp.
Abstract: This paper investigates how the choice of stochastic approaches and distribution assumptions impacts strategic investment decisions in energy planning problems. We formulate a two-stage stochastic programming model assuming different distributions for the input parameters and show that there is significant discrepancy among the associated stochastic solutions and other robust solutions published in the literature. To remedy this sensitivity issue, we propose a combined machine learning and distributionally robust optimization (DRO) approach which produces more robust and stable strategic investment decisions with respect to uncertainty assumptions. DRO is applied to deal with ambiguous probability distributions and machine learning is used to restrict the DRO model to a subset of important uncertain parameters, ensuring computational tractability. Finally, we perform an out-of-sample simulation process to evaluate the performance of the solutions. The Swiss energy system is used as a case study throughout the paper to validate the approach.
|
Henriquez, P. A., & Ruz, G. A. (2019). Noise reduction for near-infrared spectroscopy data using extreme learning machines. Eng. Appl. Artif. Intell., 79, 13–22.
Abstract: The near-infrared (NIR) spectroscopy technique is an effective approach to predicting chemical properties and is typically applied in the petrochemical, agricultural, medical, and environmental sectors. NIR spectra are usually of very high dimension and contain huge amounts of information. Most of the information is irrelevant to the target problem and some is simply noise. Thus, it is not an easy task to discover the relationship between NIR spectra and the predictive variable. However, this kind of regression analysis is one of the main topics of machine learning, so machine learning techniques play a key role in NIR-based analytical approaches. Pre-processing of NIR spectral data has become an integral part of chemometrics modeling. The objective of the pre-processing is to remove physical phenomena (noise) from the spectra in order to improve the regression or classification model. In this work, we propose to reduce the noise using extreme learning machines, which have shown good predictive performance in regression applications as well as in large dataset classification tasks. For this, we use a novel algorithm called C-PL-ELM, which has a parallel architecture based on two non-linear layers. Using the soft margin loss function concept, we incorporate two Lagrange multipliers with the objective of including the noise of spectral data. Six real-life datasets were analyzed to illustrate the performance of the developed models. The results for regression and classification problems confirm the advantages of using the proposed method in terms of root mean square error and accuracy.
|
Homem-de-Mello, T., Kong, Q. X., & Godoy-Barba, R. (2022). A Simulation Optimization Approach for the Appointment Scheduling Problem with Decision-Dependent Uncertainties. INFORMS J. Comput., Early Access.
Abstract: The appointment scheduling problem (ASP) studies how to manage patient arrivals to a healthcare system to improve system performance. An important challenge occurs when some patients may not show up for an appointment. Although the ASP is well studied in the literature, the vast majority of the existing work does not consider the well-observed phenomenon that patient no-show is influenced by the appointment time, the usual decision variable in the ASP. This paper studies the ASP with random service time (exogenous uncertainty) with known distribution and patient decision-dependent no-show behavior (endogenous uncertainty). This problem belongs to the class of stochastic optimization with decision-dependent uncertainties. Such problems are notoriously difficult as they are typically nonconvex. We propose a stochastic projected gradient path (SPGP) method to solve the problem, which requires the development of a gradient estimator of the objective function (a nontrivial task, as the literature on gradient-based optimization algorithms for problems with decision-dependent uncertainty is very scarce and unsuitable for our model). Our method can solve the ASP problem under arbitrarily smooth show-up probability functions. We present solutions under different patterns of no-show behavior and demonstrate that breaking the assumption of constant show-up probability substantially changes the scheduling solutions. We conduct numerical experiments in a variety of settings to compare our results with those obtained with a distributionally robust optimization method developed in the literature. The cost reduction obtained with our method, which we call the value of distribution information, can be interpreted as how much the system performance can be improved by knowing the distribution of the service times, compared to not knowing it.
We observe that the value of distribution information is up to 31% of the baseline cost, and that such value is higher when the system is crowded and/or the waiting time cost is relatively high.
Summary of Contribution: This paper studies an appointment scheduling problem under time-dependent patient no-show behavior, a situation commonly observed in practice. The problem belongs to the class of stochastic optimization problems with decision-dependent uncertainties in the operations research literature. Such problems are notoriously difficult to solve as a result of the lack of convexity, and the present case requires different techniques because of the presence of continuous distributions for the service times. A stochastic projected gradient path method, which includes the development of specialized techniques to estimate the gradient of the objective function, is proposed to solve the problem. For problems with a similar structure, the algorithm can be applied once the gradient estimator of the objective function is obtained. Extensive numerical studies are presented to demonstrate the quality of the solutions, the importance of modeling time-dependent no-shows in appointment scheduling, and the value of using distribution information about the service times.
|
Inzunza, A., Munoz, F. D., & Moreno, R. (2021). Measuring the effects of environmental policies on electricity markets risk. Energy Econ., 102, 105470.
Abstract: This paper studies how environmental policies, such as renewable portfolio standards (RPS) and carbon taxes, might contribute to reducing risk exposure in the electricity generation sector. We illustrate this effect by first computing long-term market equilibria of the Chilean generation sector for the year 2035 using a risk-averse planning model, considering uncertainty of hydrological scenarios and fossil fuel prices as well as distinct levels of risk aversion, but assuming no environmental policies in place. We then compare these risk-averse equilibria to generation portfolios obtained by imposing several levels of RPS and carbon taxes in a market with risk-neutral firms, separately. Our results show that the implementation of both policies can provide incentives for investments in portfolios of generation technologies that limit the risk exposure of the system, particularly when high levels of RPS (35%) or high carbon taxes (35 $/tonCO2) are applied. However, we find that in the case of a hydrothermal system, the resulting market equilibria under RPS policies yield expected generation cost and risk levels (i.e. standard deviation of costs) that are more similar to the efficient portfolios determined using a risk-averse planning model than the ones we find under the carbon tax.
|
Kalahasthi, L., Holguin-Veras, J., & Yushimito, W. F. (2022). A freight origin-destination synthesis model with mode choice. Transp. Res. E-Logist. Transp. Rev., 157, 102595.
Abstract: This paper develops a novel procedure to conduct a Freight Origin-Destination Synthesis (FODS) that jointly estimates the trip distribution, mode choice, and the empty trips by truck and rail that provide the best match to the observed freight traffic counts. Four models are integrated: (1) a gravity model for trip distribution, (2) a binary logit model for mode choice, (3) a Noortman and Van Es model for truck empty trips, and (4) a Noortman and Van Es model for rail empty trips. The estimation process entails an iterative minimization of a nonconvex objective function, the summation of squared errors of the estimated truck and rail traffic counts, with respect to the five model parameters. Of the two methods tested to address the nonconvexity, an interior point method with a set of random starting points (Multi-Start algorithm) outperformed the Ordinary Least Squares (OLS) inference technique. The potential of this methodology is examined using a hypothetical example of developing a nationwide freight demand model for Bangladesh. This research improves the existing FODS techniques that use readily available secondary data such as traffic counts and link costs, allowing transportation planners to evaluate policy outcomes without needing expensive freight data collection. This paper presents the results, model validation, limitations, and future scope for improvements.
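The gravity model in component (1) is commonly made doubly constrained and balanced by iterative proportional fitting; a minimal sketch under that standard reading (the paper's actual specification is calibrated against traffic counts and may differ):

```python
def gravity_trip_distribution(origins, destinations, deterrence, iters=50):
    """Doubly constrained gravity model T_ij = a_i * b_j * O_i * D_j * f(c_ij):
    balancing factors a, b are found by iterative proportional fitting so that
    row sums match origin totals and column sums match destination totals."""
    n, m = len(origins), len(destinations)
    a, b = [1.0] * n, [1.0] * m
    for _ in range(iters):
        for i in range(n):
            a[i] = 1.0 / sum(b[j] * destinations[j] * deterrence[i][j]
                             for j in range(m))
        for j in range(m):
            b[j] = 1.0 / sum(a[i] * origins[i] * deterrence[i][j]
                             for i in range(n))
    return [[a[i] * b[j] * origins[i] * destinations[j] * deterrence[i][j]
             for j in range(m)] for i in range(n)]
```

Here `deterrence[i][j]` plays the role of f(c_ij), a decreasing function of the link cost between zones i and j.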
|
Lagos, F., & Pereira, J. (2023). Multi-armed bandit-based hyper-heuristics for combinatorial optimization problems. Eur. J. Oper. Res., 312(1), 70–91.
Abstract: There are significant research opportunities in the integration of Machine Learning (ML) methods and Combinatorial Optimization Problems (COPs). In this work, we focus on metaheuristics to solve COPs that have an important learning component. These algorithms must explore a solution space and learn from the information they obtain in order to find high-quality solutions. Among the metaheuristics, we study Hyper-Heuristics (HHs), algorithms that, given a number of low-level heuristics, iteratively select and apply heuristics to a solution. The HH we consider has a Markov model to produce sequences of low-level heuristics, which we combine with a Multi-Armed Bandit Problem (MAB)-based method to learn its parameters. This work proposes several improvements to the HH metaheuristic that yield better learning when solving problem instances. Specifically, this is the first work in HHs to present Exponential Weights for Exploration and Exploitation (EXP3) as a learning method, an algorithm that is able to deal with adversarial settings. We also present a case study for the Vehicle Routing Problem with Time Windows (VRPTW), for which we include a list of low-level heuristics that have been proposed in the literature. We show that our algorithms can handle a large and diverse list of heuristics, illustrating that they can be easily configured to solve COPs of different nature. The computational results indicate that our algorithms are competitive methods for the VRPTW (2.16% gap on average with respect to the best known solutions), demonstrating the potential of these algorithms to solve COPs. Finally, we show how the algorithms can even detect low-level heuristics that do not contribute to finding better solutions to the problem.
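EXP3 itself is a standard adversarial-bandit algorithm built from two steps, a mixed selection distribution and an importance-weighted exponential update; a minimal sketch (parameter names are ours; the paper embeds the learner inside a Markov-model hyper-heuristic):

```python
import math
import random

def exp3_select(weights, gamma):
    """Draw an arm from the EXP3 distribution: weight-proportional
    probabilities mixed with uniform exploration at rate gamma."""
    k, total = len(weights), sum(weights)
    probs = [(1.0 - gamma) * w / total + gamma / k for w in weights]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return k - 1, probs

def exp3_update(weights, probs, arm, reward, gamma):
    """Importance-weighted exponential update for the pulled arm
    (reward assumed in [0, 1])."""
    k = len(weights)
    estimate = reward / probs[arm]  # unbiased estimate of the arm's reward
    weights[arm] *= math.exp(gamma * estimate / k)
```

In the hyper-heuristic setting, arms would correspond to low-level heuristics and the reward to the improvement achieved when a heuristic is applied to the current solution.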
|
Lagos, T., Armstrong, M., Homem-de-Mello, T., Lagos, G., & Saure, D. (2021). A framework for adaptive open-pit mining planning under geological uncertainty. Optim. Eng., 72, 102086.
Abstract: Mine planning optimization aims at maximizing the profit obtained from extracting valuable ore. Beyond its theoretical complexity (the open-pit mining problem with capacity constraints reduces to a knapsack problem with precedence constraints, which is NP-hard), practical instances of the problem usually involve a large to very large number of decision variables, typically of the order of millions for large mines. Additionally, any comprehensive approach to mine planning ought to consider the underlying geostatistical uncertainty, as only limited information obtained from drill hole samples of the mineral is initially available. In this regard, as blocks are extracted sequentially, information about the ore grades of blocks yet to be extracted changes based on the blocks that have already been mined. Thus, the problem lies in the class of multi-period large-scale stochastic optimization problems with decision-dependent information uncertainty. Such problems are exceedingly hard to solve, so approximations are required. This paper presents an adaptive optimization scheme for multi-period production scheduling in open-pit mining under geological uncertainty that allows us to solve practical instances of the problem. Our approach is based on a rolling-horizon adaptive optimization framework that learns from new information that becomes available as blocks are mined. By considering the evolution of geostatistical uncertainty, the proposed optimization framework produces an operational policy that reduces the risk of the production schedule. Our numerical tests with mines of moderate sizes show that our rolling-horizon adaptive policy gives consistently better results than a non-adaptive stochastic optimization formulation, for a range of realistic problem instances.
|