Allende, H., Bravo, D., & Canessa, E. (2010). Robust design in multivariate systems using genetic algorithms. Qual. Quant., 44(2), 315–332.
Abstract: This paper presents a methodology based on genetic algorithms, which finds feasible and reasonably adequate solutions to problems of robust design in multivariate systems. We use a genetic algorithm to determine the appropriate control factor levels for simultaneously optimizing all of the responses of the system, considering the noise factors which affect it. The algorithm is guided by a desirability function which works with only one fitness function, although the system may have many responses. We validated the methodology using data obtained from a real system and also from a process simulator, considering univariate and multivariate systems. In all cases, the methodology delivered feasible solutions which accomplished the goals of robust design: responses very close to their target values, with minimum variability. Regarding the adjustment of the mean of each response to the target value, the algorithm performed very well. However, only in some of the multivariate cases was the algorithm able to significantly reduce the variability of the responses.
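Not from the paper — a minimal sketch, with a hypothetical two-response system and made-up targets, of how a single desirability-based fitness can guide a search over control-factor levels under noise (a plain random search stands in for the GA):

```python
import math
import random
import statistics

# Hypothetical two-response system: each response depends on two control
# factors (x1, x2) and a noise factor z. Targets and models are made up.
TARGETS = [10.0, 5.0]

def responses(x, z):
    x1, x2 = x
    return [x1 + 0.5 * x2 + z,           # response 1
            0.5 * x1 * x2 - 0.2 * z]     # response 2

def desirability(x, noise_draws=200):
    """Single fitness value for many responses: reward means close to their
    targets and penalize variability induced by the noise factor."""
    ys = [responses(x, random.gauss(0.0, 1.0)) for _ in range(noise_draws)]
    d = 1.0
    for i, target in enumerate(TARGETS):
        col = [y[i] for y in ys]
        mean, var = statistics.fmean(col), statistics.pvariance(col)
        d *= math.exp(-abs(mean - target) - var)
    return d ** (1.0 / len(TARGETS))     # geometric mean over responses

# Stand-in for the GA's outer loop: random search over control-factor levels.
best = max(([random.uniform(0, 10), random.uniform(0, 10)] for _ in range(300)),
           key=desirability)
print("control-factor levels:", [round(v, 2) for v in best])
```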
|
Alvarez-Miranda, E., & Pereira, J. (2017). Designing and constructing networks under uncertainty in the construction stage: Definition and exact algorithmic approach. Comput. Oper. Res., 81, 178–191.
Abstract: The present work proposes a novel Network Optimization problem whose core is to combine both network design and network construction scheduling under uncertainty into a single two-stage robust optimization model. The first-stage decisions correspond to those of a classical network design problem, while the second-stage decisions correspond to those of a network construction scheduling problem (NCS) under uncertainty. The resulting problem, which we will refer to as the Two-Stage Robust Network Design and Construction Problem (2SRNDC), aims at providing a modeling framework in which the design decision not only depends on the design costs (e.g., distances) but also on the corresponding construction plan (e.g., time to provide service to customers). We provide motivations, mixed integer programming formulations, and an exact algorithm for the 2SRNDC. Experimental results on a large set of instances show the effectiveness of the model in providing robust solutions, and the capability of the proposed algorithm to provide good solutions in reasonable running times.
|
Averbakh, I., & Pereira, J. (2021). Tree optimization based heuristics and metaheuristics in network construction problems. Comput. Oper. Res., 128, 105190.
Abstract: We consider a recently introduced class of network construction problems where edges of a transportation network need to be constructed by a server (construction crew). The server has a constant construction speed which is much lower than its travel speed, so relocation times are negligible with respect to construction times. It is required to find a construction schedule that minimizes a non-decreasing function of the times when various connections of interest become operational. Most problems of this class are strongly NP-hard on general networks, but are often tree-efficient, that is, polynomially solvable on trees. We develop a generic local search heuristic approach and two metaheuristics (Iterated Local Search and Tabu Search) for solving tree-efficient network construction problems on general networks, and explore them computationally. Results of computational experiments indicate that the methods have excellent performance.
Keywords: Network design; Scheduling; Network construction; Heuristics; Metaheuristics; Local search
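As a toy illustration of the problem class (not the authors' code; the instance and objective are hypothetical), the following sketch evaluates pair connection times for a construction sequence and improves the sequence by adjacent swaps, the kind of move a local search uses:

```python
# Toy instance (hypothetical): edges with construction lengths, and the
# vertex pairs whose connection times enter the objective.
EDGES = [("a", "b", 4.0), ("b", "c", 2.0), ("a", "c", 5.0), ("c", "d", 3.0)]
PAIRS = [("a", "c"), ("b", "d")]

def connection_times(order):
    """Time each pair of interest becomes connected when edges are built one
    by one (unit speed, negligible relocation, as in this problem class)."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    times, clock = {}, 0.0
    for u, v, length in order:
        clock += length                 # the crew finishes this edge
        parent[find(u)] = find(v)       # the edge becomes operational
        for p in PAIRS:
            if p not in times and find(p[0]) == find(p[1]):
                times[p] = clock
    return times

def objective(order):                   # any non-decreasing function works;
    return sum(connection_times(order).values())   # here: sum of times

# Minimal local-search move: swap adjacent edges while the objective improves.
order, improved = list(EDGES), True
while improved:
    improved = False
    for i in range(len(order) - 1):
        cand = order[:i] + [order[i + 1], order[i]] + order[i + 2:]
        if objective(cand) < objective(order):
            order, improved = cand, True
print([(u, v) for u, v, _ in order], objective(order))
```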
|
Averbakh, I., & Pereira, J. (2018). Lateness Minimization in Pairwise Connectivity Restoration Problems. INFORMS J. Comput., 30(3), 522–538.
Abstract: A network is given whose edges need to be constructed (or restored after a disaster). The lengths of edges represent the required construction/restoration times given available resources, and one unit of length of the network can be constructed per unit of time. All points of the network are accessible for construction at any time. For each pair of vertices, a due date is given. It is required to find a construction schedule that minimizes the maximum lateness of all pairs of vertices, where the lateness of a pair is the difference between the time when the pair becomes connected by an already constructed path and the pair's due date. We introduce the problem and analyze its structural properties, present a mixed-integer linear programming formulation, develop a number of lower bounds that are integrated in a branch-and-bound algorithm, and discuss results of computational experiments both for instances based on randomly generated networks and for instances based on 2010 Chilean earthquake data.
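The objective in this entry reduces to a one-line expression; a toy evaluation with made-up connection times and due dates:

```python
# Hypothetical schedule outcome: times at which pairs become connected,
# and their due dates. Lateness = connection time - due date.
connected_at = {("u1", "u2"): 7.0, ("u1", "u3"): 12.0, ("u2", "u3"): 12.0}
due_date     = {("u1", "u2"): 9.0, ("u1", "u3"): 10.0, ("u2", "u3"): 15.0}

max_lateness = max(connected_at[p] - due_date[p] for p in connected_at)
print(max_lateness)   # 2.0 -> pair ("u1","u3") is two time units late
```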
|
Barrera, J., Cancela, H., & Moreno, E. (2015). Topological optimization of reliable networks under dependent failures. Oper. Res. Lett., 43(2), 132–136.
Abstract: We address the design problem of a reliable network. Previous work assumes that link failures are independent. We discuss the impact of dropping this assumption. We show that under a common-cause failure model, dependencies between failures can affect the optimal design. We also provide an integer-programming formulation to solve this problem. Furthermore, we discuss how to handle the dependence between the links that participate in the solution and those that do not. Other dependency models are discussed as well.
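A small Monte Carlo sketch (hypothetical network and probabilities, not the paper's integer-programming model) showing how a common-cause shock changes network reliability relative to the independence assumption:

```python
import random

# Toy 4-node network (hypothetical); reliability = P{node 0 reaches node 3}.
LINKS = [(0, 1), (1, 3), (0, 2), (2, 3)]

def connected(up_links):
    reach, frontier = {0}, [0]
    while frontier:
        n = frontier.pop()
        for u, v in up_links:
            for a, b in ((u, v), (v, u)):
                if a == n and b not in reach:
                    reach.add(b)
                    frontier.append(b)
    return 3 in reach

def reliability(p_ind, p_shock, trials=50_000):
    """Common-cause model: every link fails independently with prob p_ind,
    and one shared shock (prob p_shock) takes down links 0 and 1 together."""
    ok = 0
    for _ in range(trials):
        shock = random.random() < p_shock
        up = [l for i, l in enumerate(LINKS)
              if random.random() >= p_ind and not (shock and i in (0, 1))]
        ok += connected(up)
    return ok / trials

print(reliability(0.05, 0.00))   # independence assumption
print(reliability(0.05, 0.02))   # dependent failures lower reliability
```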
|
Beck, A. T., Ribeiro, L. D., Valdebenito, M., & Jensen, H. (2022). Risk-Based Design of Regular Plane Frames Subject to Damage by Abnormal Events: A Conceptual Study. J. Struct. Eng., 148(1), 04021229.
Abstract: Constructed facilities should be robust with respect to the loss of load-bearing elements due to abnormal events. Yet, strengthening structures to withstand such damage has a significant impact on construction costs. Strengthening costs should be justified by the threat and should result in smaller expected costs of progressive collapse. In regular frame structures, beams and columns compete for the strengthening budget. In this paper, we present a risk-based formulation to address the optimal design of regular plane frames under element-loss conditions. We identify the threat probabilities for which strengthening has a better cost-benefit ratio than usual design, for different frame configurations, and study the impacts of strengthening extent and cost. The risk-based optimization reveals optimal points of compromise between competing failure modes (local bending of beams, local crushing of columns, and global pancake collapse) for frames of different aspect ratios. The conceptual study is based on a simple analytical model of progressive collapse, but it provides relevant insight for the design and strengthening of real structures.
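The cost-benefit trade-off at the heart of the formulation can be illustrated with a back-of-the-envelope expected-cost comparison (all numbers hypothetical):

```python
# Back-of-the-envelope comparison (construction cost normalised to 1):
# expected cost = build cost + threat risk.
def expected_cost(build, p_threat, p_collapse, collapse_cost=500.0):
    return build + p_threat * p_collapse * collapse_cost

usual        = expected_cost(1.00, p_threat=1e-3, p_collapse=0.50)
strengthened = expected_cost(1.15, p_threat=1e-3, p_collapse=0.05)
print(usual, strengthened)   # 1.25 vs 1.175: strengthening pays off here

# Break-even threat probability: 0.15 = p * (0.50 - 0.05) * 500
print(0.15 / (0.45 * 500))   # ~6.7e-4; below this, usual design wins
```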
|
Bhat, S. M., Ahmed, S., Bahar, A. N., Wahid, K. A., Otsuki, A., & Singh, P. (2023). Design of Cost-Efficient SRAM Cell in Quantum Dot Cellular Automata Technology. Electronics, 12(2), 367.
Abstract: SRAM (Static Random-Access Memory) is among the most vital memory technologies. SRAM is fast and robust, but it faces design challenges in nanoscale CMOS, such as high leakage, power consumption, and reliability. Quantum-dot Cellular Automata (QCA) is an alternative technology that can address these challenges. In this paper, a cost-efficient single-layer SRAM cell is proposed in QCA. The design has 39 cells with a latency of 1.5 clock cycles and achieves an overall improvement in cell count, area, latency, and QCA cost compared to previously reported designs. It can therefore be used to design nanoscale memory structures of higher order.
Keywords: QCA cell; memory cell; QCADesigner; low power dissipation; cost-efficient
|
Canessa, E., & Chaigneau, S. (2017). Response surface methodology for estimating missing values in a pareto genetic algorithm used in parameter design. Ing. Invest., 37(2), 89–98.
Abstract: We present an improved Pareto Genetic Algorithm (PGA), which finds solutions to problems of robust design in multi-response systems with 4 responses and as many as 10 control and 5 noise factors. Because some response values might not have been obtained in the robust design experiment and are needed in the search process, the PGA uses Response Surface Methodology (RSM) to estimate them. Not only did the PGA deliver solutions that adequately adjusted the response means to their target values with low variability, but it also found more Pareto-efficient solutions than a previous version of the PGA. This improvement makes it easier to find solutions that meet the trade-off among variance reduction, mean adjustment, and economic considerations. Furthermore, RSM allows estimating the means and variances of outputs in highly non-linear systems, making the new PGA appropriate for such systems.
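A minimal sketch of the RSM step (hypothetical design and response data): fit a full quadratic surface by least squares, then predict responses at settings absent from the experiment:

```python
import numpy as np

# Hypothetical face-centred design in two control factors with one response.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([8.2, 9.1, 10.3, 12.9, 8.5, 11.0, 9.0, 10.2, 9.8])

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

x_missing = np.array([[0.5, -0.5]])      # setting not run in the experiment
print(design_matrix(x_missing) @ beta)   # RSM estimate of the missing response
```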
|
Canessa, E., Droop, C., & Allende, H. (2012). An improved genetic algorithm for robust design in multivariate systems. Qual. Quant., 46(2), 665–678.
Abstract: In a previous article, we presented a genetic algorithm (GA), which finds solutions to problems of robust design in multivariate systems. Based on that GA, we developed a new GA with a new desirability function, built by aggregating the observed variance of the responses and the squared deviation between the mean of each response and its corresponding target value. Additionally, we also changed the crossover operator from a one-point to a uniform one. We used three different case studies to evaluate the performance of the new GA and also to compare it with the original one. The first case study involved using data from a univariate real system, and the other two employed data obtained from multivariate process simulators. In each of the case studies, the new GA delivered good solutions, which simultaneously adjusted the mean of each response to its corresponding target value. This performance was similar to that of the original GA. Regarding variability reduction, the new GA worked much better than the original one. In all the case studies, the new GA delivered solutions that simultaneously decreased the standard deviation of each response to almost the minimum possible value. Thus, we conclude that the new GA performs better than the original one, especially regarding variance reduction, which was the main problem exhibited by the original GA.
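Sketches of the two ingredients this entry changes, under assumed representations (a chromosome as a list of control-factor levels; the aggregation taken as a plain sum, which the abstract does not specify):

```python
import random

# The two crossover operators compared in this entry.
def one_point(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def uniform(p1, p2):       # each gene drawn from either parent with prob 1/2
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

# The new desirability aggregates, per response, the squared deviation of the
# mean from its target with the observed variance (sum is an assumption;
# lower is better, so the GA would minimise this penalty).
def penalty(means, variances, targets):
    return sum((m - t) ** 2 + v for m, v, t in zip(means, variances, targets))

print(uniform([1, 2, 3, 4], [9, 8, 7, 6]))
print(penalty([9.8, 5.3], [0.4, 0.1], [10.0, 5.0]))   # 0.04+0.4 + 0.09+0.1
```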
|
Canessa, E., Vera, S., & Allende, H. (2012). A new method for estimating missing values for a genetic algorithm used in robust design. Eng. Optimiz., 44(7), 787–800.
Abstract: This article presents an improved genetic algorithm (GA), which finds solutions to problems of robust design in multivariate systems with many control and noise factors. Since some values of responses of the system might not have been obtained from the robust design experiment, but may be needed in the search process, the GA uses response surface methodology (RSM) to estimate those values. In all test cases, the GA delivered solutions that adequately adjusted the mean of the responses to their corresponding target values and with low variability. The GA found more solutions than the previous versions of the GA, which makes it easier to find a solution that may meet the trade-off among variance reduction, mean adjustment and economic considerations. Moreover, RSM is a good method for estimating the mean and variance of the outputs of highly non-linear systems, which makes the new GA appropriate for optimizing such systems.
|
Cardenas, C., Guzman, F., Carmona, M., Munoz, C., Nilo, L., Labra, A., et al. (2020). Synthetic Peptides as a Promising Alternative to Control Viral Infections in Atlantic Salmon. Pathogens, 9(8), 600.
Abstract: Viral infections in salmonids represent an ongoing challenge for the aquaculture industry. Two RNA viruses, the infectious pancreatic necrosis virus (IPNV) and the infectious salmon anemia virus (ISAV), have become a latent risk without healing therapies available for either. In this context, antiviral peptides emerge as effective and relatively safe therapeutic molecules. Based on in silico analysis of the VP2 protein from IPNV and the RNA-dependent RNA polymerase from ISAV, a set of peptides was designed and chemically synthesized to block selected key events in their corresponding infectivity processes. The peptides were tested in fish cell lines in vitro, and four were selected for decreasing the viral load: peptide GIM182 for IPNV, and peptides GIM535, GIM538 and GIM539 for ISAV. In vivo tests with the IPNV GIM182 peptide were carried out using Salmo salar fish, showing a significant decrease of viral load and proving the safety of the peptide for fish. The results indicate that the use of peptides as antiviral agents in disease control might be a viable alternative to explore in aquaculture.
Keywords: interfering peptides; viral treatment; RNA fish viruses
|
Cardu, M., Godio, A., Oggeri, C., & Seccatore, J. (2022). The influence of rock mass fracturing on splitting and contour blasts. Geomech. Geoengin., 17(3), 822–833.
Abstract: Splitting and contour blasting aim to achieve suitable profiles by cutting along a surface, while common blasting is intended to detach and fragment large rock volumes by increasing the fracturing state. These techniques are adopted both in underground works (tunnels, caverns, quarries) and in surface excavations (quarries, mines, rock slope engineering). Contour blasts are widely used in mining and civil engineering to enhance performance while maintaining the safety of personnel and infrastructure. Splitting blasts are mainly used in dimension stone mining to obtain intact blocks of valuable ornamental stone. The parameters of controlled blasting (geometry, charge, blast agent) require accurate selection using optimised blasting patterns and explosive properties; most of the proposed methods are limited and unsatisfactory due to insufficient consideration of rock mass properties. A quick but effective comparison and analysis of the different characteristics of the rock mass and its heterogeneities is presented, as it indicates a better strategy for determining a tailored blasting design for a given site, thus significantly improving contour blasting quality.
|
Chaigneau, S. E., Puebla, G., & Canessa, E. C. (2016). Why the designer's intended function is central for proper function assignment and artifact conceptualization: Essentialist and normative accounts. Dev. Rev., 41, 38–50.
Abstract: People tend to think that the function intended by an artifact's designer is its real or proper function. Relatedly, people tend to classify artifacts according to their designer's intended function (DIF), as opposed to an alternative opportunistic function. This centrality of DIF has been shown in children from 6 years of age to adults, and it is not restricted to Western societies. We review four different explanations for the centrality of DIF, integrating developmental and adult data. Two of these explanations are essentialist accounts (causal and intentional essentialism). Two of them are normative accounts (conventional function and idea ownership). Though essentialist accounts have been very influential, we review evidence that shows their limitations. Normative accounts have been less predominant. We review evidence to support them, and discuss how they account for the data. In particular, we review evidence suggesting that the centrality of DIF can be explained as a case of idea ownership. This theory makes sense of a great deal of the existing data on the subject, reconciles contradictory results, links this line of work to other literatures, and offers an account of the observed developmental trend.
Keywords: Artifacts; Function; Design; Essentialism; Ownership
|
Christie, D. A., Lee, E. K. H., Innes, H., Noti, P. A., Charnay, B., Fauchez, T. J., et al. (2022). CAMEMBERT: A Mini-Neptunes General Circulation Model Intercomparison, Protocol Version 1.0. A CUISINES Model Intercomparison Project. Planet. Sci. J., 3(11), 261.
Abstract: With an increased focus on the observing and modeling of mini-Neptunes, there comes a need to better understand the tools we use to model their atmospheres. In this paper, we present the protocol for the Comparing Atmospheric Models of Extrasolar Mini-Neptunes Building and Envisioning Retrievals and Transits (CAMEMBERT) project, an intercomparison of general circulation models (GCMs) used by the exoplanetary science community to simulate the atmospheres of mini-Neptunes. We focus on two targets well studied both observationally and theoretically, with planned JWST cycle 1 observations: the warm GJ 1214b and the cooler K2-18b. For each target, we consider a temperature-forced case, a clear sky dual-gray radiative transfer case, and a clear sky multiband radiative transfer case, covering a range of complexities and configurations where we know differences exist between GCMs in the literature. This paper presents all the details necessary to participate in the intercomparison, with the intention of presenting the results in future papers. Currently, there are eight GCMs participating (ExoCAM, Exo-FMS, FMS PCM, Generic PCM, MITgcm, RM-GCM, THOR, and the Unified Model), and membership in the project remains open. Those interested in participating are invited to contact the authors.
Keywords: atmospheric circulation; dynamical cores; habitable zone; temperate; planets; design; clouds; K2-18b; suite; HARPS
|
Diaz, G., Munoz, F. D., & Moreno, R. (2020). Equilibrium Analysis of a Tax on Carbon Emissions with Pass-through Restrictions and Side-payment Rules. Energy J., 41(2), 93–122.
Abstract: Chile was the first country in Latin America to impose a tax on carbon-emitting electricity generators. However, the current regulation does not allow firms to include emission charges as costs for the dispatch and pricing of electricity in real time. The regulation also includes side-payment rules to reduce the economic losses of some carbon-emitting generating units. In this paper we develop an equilibrium model with endogenous investments in generation capacity to quantify the long-run economic inefficiencies of an emissions policy with such features in a competitive setting. We benchmark this policy against a standard tax on carbon emissions and a cap-and-trade program. Our results indicate that a carbon tax with such features can, at best, yield some reductions in carbon emissions at a much higher cost than standard emission policies. These findings highlight the critical importance of promoting short-run efficiency by pricing carbon emissions in the spot market in order to incentivize efficient investments in generating capacity in the long run.
Keywords: Carbon tax; Equilibrium modeling; Market design
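A toy merit-order dispatch (hypothetical plants and prices) illustrating the pass-through restriction the paper analyses: whether the carbon charge enters the dispatch cost changes which unit runs first:

```python
# Hypothetical plants: (name, fuel cost $/MWh, emissions tCO2/MWh, capacity MW)
TAX = 30.0
PLANTS = [("coal", 25.0, 1.0, 100.0), ("gas", 40.0, 0.4, 100.0)]

def dispatch(demand, pass_through):
    """Merit order with or without the carbon charge in the dispatch cost."""
    order = sorted(PLANTS,
                   key=lambda p: p[1] + (TAX * p[2] if pass_through else 0.0))
    plan, left = [], demand
    for name, cost, em, cap in order:
        q = min(cap, left)
        plan.append((name, q))
        left -= q
    return plan

print(dispatch(120.0, pass_through=False))  # coal first: 25 vs 40 $/MWh
print(dispatch(120.0, pass_through=True))   # gas first: 52 vs 55 $/MWh
```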
|
Duran, M., Godoy, E., Roman-Catafau, E., & Toledo, P. A. (2022). Open-pit slope design using a DtN-FEM: Parameter space exploration. Int. J. Rock Mech. Min. Sci., 149, 104950.
Abstract: Given the sustained decrease in the ore grades of mineral deposits, it becomes necessary to reach greater depths when extracting ore by open-pit mining. Steeper slope angles are thus likely to be required, leading to geomechanical instabilities. In order to determine excavation stability, mathematical modelling and numerical simulation are often used to compute the rock-mass stress state, to which some stability criterion needs to be added. A problem with this approach is that the volume surrounding the excavation has no clear borders, and in practice it might be regarded as an unbounded region. It is then necessary to use advanced methods capable of dealing efficiently with this difficulty. In this work, a DtN-FEM procedure is applied to calculate displacements and stresses in open-pit slopes under geostatic stress conditions. This procedure was previously devised by the authors to treat numerically this kind of problem, where the surrounding domain is semi-infinite. Its efficiency makes it possible to simulate multiple open-pit slope configurations in a short amount of time. Therefore, the method's potential for open-pit slope design is investigated. A regular open-pit slope geometry is assumed, parameterised by the overall-slope and bench-face angles. Multiple geometrically admissible slopes are explored and their stability is assessed using the computed stress field and the Mohr-Coulomb failure criterion. Regions of stability and instability are thus explored in the parametric space, opening the way for a new and flexible design tool for open-pit slopes and related problems.
Keywords: Dirichlet-to-Neumann map; Finite elements; Open-pit; Slope design
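The paper's DtN-FEM computes the full stress field; as a crude stand-in for the parameter-space exploration idea, the sketch below sweeps the overall-slope angle only and classifies stability with the textbook planar-failure factor of safety (all material parameters hypothetical):

```python
import math

# Hypothetical rock-mass parameters for a 200 m slope.
H, GAMMA = 200.0, 26.0                  # height [m], unit weight [kN/m^3]
C, PHI = 150.0, math.radians(35.0)      # cohesion [kPa], friction angle

def factor_of_safety(beta, psi):
    """Planar-failure FS for a plane dipping at psi inside a slope at beta."""
    w = 0.5 * GAMMA * H**2 * (1 / math.tan(psi) - 1 / math.tan(beta))
    area = H / math.sin(psi)            # sliding surface per unit thickness
    return (C * area + w * math.cos(psi) * math.tan(PHI)) / (w * math.sin(psi))

for beta_deg in range(40, 80, 5):       # sweep the overall-slope angle
    fs = min(factor_of_safety(math.radians(beta_deg), math.radians(p))
             for p in range(20, beta_deg, 2))   # worst candidate plane
    print(beta_deg, round(fs, 2), "stable" if fs >= 1.3 else "unstable")
```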
|
Faes, M. G. R., Valdebenito, M. A., Yuan, X. K., Wei, P. F., & Beer, M. (2021). Augmented reliability analysis for estimating imprecise first excursion probabilities in stochastic linear dynamics. Adv. Eng. Softw., 155, 102993.
Abstract: Imprecise probability allows quantifying the level of safety of a system taking into account the effect of both aleatory and epistemic uncertainty. The practical estimation of an imprecise probability is usually quite demanding from a numerical viewpoint, as it is necessary to propagate both types of uncertainty separately, leading in practical cases to a nested implementation, the so-called double loop approach. In view of this issue, this contribution presents an alternative approach that avoids the double loop by replacing the imprecise probability problem with an augmented, purely aleatory reliability analysis. Then, with the help of Bayes' theorem, it is possible to recover an expression for the failure probability as an explicit function of the imprecise parameters from the augmented reliability problem, which ultimately allows calculating the imprecise probability. The implementation of the proposed framework is investigated within the context of imprecise first excursion probability estimation of uncertain linear structures subject to imprecisely defined stochastic quantities and crisp stochastic loads. The associated augmented reliability problem is solved within the context of Directional Importance Sampling, leading to improved accuracy at reduced numerical cost. The application of the proposed approach is investigated by means of two examples. The results obtained indicate that the proposed approach can be highly efficient and accurate.
Keywords: failure probability; systems subject; interval; quantification; design
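For contrast with the paper's single augmented analysis, here is the naive double loop it avoids, on a linear limit state where the inner reliability analysis happens to be closed-form (coefficients and interval hypothetical):

```python
from math import erf, sqrt

# Linear limit state g = theta - A.X, X ~ N(0, I); theta is epistemic, known
# only to lie in an interval. Inner loop is closed-form: P_f = Phi(-theta/||A||).
A = [1.0, 2.0, 2.0]                      # ||A|| = 3
THETA_LO, THETA_HI = 8.0, 10.0

def failure_prob(theta):
    beta = theta / sqrt(sum(a * a for a in A))   # reliability index
    return 0.5 * (1.0 - erf(beta / sqrt(2.0)))   # Phi(-beta)

# Outer (epistemic) loop over the interval; in general the inner loop would
# be a full sampling-based reliability analysis at every grid point.
pfs = [failure_prob(THETA_LO + (THETA_HI - THETA_LO) * k / 50) for k in range(51)]
print(min(pfs), max(pfs))   # the imprecise probability [P_f lower, P_f upper]
```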
|
Gaona, J., Hernández, R., Guevara, F., & Bravo, V. (2022). Influence of a Function’s Coefficients and Feedback of the Mathematical Work When Reading a Graph in an Online Assessment System. Int. J. Emerg. Technol. Learn., 17(20), 77–98.
Abstract: This paper shows the results of an experiment applied to 170 students from two Chilean universities who solved a task about reading the graph of an affine function in an online assessment environment, where the parameters (the coefficients of the graphed affine function) are randomly defined by an ad-hoc algorithm, with automatic correction and automatic feedback. We distinguish two versions: one with integer coefficients and the other with decimal coefficients in the affine function. We observed that the nature of the coefficients impacts the mathematical work used by the students, and we focus on two strategies: direct estimation from the graph, or calculating the equation of the line. Moreover, feedback oriented towards the “estimation” strategy influences the mathematical work used by the students, even though a non-negligible group persists in the “calculating” strategy, which is partly explained by the students' perception of each strategy.
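A sketch of the kind of parameterised item the experiment uses (the generation ranges, tolerance, and feedback wording are invented): random integer or decimal coefficients, automatic correction, and estimation-oriented feedback:

```python
import random

def make_item(decimal=False):
    """Randomly parameterised affine function f(x) = a*x + b and a query x0."""
    a = random.choice([-3, -2, -1, 1, 2, 3])
    b = random.randint(-5, 5)
    if decimal:
        a, b = a + random.choice([0.25, 0.5, 0.75]), b + 0.5
    return a, b, random.randint(-3, 3)

def check(a, b, x0, answer, tol=0.25):
    """Automatic correction with feedback nudging the estimation strategy."""
    if abs(answer - (a * x0 + b)) <= tol:
        return "correct"
    return "close, but try reading f(x0) directly off the graph"

a, b, x0 = make_item(decimal=True)
print(f"Read f({x0}) from the graph of f(x) = {a}*x + ({b})")
print(check(a, b, x0, answer=a * x0 + b + 0.1))   # within tolerance
```
|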
Girard, A., Muneer, T., & Caceres, G. (2014). A validated design simulation tool for passive solar space heating: Results from a monitored house in West Lothian, Scotland. Indoor Built Environ., 23(3), 353–372.
Abstract: Determining the availability of renewable sources on a particular site would increase the efficiency of buildings through appropriate design. The overall aim of the project is to develop a pioneering software tool allowing the assessment of possible energy sources for any building design project. The package would allow the user to simulate the efficiency of the Passive Solar Space Heating referred to in the Low and Zero Carbon Energy Sources (LZCES) Strategic Guide issued by the Office of the Deputy Prime Minister (2006) and the Building Regulations. This research paper presents the tool for modelling the availability of passive solar sources in relation to low-carbon buildings. A 3-month experimental setup monitoring a solar house in West Lothian, Scotland, was also undertaken to validate the simulation tool. Experimental and simulation results were found to be in good agreement, following a one-to-one relationship, demonstrating the ability of the newly developed tool to assess the potential solar gain available to buildings. This modelling tool is highly valuable in consideration of Part L of the Building Regulations (updated in 2010).
|
Jerez, D. J., Jensen, H. A., Valdebenito, M. A., Misraji, M. A., Mayorga, F., & Beer, M. (2022). On the use of Directional Importance Sampling for reliability-based design and optimum design sensitivity of linear stochastic structures. Probabilistic Eng. Mech., 70, 103368.
Abstract: This contribution focuses on reliability-based design and optimum design sensitivity of linear dynamical structural systems subject to Gaussian excitation. Directional Importance Sampling (DIS) is implemented for reliability assessment, which allows first-order derivatives of the failure probabilities to be obtained as a byproduct of the sampling process. Thus, gradient-based solution schemes can be adopted by virtue of this feature. In particular, a class of feasible-direction interior point algorithms is implemented to obtain optimum designs, while a direction-finding approach is considered to obtain optimum design sensitivity measures as a post-processing step of the optimization results. To show the usefulness of the approach, an example involving a building structure is studied. Overall, the reliability sensitivity analysis framework enabled by DIS provides a potentially useful tool to address a practical class of design optimization problems.
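Plain directional simulation (a simpler relative of the DIS used here, without the importance-sampling density or the derivative byproduct) on a linear limit state with a known exact answer:

```python
import math
import random

# Linear limit state g(x) = B - A.x with x standard normal in 3 dimensions;
# the exact failure probability is Phi(-B/||A||). Coefficients hypothetical.
A, B = [1.0, 2.0, 2.0], 9.0     # ||A|| = 3, so the reliability index is 3

def chi2_sf_3(r2):
    """P(chi^2_3 > r2): chi-square survival function with 3 dof."""
    r = math.sqrt(r2)
    return math.erfc(r / math.sqrt(2)) + math.sqrt(2 / math.pi) * r * math.exp(-r2 / 2)

def pf_directional(n_dirs=20_000):
    total = 0.0
    for _ in range(n_dirs):
        d = [random.gauss(0.0, 1.0) for _ in A]
        norm = math.sqrt(sum(c * c for c in d))
        ad = sum(a * c / norm for a, c in zip(A, d))  # A . (unit direction)
        if ad > 0:                    # the limit state is reachable along d
            r = B / ad                # radius at which g reaches zero
            total += chi2_sf_3(r * r)
    return total / n_dirs

print(pf_directional(), 0.5 * math.erfc(3.0 / math.sqrt(2)))  # both ~1.35e-3
```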
|