
Harrison, R., & Silva, F. (2019). A Game Theoretic Analysis of Voluntary Euthanasia and Physician-Assisted Suicide. Econ. Inq., to appear, 19 pp.
Abstract: In countries/states where voluntary euthanasia (VE) or physician-assisted suicide (PAS) is legal, the patient's decision about whether to request VE or PAS relies heavily on the information others provide. We use the tools of microeconomic theory to study how communication between the patient, his family, and his physician influences the patient's decision. We argue that families have considerable power over the patient and that the amount of information transmitted from physician to patient might be severely diminished as a result of legalizing VE or PAS. We discuss our main results in the context of the ongoing normative debate over the legalization of VE and PAS. (JEL D8, I12)



Harrison, R., Hernandez, G., & Munoz, R. (2019). A discrete model of market interaction in the presence of social networks and price discrimination. Math. Soc. Sci., 102, 48–58.
Abstract: In this paper, we provide a model to study the equilibrium outcome in a market characterized by the competition between two firms offering horizontally differentiated services, in a context where consumers are the basic unit of decision on the demand side and are related through a social network. In the model, we consider that consumers make optimal choice, participation, and consumption decisions, while firms optimally decide their tariffs, eventually using nonlinear pricing schemes. This approach permits us to identify and model two different kinds of network externalities, one associated with tariff-mediated network externalities and the other related to participation network externalities. We apply the model to the telecommunication industry, where we study the impact of alternative regulatory interventions. We provide numerical evidence suggesting that policies designed to reduce horizontal differentiation might be more effective than those designed to limit access charges; this result seems robust to the presence of different forms of price discrimination. We should interpret these findings cautiously due to the existence of potential implementation costs for each policy. (C) 2019 Elsevier B.V. All rights reserved.



Henriquez, P. A., & Ruz, G. A. (2019). Noise reduction for near-infrared spectroscopy data using extreme learning machines. Eng. Appl. Artif. Intell., 79, 13–22.
Abstract: Near-infrared (NIR) spectroscopy is an effective approach to predict chemical properties and is typically applied in the petrochemical, agricultural, medical, and environmental sectors. NIR spectra are usually of very high dimension and contain huge amounts of information. Most of the information is irrelevant to the target problem and some is simply noise. Thus, it is not an easy task to discover the relationship between NIR spectra and the predictive variable. However, this kind of regression analysis is one of the main topics of machine learning, so machine learning techniques play a key role in NIR-based analytical approaches. Preprocessing of NIR spectral data has become an integral part of chemometrics modeling; the objective of the preprocessing is to remove physical phenomena (noise) in the spectra in order to improve the regression or classification model. In this work, we propose to reduce the noise using extreme learning machines, which have shown good predictive performance in regression applications as well as in large-dataset classification tasks. For this, we use a novel algorithm called CPL-ELM, which has a parallel architecture based on one nonlinear layer in parallel with another nonlinear layer. Using the soft-margin loss function concept, we incorporate two Lagrange multipliers with the objective of accounting for the noise of spectral data. Six real-life datasets were analyzed to illustrate the performance of the developed models. The results for regression and classification problems confirm the advantages of using the proposed method in terms of root mean square error and accuracy.
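The extreme-learning-machine building block the abstract relies on can be sketched in a few lines: a fixed random hidden layer plus output weights solved by ridge-regularized least squares. This is a generic ELM for regression, not the paper's CPL-ELM architecture or its Lagrange-multiplier loss; all parameter values below are illustrative.

```python
import math
import random

def elm_fit(X, y, n_hidden=10, reg=1e-3, seed=0):
    """Basic extreme learning machine: random tanh hidden layer; output
    weights from the ridge normal equations (H^T H + reg*I) beta = H^T y."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.gauss(0, 1) for _ in range(n_hidden)]
    H = [[math.tanh(sum(w[j] * x[j] for j in range(d)) + bi)
          for w, bi in zip(W, b)] for x in X]
    n = n_hidden
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (reg if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(n)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (rhs[i] - sum(A[i][c] * beta[c] for c in range(i + 1, n))) / A[i][i]
    return W, b, beta

def elm_predict(model, x):
    """Evaluate the fitted ELM at a single input point x."""
    W, b, beta = model
    return sum(bk * math.tanh(sum(w[j] * x[j] for j in range(len(x))) + bi)
               for w, bi, bk in zip(W, b, beta))
```

The appeal of the ELM here is that only the output layer is trained, so fitting reduces to one linear solve; the paper builds its denoising scheme on top of this idea.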



Hernandez, N., Fuentes, A., Reszka, P., & Fernandez-Pello, A. C. (2019). Piloted ignition delay times on optically thin PMMA cylinders. Proc. Combust. Inst., 37(3), 3993–4000.
Abstract: The theory for predicting the ignition of solid fuels exposed to incident radiant heat fluxes has made it possible to obtain simple correlations of the ignition delay time with the incident heat flux, which are useful in practical engineering applications. However, the theory was developed under the assumption that radiation does not penetrate into the solid phase. In the case of semi-transparent solids, where the penetration of radiation plays an important role in the heating and subsequent ignition of the fuel, the predictions of the classical ignition theory are not applicable. A new theory for the piloted ignition of optically thin cylindrical fuels has been developed. The theory uses an integral method and an approximation of the radiative transfer equation within the solid to predict the heating of an inert solid. An exact and an approximate analytical solution are obtained. The predictions are compared with piloted ignition experiments on clear PMMA cylinders. The results indicate that, for optically thin media, the heating and ignition are not sensitive to the thermal conductivity of the solid but are highly dependent on the in-depth absorption coefficient. Using the approximate solution, the correlation 1/t_ig ∝ q̇''_inc was established. This correlation is adequate for engineering applications and allows the estimation of effective properties of the solid fuel. The form of the correlation is due to the integral method used in the solution of the heat equation, and does not imply that the semi-transparent solid behaves like a thermally thin material. The approximate solution presented in this article constitutes a useful tool for pencil-and-paper calculations and is an advancement in the understanding of solid-phase ignition processes. (C) 2018 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
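For reference, the contrast between the classical ignition correlation and the optically thin scaling reported above can be written out. The thermally thick expression below is the standard textbook result for opaque solids; the optically thin law is the one established in the paper, with its constant set by the in-depth absorption coefficient.

```latex
% Classical opaque, thermally thick solid: ignition when the surface
% reaches T_{ig}, giving
t_{ig} = \frac{\pi}{4}\, k\rho c \left(\frac{T_{ig}-T_0}{\dot q''_{inc}}\right)^{2}
\quad\Longrightarrow\quad
\frac{1}{\sqrt{t_{ig}}} \propto \dot q''_{inc}.

% Optically thin solid (this paper): radiation is absorbed in depth and
% the approximate integral solution yields instead
\frac{1}{t_{ig}} \propto \dot q''_{inc},
% with the proportionality constant governed by the in-depth absorption
% coefficient rather than the thermal conductivity.
```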



Hernandez, R., & Venegas, O. (2019). Distortion Theorems Associated with Schwarzian Derivative for Harmonic Mappings. Complex Anal. Oper. Theory, 13(4), 1783–1793.
Abstract: Let f be a complex-valued harmonic mapping defined in the unit disc D. The theorems of Chuaqui and Osgood (J Lond Math Soc 2:289–298, 1993), which assert that bounds on the size of the hyperbolic norm of the Schwarzian derivative of an analytic function f imply certain bounds for the distortion and growth of f, are extended to the harmonic case.
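For readers unfamiliar with the objects involved, the classical Schwarzian derivative and the hyperbolic norm being bounded are recalled below. In the harmonic case the mapping decomposes as f = h + \bar{g}, and the paper works with a Schwarzian defined through h and the dilatation g'/h'; its precise form is given there, so only the classical analytic notions are stated here.

```latex
% Schwarzian derivative of a locally univalent analytic function f:
S f = \left(\frac{f''}{f'}\right)' - \frac{1}{2}\left(\frac{f''}{f'}\right)^{2}.

% Hyperbolic (Nehari-type) norm on the unit disc \mathbb{D}:
\|S f\| = \sup_{z\in\mathbb{D}} \left(1-|z|^{2}\right)^{2} \, |S f(z)|.
```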



Hughes, S., Moreno, S., Yushimito, W. F., & Huerta-Canepa, G. (2019). Evaluation of machine learning methodologies to predict stop delivery times from GPS data. Transp. Res. Pt. C-Emerg. Technol., 109, 289–304.
Abstract: In last-mile distribution, logistics companies typically arrange and plan their routes based on broad estimates of stop delivery times (i.e., the time spent at each stop to deliver goods to final receivers). If these estimates are not accurate, the level of service is degraded, as the promised time window may not be satisfied. The purpose of this work is to assess the feasibility of machine learning techniques to predict stop delivery times. This is done by testing a wide range of machine learning techniques (including different types of ensembles) to (1) predict the stop delivery time and (2) determine whether the total stop delivery time will exceed a predefined time threshold (classification approach). For the assessment, all models are trained using information generated from GPS data collected in Medellin, Colombia, and compared to hazard duration models. The results are threefold. First, the assessment shows that regression-based machine learning approaches are not better than conventional hazard duration models with respect to the absolute errors of the predicted stop delivery times. Second, when the problem is addressed by a classification scheme, in which the prediction is aimed to guide whether a stop time will exceed a predefined threshold, a basic K-nearest-neighbor model outperforms hazard duration models and other machine learning techniques in both accuracy and F1 score (the harmonic mean of precision and recall). Third, the prediction of the exact duration can be improved by combining the classifiers and the prediction or hazard duration models in a two-level scheme (first classification, then prediction). However, the improvement depends largely on correct classification at the first level.
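The classifier that performs best in the study's second task is a basic K-nearest-neighbor vote, which can be sketched with the standard library alone. The paper's features and threshold come from GPS-derived stop data, so the toy inputs below are purely illustrative.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among the k nearest training points
    under Euclidean distance (math.dist, Python 3.8+)."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

In the paper's setting, `train_y` would be a binary label ("stop exceeds the time threshold" or not) and each row of `train_X` a vector of stop features; the two-level scheme then hands stops predicted to exceed the threshold to a separate duration model.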



Jara-Munoz, P., Guzman-Fierro, V., Arriagada, C., Campos, V., Campos, J. L., Gallardo-Rodriguez, J. J., et al. (2019). Low oxygen startup of partial nitrification-anammox process: mechanical or gas agitation? J. Chem. Technol. Biotechnol., 94(2), 475–483.
Abstract: BACKGROUND: Partial nitrification-anammox (PNA) is a widely recognized technology to remove nitrogen from different types of wastewater. Low oxygen concentration is the most commonly used strategy for PNA startup, but stability problems arise during operation; thus, in the present study the effects of the type of agitation, oxygenation, and shear stress on sensitivity, energy consumption, and performance were evaluated. Recognition of these parameters allows an informed choice in the design of an industrial process for nitrogen abatement. RESULTS: A mechanically agitated reactor (MAR) was compared, over a stable long-term operation period, with a bubble column reactor (BCR); both were started under low dissolved oxygen concentration conditions. MAR microbial assays confirmed the destruction of the nitrifying layer and an imbalance of the entire process when the oxygen-to-nitrogen loading ratio (O2:N) decreased by 25%. The granule sedimentation rate and specific anammox activity were 17% and 87% higher (respectively) in the BCR. Economic analysis determined that the cost of aeration for the MAR and for the BCR was 23.8% and 1% of the total PNA energy consumption, respectively. CONCLUSIONS: The BCR showed better results than the MAR. This study highlights the importance of the type of agitation, oxygenation, and shear stress for industrial-scale PNA designs. (c) 2018 Society of Chemical Industry



Jarur, M. C., Dumais, J., & Rica, S. (2019). Limiting speed for jumping. C. R. Mec., 347(4), 305–317.
Abstract: General mechanical considerations provide an upper bound for the takeoff velocity of any jumper, animate or inanimate, rigid or soft body, animal or plant. The takeoff velocity is driven by the ratio of released energy to body mass. Further, the mean reaction force on a rigid platform during pushoff is inversely proportional to the characteristic size of the jumper. These general considerations are illustrated in the context of Alexander's jumper model, which can be solved exactly and which shows excellent agreement with the mechanical results. (C) 2019 Academie des sciences. Published by Elsevier Masson SAS. All rights reserved.
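The two general claims in the abstract follow from elementary mechanics; a hedged sketch of the bounds, with E the released energy, m the body mass, and L the characteristic body size:

```latex
% Energy balance: at most all released energy E becomes kinetic energy,
\frac{1}{2} m v_{to}^{2} \le E
\quad\Longrightarrow\quad
v_{to} \le \sqrt{\frac{2E}{m}},
% so the takeoff velocity is set by the energy-to-mass ratio.

% Mean pushoff force: the work E is delivered over a pushoff distance of
% order the body size L, hence
\bar F \sim \frac{E}{L},
% i.e. at fixed E/m the relative force \bar F / (mg) grows as size shrinks.
```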



Kossakowski, D., Espinoza, N., Brahm, R., Jordan, A., Henning, T., Rojas, F., et al. (2019). TOI-150b and TOI-163b: two transiting hot Jupiters, one eccentric and one inflated, revealed by TESS near and at the edge of the JWST CVZ. Mon. Not. Roy. Astron. Soc., 490(1), 1094–1110.
Abstract: We present the discovery of TYC 9191-519-1 b (TOI-150b, TIC 271893367) and HD 271181 b (TOI-163b, TIC 179317684), two hot Jupiters initially detected using 30-min cadence Transiting Exoplanet Survey Satellite (TESS) photometry from Sector 1 and thoroughly characterized through follow-up photometry (CHAT, Hazelwood, LCO/CTIO, El Sauce, TRAPPIST-South), high-resolution spectroscopy (FEROS, CORALIE), and speckle imaging (Gemini/DSSI), confirming the planetary nature of the two signals. A simultaneous joint fit of photometry and radial velocity using a new fitting package, juliet, reveals that TOI-150b is a 1.254 ± 0.016 R_J, massive (2.61 +0.19/-0.12 M_J) hot Jupiter in a 5.857-d orbit, while TOI-163b is an inflated (R_P = 1.478 +0.022/-0.029 R_J, M_P = 1.219 ± 0.11 M_J) hot Jupiter on a P = 4.231-d orbit; both planets orbit F-type stars. A particularly interesting result is that TOI-150b shows an eccentric orbit (e = 0.262 +0.045/-0.037), which is quite uncommon among hot Jupiters. We estimate that this is consistent, however, with the circularization timescale, which is slightly larger than the age of the system. These two hot Jupiters are both prime candidates for further characterization; in particular, both are excellent candidates for determining spin-orbit alignments via the Rossiter-McLaughlin (RM) effect and for characterizing atmospheric thermal structures using secondary eclipse observations, considering they are both located close to the James Webb Space Telescope (JWST) Continuous Viewing Zone (CVZ).
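As a quick back-of-the-envelope check on such systems, Kepler's third law gives the orbital separation from the period alone. The stellar mass below (1.4 solar masses, typical of an F-type dwarf) is an assumed illustrative value, not a parameter taken from the paper.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def semi_major_axis_au(period_days, stellar_mass_msun):
    """Semi-major axis from Kepler's third law, a^3 = G M P^2 / (4 pi^2),
    neglecting the planet's mass relative to the star's."""
    p = period_days * 86400.0
    a = (G * stellar_mass_msun * M_SUN * p ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
    return a / AU

# TOI-150b: P = 5.857 d around an assumed 1.4 M_sun F star -> a of order 0.07 au,
# i.e. a close-in orbit where tidal circularization timescales become relevant.
```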



Lagos, R., Canessa, E., & Chaigneau, S. E. (2019). Modeling stereotypes and negative self-stereotypes as a function of interactions among groups with power asymmetries. J. Theory Soc. Behav., 49(3), 312–333.
Abstract: Stereotypes are among the most researched topics in social psychology. Within this context, negative self-stereotypes pose a particular challenge for theories. In the current work, we propose a model suggesting that negative self-stereotypes can theoretically be accounted for by the need to communicate in a social system made up of groups with unequal power. Because our theory is dynamic, probabilistic, and interactionist, we use a computational simulation technique to show that the proposed model is able to reproduce the phenomenon of interest, to provide novel accounts of related phenomena, and to suggest novel empirical predictions. We describe our computational model and our variables' dynamic behavior and interactions, and link our analyses to the literature on stereotypes and self-stereotypes, the stability of stereotypes (in particular, gender and racial stereotypes), the effects of power asymmetries, and the effects of intergroup contact.



Liedloff, M., Montealegre, P., & Todinca, I. (2019). Beyond Classes of Graphs with “Few” Minimal Separators: FPT Results Through Potential Maximal Cliques. Algorithmica, 81(3), 986–1005.
Abstract: Let P(G,X) be a property associating a boolean value to each pair (G,X), where G is a graph and X is a vertex subset. Assume that P is expressible in counting monadic second-order logic (CMSO) and let t be an integer constant. We consider the following optimization problem: given an input graph G=(V,E), find subsets X ⊆ F ⊆ V such that the treewidth of G[F] is at most t, property P(G[F],X) is true, and X is of maximum size under these conditions. The problem generalizes many classical algorithmic questions, e.g., Longest Induced Path, Maximum Induced Forest, Independent H-Packing, etc. Fomin et al. (SIAM J Comput 44(1):54–87, 2015) proved that the problem is polynomial on the class of graphs G_poly, i.e., the graphs having at most poly(n) minimal separators for some polynomial poly. Here we consider the class G_poly + kv, formed by graphs of G_poly to which we add a set of at most k vertices with arbitrary adjacencies, called a modulator. We prove that the generic optimization problem is fixed-parameter tractable on G_poly + kv, with parameter k, if the modulator is also part of the input.



Lopez, D., Leiva, A. M., Arismendi, W., & Vidal, G. (2019). Influence of design and operational parameters on the pathogens reduction in constructed wetland under the climate change scenario. Rev. Environ. Sci. BioTechnol., 18(1), 101–125.
Abstract: Under the climate change scenario, constructed wetlands (CWs), as engineered systems for treating domestic wastewater, will face different challenges. Some of them are: (a) the increase of pathogen concentrations in wastewater due to the rise of global temperature; (b) higher precipitation that can cause an increase of pathogens due to runoff; (c) the reuse of treated wastewater related to water scarcity. These problems can affect the capacity of CWs for removing pathogens. In this context, the objective of this review is to provide an overview of the influence of design and operational parameters on pathogen reduction in CWs. To accomplish this purpose, the published information (>30 studies) about the reduction of pathogens and the operational and design parameters in different CW configurations was gathered. With these data, statistical analyses were performed considering the most relevant variables that significantly influence the removal of pathogens in CWs. For this, principal component analyses (PCA) were performed to determine, separately, the correlation of operational parameters with fecal coliform (FC) and total coliform (TC) removal. The results of the PCA showed that FC and TC removal were correlated positively with the mass removal rates of chemical oxygen demand (COD) and biological oxygen demand (BOD5), total suspended solids (TSS) removal, and the size of the support medium. This study is the first approach that jointly analyzes the design and operational parameters that influence pathogen removal in CWs. For this reason, these parameters, and the increase in microorganism concentrations due to climate change, have to be considered in the future design of CWs.



MacLean, S., MontalvaMedel, M., & Goles, E. (2019). Block invariance and reversibility of one dimensional linear cellular automata. Adv. Appl. Math., 105, 83–101.
Abstract: Consider a one-dimensional, binary cellular automaton f (the CA rule), where its n nodes are updated according to a deterministic block update (blocks that group all the nodes; the update order is given by the order of the blocks from left to right, and nodes inside a block are updated synchronously). A CA rule is block invariant over a family F of block updates if its set of periodic points does not change, whatever block update of F is considered. In this work, we study the block invariance of linear CA rules by means of the reversibility of the automaton, because this property implies that every configuration has a unique predecessor and is therefore periodic. Specifically, we extend the study of reversibility done for the Wolfram elementary CA rules 90 and 150, and we analyze the reversibility of linear rules with neighbourhood radius 2 by using matrix algebra techniques. (C) 2019 Elsevier Inc. All rights reserved.
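The matrix-algebra approach mentioned in the abstract reduces reversibility of a linear rule to invertibility of its update matrix over GF(2). A minimal sketch for rule 90 with null (fixed-zero) boundaries is below; the paper itself treats richer settings (periodic configurations, block updates, radius 2), so this is only the simplest instance of the technique.

```python
def rule90_matrix(n):
    """Update matrix of elementary CA rule 90 over GF(2) with null
    (fixed-zero) boundaries: cell i becomes x[i-1] XOR x[i+1]."""
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        if i > 0:
            M[i][i - 1] = 1
        if i < n - 1:
            M[i][i + 1] = 1
    return M

def gf2_invertible(M):
    """Gaussian elimination mod 2; True iff the square matrix has full rank,
    i.e. the linear CA is reversible."""
    M = [row[:] for row in M]
    n = len(M)
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            return False          # a non-pivot column: rank < n
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        r += 1
    return True
```

With this variant the rule-90 matrix is invertible exactly when n is even (the tridiagonal determinant satisfies D_n = D_{n-2} over GF(2), with D_1 = 0, D_2 = 1); with periodic boundaries each row has two ones, so the rows always sum to zero and the matrix is never invertible.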



Markou, G., & Genco, F. (2019). Seismic assessment of small modular reactors: NuScale case study for the 8.8 Mw earthquake in Chile. Nucl. Eng. Des., 342, 176–204.
Abstract: Reducing greenhouse gas emissions and improving the sustainability of energy production are paramount in Chile's 2050 energy policy. This, though, is difficult to achieve without some degree of nuclear power involvement, given that the geography of the country includes many areas that are practically off-grid and cannot be developed and financially exploited due to the lack of basic commodities such as water and electricity. Recently, small modular reactors (SMRs) have gained considerable attention from both researchers and world policy makers for their promised capabilities of enhanced safety systems, affordable costs, and competitive scalability. SMRs can be located in remote areas and are currently being actively developed in Argentina, USA, Brazil, Russia, China, South Korea, Japan, India, and South Africa. Chile's 2010 earthquake and Fukushima's 2011 nuclear disaster have significantly increased both the population's fear of and opposition to nuclear power because of the possible consequences of radiation on people's lives. This paper aims to study the seismic resistance of a typical nuclear structure, as currently proposed for small modular reactors, under earthquake conditions typically seen in Chile. Since many designs are under study, a NuScale reactor from the USA is analyzed under these extreme loading conditions. The major advantages of the NuScale reactor are its power scalability (it can go from 1 to 12 reactor cores, producing from 60 to 720 MWe), limited nuclear fuel concentration, modules allocated below grade, and high-strength steel containments fully immersed in water. The cooling effect beyond the Design Basis Accident is ensured indefinitely, which induces a significant safety factor in the case of an accident.
For the purpose of this study, a detailed 3D structural model was developed, reproducing the NuScale reactor's reinforced concrete framing system, and nonlinear analyses were performed to assess the overall mechanical response of the structure. The framing system was tested under high seismic excitations typically seen in Chile (Mw > 8.0), showing high resistance and the capability to cope with the developed forces due to its design. Based on a Soil-Structure Interaction analysis, it was also found that the NuScale framing system manages to maintain a low stress level at the interaction surface between the foundation and the soil, and the structural system was found to be able to withstand significant earthquake loads. Finally, further investigation is deemed necessary in order to study the potential damage to the structure in the case of other hazards such as tsunami events, blast loads, etc.



Matus, O., Barrera, J., Moreno, E., & Rubino, G. (2019). On the Marshall-Olkin Copula Model for Network Reliability Under Dependent Failures. IEEE Trans. Reliab., 68(2), 451–461.
Abstract: The Marshall-Olkin (MO) copula model has emerged as the standard tool for capturing dependence between components in failure analysis in reliability. In this model, shocks arise at exponential random times, affecting one or several components and inducing a natural correlation in the failure process. However, because the number of parameters of the model grows exponentially with the number of components, MO suffers from the “curse of dimensionality.” MO models are usually intended to be applied to design a network before its construction; therefore, it is natural to assume that only partial information about failure behavior can be gathered, mostly from similar existing networks. To construct such an MO model, we propose an optimization approach to define the shocks' parameters in the MO copula, in order to match marginal failure probabilities and the correlations between these failures. To deal with the exponential number of parameters of this problem, we use a column-generation technique. We also discuss additional criteria that can be incorporated to obtain a suitable model. Our computational experiments show that the resulting MO model produces a close estimation of the network reliability, especially when the correlation between component failures is significant.
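The MO shock construction itself is easy to simulate: each shock is an exponential clock that kills every component in its subset, and a component's failure time is the minimum over the shocks covering it. A minimal Monte Carlo sketch for a two-component parallel system follows; the shock subsets and rates are illustrative, and the paper's contribution (fitting the exponentially many shock rates by column generation) is not reproduced here.

```python
import math
import random

def sample_mo_failure_times(shocks, rng):
    """One draw from a Marshall-Olkin model: each shock is a pair
    (subset_of_components, rate); it fires at an exponential time and
    fails every component in its subset."""
    times = {}
    for subset, rate in shocks:
        t = rng.expovariate(rate)
        for comp in subset:
            times[comp] = min(times.get(comp, math.inf), t)
    return times

def mc_parallel_reliability(shocks, t, n_samples=20000, seed=7):
    """Monte Carlo estimate of P(component 1 or component 2 alive at t)."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n_samples):
        ft = sample_mo_failure_times(shocks, rng)
        if ft[1] > t or ft[2] > t:
            alive += 1
    return alive / n_samples
```

For shocks {1} at rate a, {2} at rate b, and the common shock {1,2} at rate c, the joint survival is P(T1>t, T2>t) = exp(-(a+b+c)t), so the parallel-system reliability e^{-(a+c)t} + e^{-(b+c)t} - e^{-(a+b+c)t} gives an exact value to check the estimator against.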



Mondschein, S., Yankovic, N., & Matus, O. (2019). Agedependent optimal policies for hepatitis C virus treatment. Int. Trans. Oper. Res., to appear, 27 pp.
Abstract: In recent years, highly effective treatments for the hepatitis C virus (HCV) have become available. However, the high prices of new treatments call for careful policy evaluation when considering economic constraints. Although the current medical advice is to administer the new therapies to all patients, economic and capacity constraints require an efficient allocation of these scarce resources. We use stochastic dynamic programming to determine the optimal policy for prescribing the new treatment based on the age and disease progression of the patient. We show that, in a simplified version of the model, new drugs should be administered to patients at a given level of fibrosis if they are within prespecified age limits; otherwise, a conservative approach of closely monitoring the evolution of the patient should be followed. We use a cohort of Spanish patients to study the optimal policy regarding costs and health indicators. For this purpose, we compare the performance of the optimal policy against a liberal policy of treating all sick patients. In this analysis, we achieve similar results in terms of the number of transplants, HCV-related deaths, and quality-adjusted life years, with a significant reduction in overall expenditure. Furthermore, the budget required during the first year of implementation when using the proposed methodology is only 12% of that when administering the treatment to all patients at once. Finally, we propose a method to prioritize patients when there is a shortage (surplus) in the annual budget constraint and, therefore, some recommended treatments must be postponed (added).
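The stochastic-dynamic-programming structure described above can be illustrated with a deliberately toy backward induction: the state is the fibrosis stage, and the action each year is to treat now or wait. All transition probabilities, QALY values, and costs below are invented for illustration; they are not the paper's calibrated Spanish-cohort inputs.

```python
def optimal_hcv_policy(horizon=30, cost_qaly=2.0, progress=0.1):
    """Toy backward induction. State = fibrosis stage 0..4. Each year an
    untreated patient gains a stage-dependent QALY and progresses one stage
    with probability `progress`; treating cures immediately (QALY 1.0/yr for
    the remaining years) at a one-off cost expressed in QALY units.
    Returns the value table V and the policy table ('treat' / 'wait')."""
    qaly = [1.0, 0.9, 0.8, 0.6, 0.4]   # illustrative annual QALY per stage
    n = len(qaly)
    V = [[0.0] * n for _ in range(horizon + 1)]
    policy = [[None] * n for _ in range(horizon)]
    for age in range(horizon - 1, -1, -1):
        for s in range(n):
            treat = (horizon - age) * 1.0 - cost_qaly
            nxt = min(s + 1, n - 1)
            wait = (qaly[s] + (1 - progress) * V[age + 1][s]
                    + progress * V[age + 1][nxt])
            V[age][s], policy[age][s] = max((treat, 'treat'), (wait, 'wait'))
    return V, policy
```

Even this toy version reproduces the qualitative message of the paper: treatment is prescribed when enough horizon remains to recoup its cost, and monitoring is optimal otherwise, which is exactly an age-dependent threshold policy.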



Nasirov, S., Cruz, E., Agostini, C. A., & Silva, C. (2019). Policy Makers' Perspectives on the Expansion of Renewable Energy Sources in Chile's Electricity Auctions. Energies, 12(21), 17 pp.
Abstract: Chile has become one of the first few countries where renewable sources compete directly with conventional generation in price-based auctions. Moreover, the results of energy auctions during the last few years show a remarkable transition from conventional fossil fuels to renewable energies. In fact, the 2017 energy auction, to provide energy to customers of distribution companies, achieved a massive expansion in renewable technology at one of the lowest prices in the world. These positive results prompted the question of whether they were permanent or temporary, driven by factors with limited effects. In this regard, this paper studies the key factors that drove the significant rise of renewable technologies in Chilean energy auctions, drawing valuable lessons for regulators, not only in Chile, but also in the region and the world. For this purpose, we applied a well-proven method based on a hybrid multi-criteria decision-making model to examine and prioritize the main drivers of the expansion of renewables in auctions. The results showed that some specific characteristics of the auction design, particularly the hourly supply blocks, the lead time for project construction, and the contract duration, were the most significant drivers of the expansion of renewables in energy auctions. Moreover, the results showed that, provided the auction design accommodates such drivers, solar energy ends up as the most attractive technology in the Chilean auctions. The research also shows, through a probabilistic sensitivity analysis, that the main findings are robust.
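A common building block of hybrid multi-criteria decision-making models of this kind is the AHP priority vector, often approximated by the row geometric means of a pairwise-comparison matrix. The paper's actual hybrid model may differ, so this is only a sketch of the generic prioritization step.

```python
import math

def ahp_weights(pairwise):
    """Priority vector via the row geometric-mean approximation to the
    principal eigenvector of a pairwise-comparison matrix. For a perfectly
    consistent matrix (a_ij = w_i / w_j) it recovers the weights exactly."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

In a study like this one, the rows and columns would index candidate drivers (supply-block design, lead time, contract duration, and so on), with entries elicited from expert judgments on a ratio scale.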



Navarro, H., Marco, L. M., Araneda, A. A., & Bennun, L. (2019). Spatial distribution of Si in Pinus Insigne (Pinus radiata) Wood using micro XRF by Synchrotron Radiation. J. Wood Chem. Technol., 39(3), 187–198.
Abstract: Silicon, while not an essential element, is known to play positive roles in certain plant species. For instance, it has been recognized to protect them from biotic and abiotic stress. Because certain species accumulate this element in their tissues, the determination of its concentration is of importance in different disciplines, such as dendrology, plant physiology, forest management, agroecology, and the wood industry. Usually, its quantification is preceded by a series of digestion steps that, aside from being time-consuming and contamination-prone, prevent mapping the spatial distribution of the element across the sample. In this research, samples of Pinus radiata wood were studied using a synchrotron radiation source that allowed direct scanning of the surface without any treatment, and the determination of silicon as a function of position and tree ring, using micro X-ray fluorescence (mu-XRF). A quantification method based on the fundamental-parameters approach was evaluated. It was found that silicon concentration increases near the latewood ring zones, showing a periodic behavior related to seasonal environmental events.



O'Ryan, R., Benavides, C., Diaz, M., San Martin, J. P., & Mallea, J. (2019). Using probabilistic analysis to improve greenhouse gas baseline forecasts in developing country contexts: the case of Chile. Clim. Policy, 19(3), 299–314.
Abstract: In this paper, initial steps are presented toward characterizing, quantifying, incorporating, and communicating uncertainty by applying a probabilistic analysis to country-wide emission baseline forecasts, using Chile as a case study. Most GHG emission forecasts used by regulators are based on bottom-up deterministic approaches. Uncertainty is usually incorporated through sensitivity analysis and/or the use of different scenarios. However, much of the available information on uncertainty is not systematically included. The deterministic approach also gives a wide range of variation in values without a clear sense of the probability of the expected emissions, making it difficult to establish both the mitigation contributions and the subsequent policy prescriptions for the future. To improve on this practice, we have systematically included uncertainty in a bottom-up approach, incorporating it in key variables that affect expected GHG emissions, using readily available information, and establishing expected baseline emission trajectories rather than scenarios. The resulting emission trajectories make the probability percentiles explicit, reflecting uncertainties as well as possible using readily available information in a manner that is relevant to the decision-making process. Additionally, for the case of Chile, contradictory deterministic results are eliminated, and it is shown that, whereas under a deterministic approach Chile's mitigation ambition does not seem high, the probabilistic approach suggests this is not necessarily the case. It is concluded that using a probabilistic approach allows a better characterization of uncertainty using existing data and modelling capacities that are usually weak in developing country contexts. Key policy insights: Probabilistic analysis allows uncertainty to be incorporated systematically into key variables for baseline greenhouse gas emission scenario projections. By using probabilistic analysis, the policymaker can be better informed as to future emission trajectories. Probabilistic analysis can be done with readily available data and expertise, using the usual models preferred by policymakers, even in developing country contexts.
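A minimal version of the probabilistic baseline idea: sample the uncertain drivers (here GDP growth and carbon-intensity decline) for each year, propagate them through a simple emissions identity, and report percentile trajectories rather than a handful of discrete scenarios. All rates below are illustrative, not Chile's.

```python
import random

def simulate_baseline(e0, years, gdp_mu, gdp_sd, int_mu, int_sd,
                      n=5000, seed=42):
    """Monte Carlo GHG baseline: emissions grow with sampled annual GDP
    growth and shrink with sampled carbon-intensity improvement.
    Returns (p10, p50, p90) of final-year emissions."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        e = e0
        for _ in range(years):
            g = rng.gauss(gdp_mu, gdp_sd)   # GDP growth rate draw
            d = rng.gauss(int_mu, int_sd)   # intensity decline rate draw
            e *= (1 + g) * (1 - d)
        finals.append(e)
    finals.sort()
    pct = lambda p: finals[int(p * n)]
    return pct(0.10), pct(0.50), pct(0.90)
```

The same machinery applied year by year yields the percentile bands (rather than "high/low scenarios") that the paper argues communicate mitigation ambition more clearly.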



Osorio, H., Nasirov, S., Agostini, C. A., & Silva, C. (2019). Assessing the economic viability of energy storage systems in the Chilean electricity system: An empirical analysis from arbitrage revenue perspectives. J. Renew. Sustain. Energy, 11(1), 015901.
Abstract: The emergence of high penetration rates of renewable energies in power systems presents a serious challenge for energy generation and load balance maintenance to ensure power network stability and reliability. Energy Storage Systems (ESSs) could play a relevant role in facing these challenges, as the technologies have moved past the demo and prototype phases into a wide market implementation phase. The only remaining barrier to their implementation is their cost, but even this barrier is quickly disappearing. In this paper, we address the financial feasibility of storage technologies in electricity systems. In particular, we evaluate whether such technologies are economically sustainable and how far they are from becoming viable. For this purpose, we consider the Chilean electricity system and evaluate the maximum possible arbitrage revenues that an ESS could achieve by benefiting from energy time shifting, reduction of transmission losses, and transmission upgrade deferral. The results show that the arbitrage revenues are still below the cost of storage systems. Further improvement in storage efficiency or a decrease in the cost of storage systems is still needed to make this type of investment financially viable in the near future.
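The arbitrage-revenue bound evaluated in the paper can be illustrated in its simplest form: one charge/discharge cycle per day, penalized by round-trip efficiency. The prices and efficiency below are illustrative, and a full evaluation like the paper's would also account for transmission losses and upgrade deferral.

```python
def daily_arbitrage(prices, efficiency=0.85):
    """Best single charge/discharge pair within one day's hourly prices:
    charge at hour b, discharge at a later hour s; revenue per MWh is
    efficiency * prices[s] - prices[b]. Returns 0.0 if no pair is profitable."""
    best = 0.0
    for b in range(len(prices)):
        for s in range(b + 1, len(prices)):
            best = max(best, efficiency * prices[s] - prices[b])
    return best
```

Summing this daily bound over a year of price data gives a crude upper bound on energy-time-shift revenue to set against annualized storage costs, which is the comparison driving the paper's conclusion.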

