Asenjo, F. A., & Hojman, S. A. (2022). Airy heat bullets. Eur. Phys. J. Plus, 137(10), 1201.
Abstract: New localized structured solutions for the three-dimensional linear heat (diffusion) equation are presented. These new solutions are written in terms of Airy functions. They are constructed as wave packet-like structures formed by a superposition of Bessel functions through the introduction of spectral functions. These diffusive solutions accelerate along their propagation direction, while in the plane orthogonal to it they retain their confined structure. These heat (diffusion) densities remain completely localized in space as they propagate, and may be considered the heat analogue of Airy light bullets.
Keywords: Gaussian light bullets; Waves; Beams; Generation
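As a minimal illustration of the mechanism (a one-dimensional sketch, not the paper's three-dimensional construction), the diffusion equation $u_t = u_{xx}$ admits the accelerating Airy solution

$$u(x,t) = \exp\!\left(xt + \tfrac{2}{3}t^{3}\right)\mathrm{Ai}\!\left(x + t^{2}\right),$$

which can be verified directly from the Airy equation $\mathrm{Ai}''(z) = z\,\mathrm{Ai}(z)$: both $u_t$ and $u_{xx}$ equal $(x + 2t^{2})\,u + 2t\,e^{xt + 2t^{3}/3}\,\mathrm{Ai}'(x + t^{2})$. The Airy peak travels along $x \approx -t^{2}$, i.e., it accelerates while keeping its profile, which is the behavior the abstract describes.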
|
Barroso, L., Munoz, F. D., Bezerra, B., Rudnick, H., & Cunha, G. (2021). Zero-Marginal-Cost Electricity Market Designs: Lessons Learned From Hydro Systems in Latin America Might Be Applicable for Decarbonization. IEEE Power Energy Mag., 19(1), 64–73.
Abstract: Large reductions in the cost of renewable energy technologies, particularly wind and solar, as well as various instruments used to achieve decarbonization targets (e.g., renewable mandates, renewable auctions, subsidies, and carbon pricing mechanisms) are driving the rapid growth of investments in these generation technologies worldwide.
|
Bergen, M., & Munoz, F. D. (2018). Quantifying the effects of uncertain climate and environmental policies on investments and carbon emissions: A case study of Chile. Energy Econ., 75, 261–273.
Abstract: In this article we quantify the effect of uncertainty in climate and environmental policies on transmission and generation investments, as well as on CO2 emissions, in Chile. We use a two-stage stochastic planning model with recourse or corrective investment options to find optimal portfolios of infrastructure both under perfect information and under uncertainty. Under a series of assumptions, this model is equivalent to the equilibrium of a much more complicated bi-level market model, where a transmission planner chooses investments first and generation firms invest afterwards. We find that optimal investment strategies differ in important ways depending on the policy scenario. By changing our assumption of how agents react to this uncertainty, we compute bounds on the cost that this uncertainty imposes on the system, which we estimate ranges between 3.2% and 5.7% of the minimum expected system cost of $57.6B, depending on whether or not agents consider uncertainty when choosing investments. We also find that, if agents choose investments using a stochastic planning model, uncertain climate policies can result in nearly 18% more CO2 emissions than the equilibrium levels observed under perfect information. Our results highlight the importance of credible and stable long-term regulations for investors in the electric power industry if the goal is to achieve climate and environmental targets in the most cost-effective manner and to minimize the risk of asset stranding.
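The two-stage structure the authors use can be illustrated with a toy capacity problem (all numbers invented; the actual model co-optimizes transmission and generation over many scenarios):

```python
import numpy as np

# Toy two-stage stochastic capacity problem (all numbers illustrative):
# first stage: build capacity x at unit cost c_inv;
# second stage: after demand d_s realizes, unmet demand is penalized.
c_inv, c_shed = 50.0, 400.0
demands = np.array([80.0, 100.0, 130.0])   # demand scenarios
probs   = np.array([0.3, 0.5, 0.2])        # scenario probabilities

def expected_cost(x):
    shed = np.maximum(demands - x, 0.0)    # recourse: shed what capacity misses
    return c_inv * x + probs @ (c_shed * shed)

candidates = np.linspace(0, 150, 1501)
best = candidates[np.argmin([expected_cost(x) for x in candidates])]
print(f"optimal capacity ~ {best:.1f}")    # hedges against the bad scenario
```

Evaluating `expected_cost` at the capacity that is optimal for the average demand, and comparing it with the stochastic optimum, gives the kind of cost-of-ignoring-uncertainty bound the authors estimate at 3.2%-5.7%.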
|
Cáceres, C., Heusser, B., Garnham, A., & Moczko, E. (2023). The Major Hypotheses of Alzheimer's Disease: Related Nanotechnology-Based Approaches for Its Diagnosis and Treatment. Cells, 12(23), 2669.
Abstract: Alzheimer's disease (AD) is a well-known chronic neurodegenerative disorder that leads to the progressive death of brain cells, resulting in memory loss and the loss of other critical body functions. In March 2019, one of the major pharmaceutical companies and its partners announced that there was currently no drug to cure AD and that all clinical trials of new candidates had been cancelled, leaving many people without hope. However, despite the clear message and startling reality, the research continued. Finally, in the last two years, the Food and Drug Administration (FDA) approved the first-ever medications to treat Alzheimer's, aducanumab and lecanemab. Despite researchers' support of this decision, there are serious concerns about their effectiveness and safety. The validation of aducanumab by the Centers for Medicare and Medicaid Services is still pending, and lecanemab was authorized without considering data from the phase III trials. Furthermore, numerous reports suggest that patients have died while undergoing extended treatment. While there is evidence that aducanumab and lecanemab may provide some relief to those suffering from AD, their impact remains a topic of ongoing research and debate within the medical community. The fact is that, even though there are considerable efforts regarding pharmacological treatment, no definitive cure for AD has been found yet. Nevertheless, it is strongly believed that modern nanotechnology holds promising solutions and effective clinical strategies for the development of diagnostic tools and treatments for AD. This review summarizes the major hallmarks of AD, its etiological mechanisms, and associated challenges. It explores existing diagnostic and therapeutic methods and the potential of nanotechnology-based approaches for recognizing and monitoring patients at risk of irreversible neuronal degeneration. Overall, it provides a broad overview for those interested in the evolving areas of clinical neuroscience, AD, and related nanotechnology. With further research and development, nanotechnology-based approaches may offer new solutions and hope for millions of people affected by this devastating disease.
|
Gacitua, M. A., Gonzalez, B., Majone, M., & Aulenta, F. (2014). Boosting the electrocatalytic activity of Desulfovibrio paquesii biocathodes with magnetite nanoparticles. Int. J. Hydrog. Energy, 39(27), 14540–14545.
Abstract: The production of reduced value-added chemicals and fuels using microorganisms as cheap cathodic electrocatalysts has recently attracted considerable attention. Robust and sustainable production is, however, still greatly hampered by a poor understanding of electron transfer mechanisms to microorganisms and the lack of strategies to improve and manipulate them. Here, we investigated the use of electrically conductive magnetite (Fe3O4) nanoparticles to improve the electrocatalytic activity of a H2-producing Desulfovibrio paquesii biocathode. Microbial biocathodes supplemented with a suspension of nanoparticles displayed increased H2 production rates and enhanced stability compared to unamended ones. Cyclic voltammetry confirmed that Faradaic currents involved in microbially catalyzed H2 evolution were enhanced by the addition of the nanoparticles. Possibly, the nanoparticles improve the extracellular electron path to the microorganisms by creating composite networks comprising mineral particles and microbial cells.
|
Gregor, C., Ashlock, D., Ruz, G. A., MacKinnon, D., & Kribs, D. (2022). A novel linear representation for evolving matrices. Soft Comput., 26(14), 6645–6657.
Abstract: A number of problems, from specifiers for Boolean networks to programs for quantum computers, can be encoded as matrices. The paper presents a novel family of linear, generative representations for evolving matrices. The matrices can be general or restricted to special classes of matrices such as permutation matrices, Hermitian matrices, or other groups of matrices with particular algebraic properties. These classes include the unitary matrices that encode quantum programs. This representation avoids the brittleness that arises in direct representations of matrices and gives the researcher substantial control over the part of matrix space being searched. The representation is demonstrated on a relatively simple matrix problem in automatic content generation, as well as on Boolean map induction and automatic quantum programming. The automatic content generation problem yields interesting results; the generative matrix representation yields worse fitness but a substantially greater variety of outcomes than a direct encoding, which is acceptable when generating content. The Boolean map experiments extend and confirm results demonstrating that the generative encoding is superior to a direct encoding for the transition matrix of a Boolean map. The quantum programming results are generally quite good, with poor performance on the simplest problems in two of the families of programming tasks studied. The viability of the new representation for evolutionary matrix induction is well supported.
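A hypothetical example of the kind of restricted generative representation the abstract mentions (not the paper's encoding): a genome of Givens rotations always decodes to an orthogonal matrix, so evolutionary operators can never produce an individual outside the target matrix class.

```python
import numpy as np

# A generative genome for orthogonal matrices (illustrative, not the
# paper's exact encoding): a list of Givens rotations (i, j, angle).
# Composing rotations guarantees every decoded matrix stays orthogonal,
# so mutation/crossover can never leave the target matrix class.
def decode(genome, n):
    M = np.eye(n)
    for i, j, theta in genome:
        G = np.eye(n)
        c, s = np.cos(theta), np.sin(theta)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s
        M = G @ M
    return M

rng = np.random.default_rng(0)
genome = [(0, 1, rng.uniform(0, 2*np.pi)), (1, 2, rng.uniform(0, 2*np.pi))]
Q = decode(genome, 3)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: orthogonality is preserved
```

The same idea extends to unitary matrices by composing parameterized complex rotations, which suggests why such encodings suit automatic quantum programming.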
|
Guevara, E., Babonneau, F., Homem-de-Mello, T., & Moret, S. (2020). A machine learning and distributionally robust optimization framework for strategic energy planning under uncertainty. Appl. Energy, 271, 18 pp.
Abstract: This paper investigates how the choice of stochastic approach and distribution assumptions impacts strategic investment decisions in energy planning problems. We formulate a two-stage stochastic programming model assuming different distributions for the input parameters and show that there is significant discrepancy among the associated stochastic solutions and other robust solutions published in the literature. To remedy this sensitivity issue, we propose a combined machine learning and distributionally robust optimization (DRO) approach which produces more robust and stable strategic investment decisions with respect to uncertainty assumptions. DRO is applied to deal with ambiguous probability distributions, and machine learning is used to restrict the DRO model to a subset of important uncertain parameters, ensuring computational tractability. Finally, we perform an out-of-sample simulation process to evaluate the performance of the solutions. The Swiss energy system is used as a case study throughout the paper to validate the approach.
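To make the DRO ingredient concrete, here is a minimal sketch (invented numbers; the paper's ambiguity sets and master model are considerably richer) of a worst-case expected cost over a total-variation ball around nominal scenario probabilities:

```python
import numpy as np

# Worst-case expected cost over a total-variation ambiguity ball of
# radius rho around nominal scenario probabilities p0 (a minimal DRO
# sketch; the paper's ambiguity sets and solver are more elaborate).
def worst_case_expectation(costs, p0, rho):
    p = p0.astype(float).copy()
    worst = np.argmax(costs)
    budget = min(rho, 1.0 - p[worst])
    p[worst] += budget                     # push mass onto the worst scenario
    for i in np.argsort(costs):            # take it from the cheapest ones
        if i == worst or budget <= 0:
            continue
        take = min(p[i], budget)
        p[i] -= take
        budget -= take
    return p @ costs

costs = np.array([10.0, 30.0, 90.0])       # cost of a decision per scenario
p0 = np.array([0.5, 0.3, 0.2])
print(worst_case_expectation(costs, p0, 0.1))  # 40.0 vs nominal 32.0
```

A decision is then chosen to minimize this worst-case expectation rather than the nominal one, which is what stabilizes investment decisions against distributional assumptions.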
|
Han, Z. Y., Chen, H., He, C. L., Dodbiba, G., Otsuki, A., Wei, Y. Z., et al. (2023). Nanobubble size distribution measurement by interactive force apparatus under an electric field. Sci. Rep., 13(1), 3663.
Abstract: Nanobubbles have been applied in many fields, such as environmental cleaning, material production, agriculture, and medicine. However, the measured nanobubble sizes differed among the measurement methods, such as dynamic light scattering, particle trajectory, and resonance mass methods. Additionally, the measurement methods were limited with respect to the bubble concentration, refractive index of liquid, and liquid color. Here, a novel interactive force measurement method for bulk nanobubble size measurement was developed by measuring the force between two electrodes filled with bulk nanobubble-containing liquid under an electric field when the electrode distance was changed in the nm scale with piezoelectric equipment. The nanobubble size was measured with a bubble gas diameter and also an effective water thin film layer covered with a gas bubble that was estimated to be approximately 10 nm based on the difference between the median diameter of the particle trajectory method and this method. This method could also be applied to the solid particle size distribution measurement in a solution.
|
Hojman, S. A., & Asenjo, F. A. (2020). Quantum particles that behave as free classical particles. Phys. Rev. A, 102(5), 052211.
Abstract: The existence of nonvanishing Bohm potentials, in the Madelung-Bohm version of the Schrödinger equation, allows for the construction of particular solutions for states of quantum particles interacting with nontrivial external potentials that propagate as free classical particles. Such solutions are constructed with phases which satisfy the classical Hamilton-Jacobi equation for free particles and whose probability densities propagate with constant velocity, as free classical particles do.
Keywords: Wave; Generation
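For context, substituting the Madelung form $\psi = \sqrt{\rho}\,e^{iS/\hbar}$ into the Schrödinger equation with external potential $V$ yields the continuity equation together with

$$\partial_t S + \frac{|\nabla S|^{2}}{2m} + V + Q = 0, \qquad Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},$$

where $Q$ is the Bohm potential. The solutions described in the abstract correspond to states in which $V + Q$ vanishes (or is constant), so that $S$ satisfies the free classical Hamilton-Jacobi equation $\partial_t S + |\nabla S|^{2}/2m = 0$.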
|
Inzunza, A., Munoz, F. D., & Moreno, R. (2021). Measuring the effects of environmental policies on electricity markets risk. Energy Econ., 102, 105470.
Abstract: This paper studies how environmental policies, such as renewable portfolio standards (RPS) and carbon taxes, might contribute to reducing risk exposure in the electricity generation sector. We illustrate this effect by first computing long-term market equilibria of the Chilean generation sector for the year 2035 using a risk-averse planning model, considering uncertainty in hydrological scenarios and fossil fuel prices as well as distinct levels of risk aversion, but assuming no environmental policies in place. We then compare these risk-averse equilibria to generation portfolios obtained by separately imposing several levels of RPS and carbon taxes in a market with risk-neutral firms. Our results show that the implementation of both policies can provide incentives for investments in portfolios of generation technologies that limit the risk exposure of the system, particularly when high levels of RPS (35%) or high carbon taxes (35 $/ton CO2) are applied. However, we find that in the case of a hydrothermal system, the market equilibria under RPS policies yield expected generation costs and risk levels (i.e., standard deviation of costs) that are closer to those of the efficient portfolios determined using a risk-averse planning model than the equilibria we find under the carbon tax.
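A stylized version of the risk comparison (illustrative numbers only; the paper's model is far richer):

```python
import numpy as np

# Cost risk of candidate generation portfolios across hydro/fuel-price
# scenarios. Rows: scenarios; columns: portfolios
# (e.g., thermal-heavy vs high-RPS).
scenario_costs = np.array([[100.0, 115.0],   # wet year, low fuel price
                           [140.0, 120.0],   # dry year, low fuel price
                           [210.0, 135.0]])  # dry year, high fuel price
probs = np.array([0.5, 0.3, 0.2])

mean = probs @ scenario_costs                       # expected system cost
std  = np.sqrt(probs @ (scenario_costs - mean)**2)  # dispersion = risk proxy
for name, m, s in zip(["thermal-heavy", "high-RPS"], mean, std):
    print(f"{name}: expected cost {m:.0f}, std {s:.1f}")
```

In this toy example the renewable-heavy portfolio trades a slightly different expected cost for a much smaller standard deviation, which is the trade-off the paper quantifies at the system level.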
|
Lagos, F., Klapp, M. A., & Toriello, A. (2023). Branch-and-price for routing with probabilistic customers. Comput. Ind. Eng., 183, 109429.
Abstract: We study the Vehicle Routing Problem with Probabilistic Customers (VRP-PC), a two-stage optimization model that is a fundamental building block within the broad family of stochastic routing problems. The problem is mainly motivated by logistics distribution networks in which customers receive frequent delivery services, and by the last-mile problem faced by companies such as UPS and FedEx. In a first stage, before customer service requests are realized, a dispatcher determines a set of vehicle routes serving all potential customer locations. In a second stage, occurring after all customer requests are observed, vehicles execute the planned routes, skipping the locations of customers not requiring service. The objective is to minimize the expected vehicle travel cost assuming known customer realization probabilities. We propose a column generation framework to solve the VRP-PC to a given optimality tolerance. Specifically, we present two novel algorithms, one that under-approximates a solution's expected cost and another that uses its exact expected cost. Each algorithm is equipped with a route pricing mechanism that iteratively improves the approximation precision of a route's reduced cost; this produces fast route insertions at the start of the algorithm and reaches termination conditions at the end of the execution. Compared to branch-and-cut algorithms for arc-based formulations, our framework can more readily incorporate sequence-dependent constraints, which are typically required in routing problems. We provide a priori and a posteriori performance guarantees for these algorithms, and demonstrate their effectiveness via a computational study on instances with realization probabilities ranging from 0.5 to 0.9.
Keywords: Vehicle routing; Probabilistic routing; Column generation
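The building block behind both algorithms is the expected cost of an a-priori route. A sketch of the exact evaluation under independent customer realizations (the classical probabilistic-TSP formula; the paper prices routes using exact and approximate versions of this quantity):

```python
import numpy as np

# Exact expected cost of one a-priori route when each customer i shows
# up independently with probability prob[i] and absent customers are
# skipped. A leg (a, b) is traveled iff both endpoints are present and
# every customer between them on the route is absent.
def expected_route_cost(dist, route, prob):
    # route: node indices starting and ending at the depot
    # prob: presence probability per node; the depot has probability 1
    total = 0.0
    for a in range(len(route) - 1):
        for b in range(a + 1, len(route)):
            skip = np.prod([1.0 - prob[route[k]] for k in range(a + 1, b)])
            total += dist[route[a], route[b]] * prob[route[a]] * prob[route[b]] * skip
    return total

dist = np.array([[0, 4, 5, 3],
                 [4, 0, 2, 6],
                 [5, 2, 0, 4],
                 [3, 6, 4, 0]], dtype=float)
prob = np.array([1.0, 0.9, 0.5, 0.9])      # node 0 is the depot
print(expected_route_cost(dist, [0, 1, 2, 3, 0], prob))
```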
|
Letelier, O. R., Espinoza, D., Goycoolea, M., Moreno, E., & Munoz, G. (2020). Production Scheduling for Strategic Open Pit Mine Planning: A Mixed-Integer Programming Approach. Oper. Res., 68(5), 1425–1444.
Abstract: Given a discretized representation of an ore body known as a block model, the open pit mining production scheduling problem that we consider consists of defining which blocks to extract, when to extract them, and how or whether to process them, in such a way as to comply with operational constraints and maximize net present value. Although it has been established that this problem can be modeled with mixed-integer programming, the number of blocks used to represent real-world mines (millions) has made solving large instances nearly impossible in practice. In this article, we introduce a new methodology for tackling this problem and conduct computational tests using real problem sets ranging in size from 20,000 to 5,000,000 blocks and spanning 20 to 50 time periods. We consider both direct block scheduling and bench-phase scheduling problems, with capacity, blending, and minimum production constraints. Using new preprocessing and cutting-plane techniques, we are able to reduce the linear programming relaxation value by up to 33%, depending on the instance. Then, using new heuristics, we are able to compute feasible solutions with an average gap of 1.52% relative to the previously computed bound. Moreover, after four hours of running a customized branch-and-bound algorithm on the problems with larger gaps, we are able to further reduce the average gap from 1.52% to 0.71%.
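A skeletal version of the underlying MIP (simplified; the paper also handles blending, bench-phase scheduling, and minimum production constraints) uses variables $x_{bt} = 1$ if block $b$ has been extracted by period $t$:

$$\max \sum_{b,t} \frac{v_b}{(1+r)^{t}}\left(x_{bt} - x_{b,t-1}\right) \quad \text{s.t.} \quad x_{bt} \le x_{b't} \ \ \forall\, b' \in \mathrm{pred}(b), \qquad x_{b,t-1} \le x_{bt}, \qquad \sum_b w_b \left(x_{bt} - x_{b,t-1}\right) \le C_t, \qquad x_{bt} \in \{0,1\},$$

where $v_b$ is the block value, $r$ the discount rate, $w_b$ the block tonnage, $C_t$ the period capacity, and $\mathrm{pred}(b)$ the blocks that must be removed before $b$. With millions of blocks even the LP relaxation is hard, which is what the preprocessing, cutting planes, and heuristics address.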
|
Millan, C., Vivanco, J. F., Benjumeda-Wijnhoven, I. M., Bjelica, S., & Santibanez, J. F. (2018). Mesenchymal Stem Cells and Calcium Phosphate Bioceramics: Implications in Periodontal Bone Regeneration. Adv.Exp.Med.Biol., 1107, 91–112.
Abstract: In orthopedic medicine, a feasible reconstruction of bone structures remains one of the main challenges both for healthcare and for improvement of patients' quality of life. There is a growing interest in mesenchymal stem cells (MSCs) medical application, due to their multilineage differentiation potential, and tissue engineering integration to improve bone repair and regeneration. In this review we will describe the main characteristics of MSCs, such as osteogenesis, immunomodulation and antibacterial properties, key parameters to consider during bone repair strategies. Moreover, we describe the properties of calciumphosphate (CaP) bioceramics, which demonstrate to be useful tools in combination with MSCs, due to their biocompatibility, osseointegration and osteoconduction for bone repair and regeneration. Also, we overview the main characteristics of dental cavity MSCs, which are promising candidates, in combination with CaP bioceramics, for bone regeneration and tissue engineering. The understanding of MSCs biology and their interaction with CaP bioceramics and other biomaterials is critical for orthopedic surgical bone replacement, reconstruction and regeneration, which is an integrative and dynamic medical, scientific and bioengineering field of research and biotechnology.
|
Moreno, S., Neville, J., & Kirshner, S. (2018). Tied Kronecker Product Graph Models to Capture Variance in Network Populations. ACM Trans. Knowl. Discov. Data, 12(3), 40 pp.
Abstract: Much of the past work on mining and modeling networks has focused on understanding the observed properties of single example graphs. However, in many real-life applications it is important to characterize the structure of populations of graphs. In this work, we analyze the distributional properties of probabilistic generative graph models (PGGMs) for network populations. PGGMs are statistical methods that model the network distribution and match common characteristics of real-world networks. Specifically, we show that most PGGMs cannot reflect the natural variability in graph properties observed across multiple networks because their edge generation process assumes independence among edges. Then, we propose the mixed Kronecker Product Graph Model (mKPGM), a scalable generalization of KPGMs that uses tied parameters to increase the variability of the sampled networks while preserving the edge probabilities in expectation. We compare mKPGM to several other graph models. The results show that learned mKPGMs accurately represent the characteristics of real-world networks, while also effectively capturing the natural variability in network structure.
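For orientation, the baseline KPGM edge-generation process looks as follows (a toy sketch; mKPGM's tied parameters change how the Kronecker levels are sampled):

```python
import numpy as np

# Baseline KPGM: Kronecker-power a small seed matrix into edge
# probabilities, then sample every edge independently. This edge
# independence is what the paper shows underestimates variability
# across networks; mKPGM ties parameters across levels to fix it.
rng = np.random.default_rng(1)
theta = np.array([[0.9, 0.5],
                  [0.5, 0.2]])   # seed of edge-probability parameters
k = 3                            # number of Kronecker levels -> 2^k nodes

P = theta.copy()
for _ in range(k - 1):
    P = np.kron(P, theta)        # 8x8 matrix of edge probabilities

A = (rng.random(P.shape) < P).astype(int)   # one sampled network
print(A.sum(), "edges out of", P.size, "possible")
```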
|
Moreno, S., Pfeiffer, J. J., & Neville, J. (2018). Scalable and exact sampling method for probabilistic generative graph models. Data Min. Knowl. Discov., 32(6), 1561–1596.
Abstract: Interest in modeling complex networks has fueled the development of multiple probabilistic generative graph models (PGGMs). PGGMs are statistical methods that model the network distribution and match common characteristics of real-world networks. Recently, scalable sampling algorithms for well-known PGGMs made the analysis of large-scale, sparse networks feasible for the first time. However, it has been demonstrated that these scalable sampling algorithms do not sample from the original underlying distribution, and sometimes produce very unlikely graphs. To address this, we extend the algorithm proposed in Moreno et al. (in: IEEE 14th international conference on data mining, pp 440-449, 2014) for a single model and develop a general solution for a broad class of PGGMs. Our approach exploits the fact that PGGMs are typically parameterized by a small set of unique probability values; this enables fast generation via independent sampling of groups of edges with the same probability value. By sampling within groups, we remove bias due to conditional sampling and probability reallocation. We show that our grouped sampling methods are both provably correct and efficient. Our new algorithm reduces time complexity by avoiding the expensive rejection sampling step previously necessary, and we demonstrate its generality by outlining implementations for six different PGGMs. We conduct theoretical analysis and empirical evaluation to demonstrate the strengths of our algorithms. We conclude by sampling a network with over a billion edges in 95 s on a single processor.
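The grouped-sampling idea can be sketched directly from the abstract's description (a toy version that materializes the probability matrix, which the paper's scalable algorithms avoid):

```python
import numpy as np

# Grouped edge sampling: all cells that share one probability value
# form a group; draw how many of them become edges with a single
# binomial, then choose which ones uniformly without replacement.
rng = np.random.default_rng(2)

def sample_by_groups(P):
    edges = np.zeros_like(P, dtype=int)
    flat = P.ravel()
    for p in np.unique(flat):
        idx = np.flatnonzero(flat == p)          # the group for value p
        k = rng.binomial(len(idx), p)            # how many edges it yields
        chosen = rng.choice(idx, size=k, replace=False)
        edges.ravel()[chosen] = 1
    return edges

theta = np.array([[0.9, 0.5], [0.5, 0.2]])
P = np.kron(np.kron(theta, theta), theta)        # small KPGM instance
print(sample_by_groups(P).sum(), "edges")
```

Because each group is sampled with one binomial draw plus a uniform choice, the per-group work scales with the number of realized edges rather than the number of possible ones, which is what makes billion-edge sampling feasible.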
|
Munoz, F. D., & Mills, A. D. (2015). Endogenous Assessment of the Capacity Value of Solar PV in Generation Investment Planning Studies. IEEE Trans. Sustain. Energy, 6(4), 1574–1585.
Abstract: There exist several different reliability- and approximation-based methods to determine the contribution of solar resources toward resource adequacy. However, most of these approaches require knowing in advance the installed capacities of both conventional and solar generators. This is a complication, since generator capacities are actually decision variables in capacity planning studies. In this paper, we study the effect of time resolution and solar PV penetration using a planning model that accounts for the full distribution of generator outages and solar resource variability. We also describe a modification of a standard deterministic planning model that enforces a resource adequacy target through a reserve margin constraint. Our numerical experiments show that at least 50 days' worth of data are necessary to approximate the results of the full-resolution model with a maximum error of 2.5% on costs and capacity. We also show that the amount of displaced capacity of conventional generation decreases rapidly as the penetration of solar PV increases. We find that using an exogenously defined and constant capacity value based on time-series data can yield relatively accurate results for small penetration levels. For higher penetration levels, the modified deterministic planning model better captures avoided costs and the decreasing value of solar PV.
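A crude, exogenous version of a capacity value calculation (synthetic data; the paper's point is precisely that planning models should compute this endogenously, with outage distributions) measures how much PV reduces the net-load peak per MW installed:

```python
import numpy as np

# Approximation-based capacity value: reduction of the net-load peak
# per MW of installed PV, on synthetic solar-correlated demand.
rng = np.random.default_rng(3)
hours = 8760
solar_profile = np.clip(np.sin(np.linspace(0, 365 * 2 * np.pi, hours)), 0, 1)
load = 700 + 250 * solar_profile + 150 * rng.random(hours)   # MW

for pv_mw in [100, 500, 1500]:
    net = load - pv_mw * solar_profile
    cap_value = (load.max() - net.max()) / pv_mw  # fraction of nameplate
    print(f"{pv_mw:5d} MW PV -> capacity value {cap_value:.2f}")
```

Even in this toy setting the per-MW value falls as penetration grows, because the net-load peak migrates to hours with little or no solar output, which is the effect the abstract describes.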
|
Munoz, F. D., van der Weijde, A. H., Hobbs, B. F., & Watson, J. P. (2017). Does risk aversion affect transmission and generation planning? A Western North America case study. Energy Econ., 64, 213–225.
Abstract: We investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive, risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare, after which risk-averse generators maximize profits. This model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicitly considering risk aversion in planning models.
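The risk measure in the objective can be made concrete with an empirical sketch (made-up cost samples; in the paper the CVaR term sits inside a large stochastic program):

```python
import numpy as np

# Empirical CVaR in the Rockafellar-Uryasev form, and the weighted
# expected-cost / CVaR objective the abstract describes.
def cvar(costs, alpha):
    eta = np.quantile(costs, alpha)                # value-at-risk level
    tail = np.maximum(costs - eta, 0.0)
    return eta + tail.mean() / (1.0 - alpha)       # mean of the worst tail

costs = np.array([90.0, 100.0, 105.0, 120.0, 300.0])  # scenario costs
alpha, gamma = 0.8, 0.4                                # tail level, risk weight
objective = (1 - gamma) * costs.mean() + gamma * cvar(costs, alpha)
print(f"E = {costs.mean():.1f}, CVaR = {cvar(costs, alpha):.1f}, "
      f"weighted objective = {objective:.1f}")
```

Raising the weight gamma makes the planner penalize the bad tail more heavily, which is what shifts the spatial pattern of investments in the case study.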
|
Munoz, G., Espinoza, D., Goycoolea, M., Moreno, E., Queyranne, M., & Rivera Letelier, O. (2018). A study of the Bienstock-Zuckerberg algorithm: applications in mining and resource constrained project scheduling. Comput. Optim. Appl., 69(2), 501–534.
Abstract: We study a Lagrangian decomposition algorithm recently proposed by Dan Bienstock and Mark Zuckerberg for solving the LP relaxation of a class of open pit mine project scheduling problems. In this study we show that the Bienstock-Zuckerberg (BZ) algorithm can be used to solve LP relaxations corresponding to a much broader class of scheduling problems, including the well-known Resource Constrained Project Scheduling Problem (RCPSP) and multi-modal variants of the RCPSP that consider batch processing of jobs. We present a new, intuitive proof of correctness for the BZ algorithm that works by casting it as a column generation algorithm. This analysis allows us to draw parallels with the well-known Dantzig-Wolfe decomposition (DW) algorithm. We discuss practical computational techniques for speeding up the performance of the BZ and DW algorithms on project scheduling problems. Finally, we present computational experiments independently testing the effectiveness of the BZ and DW algorithms on different sets of publicly available test instances. Our computational experiments confirm that the BZ algorithm significantly outperforms the DW algorithm for the problems considered, and that the proposed speed-up techniques can have a significant impact on solve time. We provide some insights into what might explain this significant difference in performance.
Keywords: Column generation; Dantzig-Wolfe; Optimization; RCPSP
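For reference, the generic Dantzig-Wolfe loop against which BZ is compared: a restricted master over a subset $J'$ of columns,

$$\min_{\lambda \ge 0} \sum_{j \in J'} c_j \lambda_j \quad \text{s.t.} \quad \sum_{j \in J'} a_j \lambda_j = b, \qquad \sum_{j \in J'} \lambda_j = 1,$$

alternates with a pricing subproblem that searches the original feasible set for a column of negative reduced cost $\bar c_j = c_j - \pi^{\top} a_j - \mu$ (with $\pi, \mu$ the master's duals), terminating when $\min_j \bar c_j \ge 0$. Per the abstract, the BZ algorithm can be read as a column generation scheme of this form, which is what allows the side-by-side comparison.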
|
Nasirov, S., Ciarreta, A., Agostini, C. A., & Gutiérrez-Hita, C. (2024). Distributed solar PV applications. Front. Energy Res., 12, 1367587.
|
Prieto, P., Briede, J. C., Beghelli, A., Canessa, E., & Barra, C. (2020). I like it elegant: imprinting personalities into product shapes. Int. J. Des. Creat. Innov., 8(1), 5–20.
Abstract: The ability to design personality-based products is key to their successful launch in the market, since customers prefer products that have a personality similar to their own. However, the creative process designers use to imprint a given personality into a product is still a 'black-box' and lengthy process that requires expertise and successive customer validation. The research challenge here is how to systematize the creative process of defining new geometries for a product with an intended personality. Due to the complexity of this challenge, the focus of this paper is solely on the process of defining the shape of a product with a given personality. To do this, a five-step systematic method to extract and define the key form aspects of a specific personality is presented. The use of the method is exemplified by developing a shape representative of an elegant personality, and its suitability is tested using questionnaires answered by both design-trained and non-design-trained people. Results show that customers better recognize the personality imprinted on the object when the steps of the proposed method are fully complied with. This work will assist designers with the creative process of product form definition.
Keywords: Design methodology; concept generation; inspiration
|