|
Affolter, C., Kedzierska, J., Vielma, T., Weisse, B., & Aiyangar, A. (2020). Estimating lumbar passive stiffness behaviour from subject-specific finite element models and in vivo 6DOF kinematics. J. Biomech., 102, 11 pp.
Abstract: Passive rotational stiffness of the osseo-ligamentous spine is an important input parameter for estimating in-vivo spinal loading with musculoskeletal models. These data are typically acquired from cadaveric testing. Increasingly, they are also estimated from subject-specific imaging-based finite element (FE) models, which are usually built from CT/MR data obtained in the supine position and employ pure rotation kinematics. We explored the sensitivity of FE-based lumbar passive rotational stiffness to two aspects of functional in-vivo kinematics: (a) passive strain changes from the supine to the upright standing position, and (b) in-vivo coupled translation-rotation kinematics. We developed subject-specific FE models of four subjects' L4L5 segments from supine CT images. Sagittally symmetric flexion was simulated in two ways: (i) pure flexion up to 12° under a 500 N follower load directly from the supine pose; (ii) a displacement-based approach was first used to attain the upright pose, as measured with Dynamic Stereo X-ray (DSX) imaging, after which in-vivo flexion was simulated using DSX-derived kinematics. Datasets from weight-bearing motion with three external weights (4.5 kg, 9.1 kg, 13.6 kg) were used. Accounting for supine-to-upright motion generated compressive pre-loads of ≈468 N (±188 N) and a "pre-torque" of ≈2.5 Nm (±2.2 Nm), the latter corresponding to 25% of the reaction moment at 10° flexion (case (i)). Rotational stiffness estimates from DSX-based coupled translation-rotation kinematics were substantially higher than those from pure flexion: reaction moments were almost 90% and 60% higher at 5° and 10° of L4L5 flexion, respectively. Within-subject differences in rotational stiffness across external weights were small, although between-subject variations were large.
|
|
|
Cardenas, C., Guzman, F., Carmona, M., Munoz, C., Nilo, L., Labra, A., et al. (2020). Synthetic Peptides as a Promising Alternative to Control Viral Infections in Atlantic Salmon. Pathogens, 9(8), 600.
Abstract: Viral infections in salmonids represent an ongoing challenge for the aquaculture industry. Two RNA viruses, the infectious pancreatic necrosis virus (IPNV) and the infectious salmon anemia virus (ISAV), have become a latent risk, with no curative therapies available for either. In this context, antiviral peptides emerge as effective and relatively safe therapeutic molecules. Based on in silico analysis of the VP2 protein from IPNV and the RNA-dependent RNA polymerase from ISAV, a set of peptides was designed and chemically synthesized to block selected key events in the corresponding infectivity processes. The peptides were tested in fish cell lines in vitro, and four were selected for their ability to decrease the viral load: peptide GIM182 for IPNV, and peptides GIM535, GIM538 and GIM539 for ISAV. In vivo tests with the IPNV peptide GIM182 were carried out in Salmo salar, showing a significant decrease in viral load and proving the safety of the peptide for fish. The results indicate that the use of peptides as antiviral agents in disease control might be a viable alternative to explore in aquaculture.
|
|
|
Cordova, S., Canizares, C. A., Lorca, A., & Olivares, D. E. (2023). Aggregate Modeling of Thermostatically Controlled Loads for Microgrid Energy Management Systems. IEEE Trans. Smart Grid, 14(6), 4169–4181.
Abstract: Second-to-second renewable power fluctuations can severely hinder the frequency regulation performance of modern isolated microgrids, as these typically have a low inertia and significant renewable energy integration. In this context, the present paper studies the coordinated control of Thermostatically Controlled Loads (TCLs) for managing short-term power imbalances, and their integration in microgrid operations through the use of aggregate TCL models. In particular, two computationally efficient and accurate aggregate TCL models are developed: a virtual battery model representing the aggregate flexibility of TCLs considering solar irradiance heat gains and wall/floor heat transfers, and a frequency transient model representing the aggregate dynamics of a TCL collection considering communication delays and the presence of model uncertainty and time-variability. The proposed aggregate TCL models are then used to design a practical Energy Management System (EMS) integrating TCL flexibility, and study the impact of TCL integration on microgrid operation and frequency control. Computational experiments using detailed frequency transient and thermal dynamic models are presented, demonstrating the accuracy of the proposed aggregate TCL models, as well as the economic and reliability benefits resulting from using these aggregate models to integrate TCLs in microgrid operations.
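The virtual-battery idea summarized above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the unit parameters, the first-order thermal model behind the steady-state formula, and the names (`virtual_battery`, `fleet`) are invented and are not the paper's model.

```python
def virtual_battery(tcls, T_out):
    """Aggregate a fleet of cooling TCLs into a virtual battery:
    baseline power, charge/discharge limits and energy capacity.
    Each TCL: R (thermal resistance, degC/kW), C (thermal capacitance,
    kWh/degC), P (rated electrical power, kW), eta (COP),
    Tmin/Tmax (comfort band, degC)."""
    baseline = 0.0   # electrical power that holds every unit at its set-point
    p_max = 0.0      # total rated power of the fleet
    capacity = 0.0   # electrical energy storable in the thermal mass
    for u in tcls:
        Tset = 0.5 * (u["Tmin"] + u["Tmax"])
        # steady state of the first-order model: heat in = heat leaking out
        baseline += max(0.0, (T_out - Tset) / (u["R"] * u["eta"]))
        p_max += u["P"]
        # energy of the comfort band, converted to the electrical side
        capacity += u["C"] * (u["Tmax"] - u["Tmin"]) / u["eta"]
    return {"baseline_kw": baseline,
            "charge_kw": p_max - baseline,   # extra power the fleet can absorb
            "discharge_kw": baseline,        # power the fleet can shed
            "capacity_kwh": capacity}

fleet = [{"R": 2.0, "C": 2.0, "P": 5.0, "eta": 2.5, "Tmin": 20.0, "Tmax": 24.0}
         for _ in range(100)]
vb = virtual_battery(fleet, T_out=32.0)
```

An EMS can then treat `vb` like any battery in its dispatch model; the paper's actual formulation additionally accounts for solar irradiance gains and wall/floor heat transfers.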
|
|
|
Faes, M. G. R., & Valdebenito, M. A. (2021). Fully decoupled reliability-based optimization of linear structures subject to Gaussian dynamic loading considering discrete design variables. Mech. Syst. Sig. Process., 156, 107616.
Abstract: Reliability-based optimization (RBO) offers the possibility of finding an optimal design for a system according to a prescribed criterion while explicitly taking into account the effects of uncertainty. However, because a reliability problem must be solved within each step of the optimization procedure, the corresponding computational cost is usually high, limiting the applicability of these methods. This cost grows even further when one or several design variables must belong to a discrete set, owing to the need for integer programming optimization algorithms. To alleviate this issue, this contribution proposes a fully decoupled approach for a specific class of problems: minimization of the failure probability of a linear system subjected to an uncertain Gaussian dynamic load, under the additional constraint that the design variables are integer-valued. Specifically, using the operator norm framework developed by the authors in previous work, this paper shows that reducing the RBO problem with discrete design variables to a single deterministic optimization problem followed by a single reliability analysis yields a large gain in numerical efficiency without compromising the accuracy of the resulting optimal design. The application and capabilities of the proposed approach are illustrated by means of three examples.
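The decoupling argument can be made concrete with a toy static analogue. The paper treats dynamic first excursion problems via the operator norm; here a single linear response stands in for it, and the `influence` map, design set and threshold are all invented for illustration.

```python
import math
import random

def influence(d):
    # toy linear map: an integer design d scales two load-influence coefficients
    return [3.0 / d, 1.0 / (d + 1)]

def response_std(a):
    # y = a . x with x ~ N(0, I)  =>  std(y) = ||a||_2
    return math.sqrt(sum(v * v for v in a))

designs = [1, 2, 3, 4]   # admissible integer-valued designs
b = 4.0                  # failure threshold on |y|

# Step 1: ONE deterministic optimization.  P(|y| > b) increases
# monotonically with std(y), so minimizing the failure probability
# reduces to minimizing the response standard deviation.
d_star = min(designs, key=lambda d: response_std(influence(d)))

# Step 2: ONE reliability analysis at the optimum (plain Monte Carlo).
random.seed(0)
a = influence(d_star)
n = 100_000
fails = sum(1 for _ in range(n)
            if abs(a[0] * random.gauss(0, 1) + a[1] * random.gauss(0, 1)) > b)
pf = fails / n
```

The discrete search in step 1 never evaluates a failure probability, which is the source of the efficiency gain the abstract describes.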
|
|
|
Fina, M., Lauff, C., Faes, M. G. R., Valdebenito, M. A., Wagner, W., & Freitag, S. (2023). Bounding imprecise failure probabilities in structural mechanics based on maximum standard deviation. Struct. Saf., 101, 102293.
Abstract: This paper proposes a framework to calculate the bounds on the failure probability of linear structural systems whose performance is affected by both random variables and interval variables. This class of problems is known to be very challenging, as it demands coping explicitly with both aleatoric and epistemic uncertainty. Inspired by the operator norm framework, it is proposed to use the maximum standard deviation of the structural response as a proxy for detecting the crisp values of the interval parameters that yield the bounds of the failure probability. The scope of application of the proposed approach comprises linear structural systems whose properties may be affected by both aleatoric and epistemic uncertainty and that are subjected to (possibly imprecise) Gaussian loading. Numerical examples indicate that applying this proxy leads to substantial numerical advantages compared with a traditional double-loop approach for coping with imprecise failure probabilities. In fact, the proposed framework allows the propagation of aleatoric and epistemic uncertainty to be decoupled.
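A minimal sketch of the max-standard-deviation proxy, under the simplifying assumptions of a static linear response and interval parameters that enter monotonically (so corner enumeration suffices); all names and numbers are illustrative, not the paper's.

```python
import itertools
import math

def pf_linear_gaussian(a, b):
    # exact failure probability P(a . x > b) for x ~ N(0, I):
    # a . x ~ N(0, ||a||^2), hence pf = 1 - Phi(b / ||a||)
    sigma = math.sqrt(sum(v * v for v in a))
    return 0.5 * math.erfc(b / (sigma * math.sqrt(2.0)))

def influence(theta):
    # toy linear structure: interval parameters scale two load coefficients
    return [theta[0], 0.5 * theta[1]]

intervals = [(0.8, 1.2), (1.0, 3.0)]   # epistemic (interval) parameters
b = 3.0

# Proxy: pf is monotone in the response standard deviation, so the interval
# values giving the min/max std also give the bounds on pf.  The epistemic
# search (corner enumeration here) is fully decoupled from the aleatoric
# reliability analysis, which runs exactly twice.
corners = list(itertools.product(*intervals))
theta_lo = min(corners, key=lambda t: sum(v * v for v in influence(t)))
theta_hi = max(corners, key=lambda t: sum(v * v for v in influence(t)))
pf_lo = pf_linear_gaussian(influence(theta_lo), b)
pf_hi = pf_linear_gaussian(influence(theta_hi), b)
```

A double-loop approach would instead run a full reliability analysis for every candidate value of the interval parameters.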
|
|
|
Jerez, D. J., Jensen, H. A., Valdebenito, M. A., Misraji, M. A., Mayorga, F., & Beer, M. (2022). On the use of Directional Importance Sampling for reliability-based design and optimum design sensitivity of linear stochastic structures. Probabilistic Eng. Mech., 70, 103368.
Abstract: This contribution focuses on reliability-based design and optimum design sensitivity of linear dynamical structural systems subject to Gaussian excitation. Directional Importance Sampling (DIS) is implemented for reliability assessment, which allows first-order derivatives of the failure probabilities to be obtained as a byproduct of the sampling process. Thus, gradient-based solution schemes can be adopted by virtue of this feature. In particular, a class of feasible-direction interior point algorithms is implemented to obtain optimum designs, while a direction-finding approach is considered to obtain optimum design sensitivity measures as a post-processing step of the optimization results. To show the usefulness of the approach, an example involving a building structure is studied. Overall, the reliability sensitivity analysis framework enabled by DIS provides a potentially useful tool for addressing a practical class of design optimization problems.
|
|
|
Meyer, L. A., Johnson, M. G., Cullen, D. M., Vivanco, J. F., Blank, R. D., Ploeg, H. L., et al. (2016). Combined exposure to big endothelin-1 and mechanical loading in bovine sternal cores promotes osteogenesis. Bone, 85, 115–122.
Abstract: Increased bone formation resulting from mechanical loading is well documented; however, the interactions of the mechanotransduction pathways are less well understood. Endothelin-1, a ubiquitous autocrine/paracrine signaling molecule, promotes osteogenesis in metastatic disease. In the present study, it was hypothesized that exposure to big endothelin-1 (big ET1) and/or mechanical loading would promote osteogenesis in ex vivo trabecular bone cores. In a 2 × 2 factorial trial of daily mechanical loading (−2000 με, 120 cycles daily, "jump" waveform) and big ET1 (25 ng/mL), 48 bovine sternal trabecular bone cores were maintained in bioreactor chambers for 23 days. The cores' response to the treatment stimuli was assessed by the percent change in apparent elastic modulus (ΔE_app), static and dynamic histomorphometry, and prostaglandin E2 (PGE2) secretion. Two-way ANOVA with a post hoc Fisher's LSD test found no significant treatment effects on ΔE_app (p = 0.25 and 0.51 for load and big ET1, respectively). ΔE_app in the "no load + big ET1" (CE, 13 ± 12.2%, p = 0.56), "load + no big ET1" (LC, 17 ± 3.9%, p = 0.14) and "load + big ET1" (LE, 19 ± 4.2%, p = 0.13) groups did not differ statistically from the control group (CC, 3.3 ± 8.6%). Mineralizing surface (MS/BS), mineral apposition rate (MAR) and bone formation rate (BFR/BS) were significantly greater in LE than in CC (p = 0.037, 0.0040 and 0.019, respectively). While the histological bone formation markers in LC trended greater than in CC (p = 0.055, 0.11 and 0.074, respectively), there was no difference between CE and CC (p = 0.61, 0.50 and 0.72, respectively). Cores in LE and LC had more than 50% greater MS/BS (p = 0.037 and 0.055, respectively) and MAR (p = 0.0040 and 0.11, respectively) than CC, and BFR/BS was more than twice as high in LE (p = 0.019) and LC (p = 0.074) as in CC.
The PGE2 levels were elevated at 8 days post-osteotomy in all groups, and the treatment groups remained elevated relative to CC on days 15, 19 and 23. The data suggest that combined exposure to big ET1 and mechanical loading increases osteogenesis, as measured by biomechanical, histomorphometric and biochemical responses.
|
|
|
Nieto-Jimenez, C., Sanchez, R., Besomi, M., & Naranjo-Orellana, J. (2023). ONE-YEAR FOLLOW-UP WITH HEART RATE VARIABILITY IN TRAIL RUNNERS. Rev. Int. Med. Cienc. Act. Fis. Deporte, 23(89), 446–457.
Abstract: This study aimed to analyze measures of heart rate variability (HRV) to provide reference values for ultra-trail running (UTR) athletes. Sixteen Chilean UTR athletes were monitored with 5-minute baseline wake-up recordings during a one-year follow-up, during which they maintained their usual training, competition and rest activities. Parasympathetic activity was evaluated through the RMSSD (square root of the mean of the squared differences between all successive RR intervals); in addition, the Stress Score (SS) was calculated as an indicator of sympathetic activity. The data provide baseline reference HRV values for UTR athletes through a percentile distribution, which can be particularly useful when HRV is used to control training loads in UTR athletes.
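The two HRV indices named above can be computed as follows. RMSSD is standard; the Stress Score formula (SS = 1000/SD2, with SD2 taken from the Poincaré plot) is an assumption based on the commonly used definition and should be checked against the authors' cited reference.

```python
import math

def rmssd(rr_ms):
    """RMSSD: square root of the mean of squared successive differences
    of RR intervals (ms) -- the parasympathetic marker used in the study."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_score(rr_ms):
    """Stress Score: SS = 1000 / SD2, with SD2 the long-axis dispersion of
    the Poincare plot (SD2^2 = 2*SDNN^2 - SD1^2, SD1^2 = RMSSD^2 / 2).
    Assumed formula -- verify against the original definition."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn_sq = sum((x - mean) ** 2 for x in rr_ms) / (n - 1)
    sd1_sq = rmssd(rr_ms) ** 2 / 2.0
    sd2 = math.sqrt(max(2.0 * sdnn_sq - sd1_sq, 0.0))
    return 1000.0 / sd2

rr = [800, 810, 790, 805, 795, 815, 800, 790]  # synthetic RR excerpt (ms)
```

In practice these indices would be computed over each athlete's 5-minute morning recording and tracked against the percentile reference values the study provides.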
|
|
|
Osorio-Valenzuela, L., Pereira, J., Quezada, F., & Vasquez, O. C. (2019). Minimizing the number of machines with limited workload capacity for scheduling jobs with interval constraints. Appl. Math. Model., 74, 512–527.
Abstract: In this paper, we consider a parallel machine scheduling problem in which machines have a limited workload capacity and jobs have deadlines and release dates. The problem is motivated by the operation of energy storage management systems for microgrids under emergency conditions and generalizes some problems that have already been studied in the literature for their theoretical value. In this work, we propose heuristic and exact algorithms to solve the problem. The heuristics are adaptations of classical bin packing heuristics in which additional conditions on the feasibility of a solution are imposed, whereas the exact method is a branch-and-price approach. The results show that the branch-and-price approach is able to optimally solve random instances with up to 250 jobs within a time limit of one hour, while the heuristic procedures provide near-optimal solutions within reduced running times. Finally, we also provide additional complexity results for a special case of the problem.
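The flavor of "bin packing heuristics with additional feasibility conditions" can be sketched as a first-fit rule that also checks a release/deadline window condition before placing a job. This toy version (the instance, names and the preemptive window test) is illustrative and is not the authors' exact heuristic.

```python
def window_feasible(jobs):
    """Feasibility test for scheduling jobs (r, d, p) on ONE machine with
    preemption: for every window [r0, d0], the processing that must happen
    inside it cannot exceed the window length."""
    releases = sorted({r for r, _, _ in jobs})
    deadlines = sorted({d for _, d, _ in jobs})
    for r0 in releases:
        for d0 in deadlines:
            demand = sum(p for r, d, p in jobs if r >= r0 and d <= d0)
            if demand > max(0, d0 - r0):
                return False
    return True

def first_fit(jobs, capacity):
    """Bin-packing first-fit adapted with interval feasibility: a job goes
    to the first machine where the workload capacity AND the window test
    still hold; otherwise a new machine is opened."""
    machines = []  # each machine holds a list of (release, deadline, proc)
    for job in sorted(jobs, key=lambda j: j[1]):          # order by deadline
        for m in machines:
            if (sum(p for _, _, p in m) + job[2] <= capacity
                    and window_feasible(m + [job])):
                m.append(job)
                break
        else:
            machines.append([job])
    return machines

jobs = [(0, 4, 3), (0, 4, 3), (2, 8, 4), (5, 9, 2)]  # (release, deadline, proc)
sol = first_fit(jobs, capacity=8)
```

Here the two identical jobs with deadline 4 cannot share a machine (6 units of work in a window of length 4), so the heuristic opens a second machine even though the capacity alone would allow sharing.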
|
|
|
Petrou, K., Procopiou, A. T., Gutierrez-Lagos, L., Liu, M. C. Z., Ochoa, L. F., Langstaff, T., et al. (2021). Ensuring Distribution Network Integrity Using Dynamic Operating Limits for Prosumers. IEEE Trans. Smart Grid, 12(5), 3877–3888.
Abstract: The number of residential consumers with solar PV and batteries, known as prosumers, has been increasing in recent years. Incentives now exist for prosumers to operate their batteries in more profitable ways than self-consumption mode. However, this can increase prosumer exports or imports, resulting in power flows that can lead to voltage and thermal limit violations in distribution networks. This work proposes a framework for Distribution Network Operators (DNOs) to ensure the integrity of MV-LV networks by using dynamic operating limits for prosumers. Periodically, individual prosumers send their intended operation (net exports/imports), as determined by their local control, to the DNO, who then assesses network integrity using smart meter data and a power flow engine. If a potential violation is detected, the prosumers' maximum operating limits are determined with a three-phase optimal power flow that incorporates network constraints and fairness aspects. A real Australian MV feeder with realistically modelled LV networks and 4,500+ households is studied, where prosumers' local controls operate based on energy prices. Time-series results demonstrate that the proposed framework can help DNOs ensure network integrity and fairness across prosumers. Furthermore, it unlocks larger profitability for prosumers compared with the 5 kW fixed export limit adopted in Australia.
|
|
|
Quezada, F. A., Navarro, C. A., Romero, M., & Aguilera, C. (2023). Modeling GPU Dynamic Parallelism for self similar density workloads. Future Gener. Comput. Syst., 145, 239–253.
Abstract: Dynamic Parallelism (DP) is a GPU programming abstraction that can make parallel computation more efficient for problems that exhibit heterogeneous workloads. With DP, GPU threads can launch kernels with more threads, recursively, producing a subdivision effect where resources are focused on the regions that exhibit more parallel work. Doing an optimal subdivision process is not trivial, as the combination of different parameters plays a relevant role in the final performance of DP. Also, the current programming abstraction of DP relies on kernel recursion, which carries a performance overhead. This work presents a new subdivision cost model for problems that exhibit self-similar density (SSD) workloads, useful for finding efficient subdivision schemes. A new subdivision implementation free of recursion overhead, named Adaptive Serial Kernels (ASK), is also presented. Using the Mandelbrot set as a case study, the cost model shows that optimal performance is achieved when using {g ≈ 32, r ≈ 2, B ≈ 32} for the initial subdivision, recurrent subdivision and stopping size, respectively. Experimental results agree with the theoretical parameters, confirming the usability of the cost model. In terms of performance, the ASK approach runs up to ~60% faster than DP on the Mandelbrot set, and up to 12x faster than a basic exhaustive implementation, whereas DP is up to 7.5x faster. In terms of energy efficiency, ASK is up to ~2x and ~20x more energy efficient than DP and the exhaustive approach, respectively. These results position the subdivision cost model and the ASK approach as useful tools for analyzing the potential improvement of subdivision-based approaches and for developing more efficient GPU-based libraries or fine-tuning specific codes in research teams.
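The recursion-free subdivision idea can be sketched on the CPU with a plain worklist standing in for serialized kernel launches; the probe rule, parameter values and names below are illustrative, not the ASK implementation.

```python
def escape_iters(cx, cy, max_iter=24):
    # iterations of z -> z^2 + c before |z| > 2 (Mandelbrot membership test)
    x = y = 0.0
    for i in range(max_iter):
        if x * x + y * y > 4.0:
            return i
        x, y = x * x - y * y + cx, 2.0 * x * y + cy
    return max_iter

def subdivide(x0, y0, x1, y1, g=5, stop=0.02, max_iter=24):
    """Recursion-free subdivision in the spirit of Adaptive Serial Kernels:
    a plain worklist replaces nested kernel launches.  A g x g probe grid
    classifies a region as uniform (all probes agree on set membership,
    up to sampling error) or mixed (split into 4 children), until mixed
    regions reach the stopping size."""
    work = [(x0, y0, x1, y1)]
    leaves = []
    while work:                            # the serialized "kernel" loop
        a, b, c, d = work.pop()
        probes = {escape_iters(a + (c - a) * i / (g - 1),
                               b + (d - b) * j / (g - 1), max_iter) == max_iter
                  for i in range(g) for j in range(g)}
        if len(probes) == 1 or (c - a) <= stop:    # uniform, or small enough
            leaves.append((a, b, c, d))
        else:                                       # mixed: focus work here
            mx, my = (a + c) / 2.0, (b + d) / 2.0
            work += [(a, b, mx, my), (mx, b, c, my),
                     (a, my, mx, d), (mx, my, c, d)]
    return leaves

regions = subdivide(-2.0, -1.25, 0.5, 1.25)
```

Uniform interior and exterior regions terminate early, so fine subdivision concentrates along the fractal boundary, which is the self-similar-density effect the cost model captures.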
|
|
|
Salgado, M., Negrete-Pincetic, M., Lorca, A., & Olivares, D. (2021). A low-complexity decision model for home energy management systems. Appl. Energy, 294, 116985.
Abstract: A low-complexity decision model for a Home Energy Management System is proposed to follow demand trajectory sets received from a Demand Side Response aggregator. The model is designed to have low computational complexity and to be solvable by low-performance processors, using available single-board computers as a proof of concept. To decrease the computational complexity, a two-stage model is proposed: the first stage evaluates the hourly appliance scheduling using a relaxed set of restrictions, and the second stage evaluates a reduced set of appliances in an intra-hourly interval with a detailed characterization of the scheduled appliance properties. Simulation results show the effectiveness of the proposed algorithm in following trajectories for different sets of home appliances and operational conditions. For the studied cases, the model presents demand deviations in 3.2% of the cases for the first stage and 12% for the second stage. Results show that the proposed model can schedule available appliances according to the demand aggregator requirements within a limited solving time on diverse hardware.
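The two-stage structure described above can be caricatured with a greedy toy model (hour assignment first, minute-level placement second); the appliance data and both functions are invented for illustration and omit the paper's actual optimization formulation.

```python
def stage1_hourly(appliances, target_kwh):
    """Stage 1 (relaxed, hourly): greedily place each appliance's energy
    in the hour with the largest remaining target demand."""
    load = [0.0] * len(target_kwh)
    plan = {}
    for name, energy, _minutes in appliances:
        h = max(range(len(target_kwh)), key=lambda t: target_kwh[t] - load[t])
        plan[name] = h
        load[h] += energy
    return plan, load

def stage2_intra_hour(appliances, plan):
    """Stage 2 (detailed, intra-hour): pack each appliance's run
    sequentially inside its assigned hour at minute resolution."""
    cursor = {}   # next free minute per hour
    timing = {}
    for name, _energy, minutes in appliances:
        h = plan[name]
        start = cursor.get(h, 0)
        timing[name] = (h, start, start + minutes)   # (hour, start, end)
        cursor[h] = start + minutes
    return timing

appliances = [("washer", 1.0, 30), ("dishwasher", 1.2, 25), ("oven", 2.0, 40)]
target = [0.5, 2.5, 1.5]      # aggregator's requested trajectory (kWh/hour)
plan, hourly_load = stage1_hourly(appliances, target)
timing = stage2_intra_hour(appliances, plan)
```

Splitting the decision this way is what keeps each stage cheap enough for a single-board computer: the hourly stage ignores intra-hour detail, and the intra-hour stage only considers the appliances already assigned to each hour.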
|
|
|
Sanchez, R., & Villena, M. (2020). Comparative evaluation of wearable devices for measuring elevation gain in mountain physical activities. Proc. Inst. Mech. Eng. Part P-J. Sport. Eng. Technol., 234(4), 312–319.
Abstract: The aim of this article is to examine the validity of elevation gain measures in mountain activities, such as hiking and mountain running, using different wearable devices and post-processing procedures. In particular, a total of 202 efforts were recorded and evaluated using three standard devices: a GPS watch, a GPS watch with barometric altimeter, and a smartphone. The benchmark was an orthorectified aerial photogrammetric survey conducted by the Chilean Air Force. All devices presented considerable elevation gain measurement errors: the barometric device consistently overestimated elevation gain, while the GPS devices consistently underestimated it. The incorporation of secondary information in the post-processing can substantially improve elevation gain accuracy independently of the device and altitude measuring technology, reducing the error from ~5% to ~1%. These results could help coaches and athletes correct elevation gain estimations using the proposed technique, which would serve as better estimates of physical workload in mountain physical activities.
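The paper's correction uses secondary information that is not reproduced here; the sketch below only illustrates why raw sample-to-sample summation inflates elevation gain and how a simple (hypothetical) hysteresis threshold suppresses the noise.

```python
def raw_gain(samples):
    # naive per-sample summation: accumulates every upward noise wiggle
    return sum(max(0.0, b - a) for a, b in zip(samples, samples[1:]))

def elevation_gain(samples, threshold=5.0):
    """Cumulative elevation gain with a hysteresis threshold: an ascent is
    banked only once it exceeds `threshold` metres above the last anchor,
    suppressing the noise that inflates raw summation."""
    gain = 0.0
    anchor = samples[0]
    for z in samples[1:]:
        if z >= anchor + threshold:   # confirmed climb: bank it
            gain += z - anchor
            anchor = z
        elif z < anchor:              # descending: move the anchor down
            anchor = z
    return gain

# synthetic profile: a 50 m climb polluted with +/-3 m wiggles
profile = [0, 3, 0, 3, 0, 10, 13, 10, 13, 10, 20, 23, 20, 23, 20,
           30, 33, 30, 33, 30, 40, 43, 40, 43, 40, 50]
```

On this synthetic profile the naive sum reports 80 m against a true 50 m climb, while the thresholded version recovers the correct value; real traces need the threshold tuned to the device's noise level.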
|
|
|
Tapia, T., Lorca, A., Olivares, D., Negrete-Pincetic, M., & Lamadrid, A. J. (2021). A robust decision-support method based on optimization and simulation for wildfire resilience in highly renewable power systems. Eur. J. Oper. Res., 294(2), 723–733.
Abstract: Wildfires can pose a major threat to the secure operation of power networks. Chile, California, and Australia have suffered from recent wildfires that have induced considerable power supply cuts. Further, as power systems move to a significant integration of variable renewable energy sources, successfully managing the impact of wildfires on the power supply can become even more challenging due to the joint uncertainty in wildfire trajectories and the power injections from wind and solar farms. Motivated by this, this paper develops a practical decision-support approach that concatenates a stochastic wildfire simulation method with an attacker-defender model that aims to find a worst-case realization for (i) transmission line and generator contingencies, out of those that can potentially be affected by a given wildfire scenario, and for (ii) wind and solar power trajectories, based on a max-min structure where the inner min problem represents a best adaptive response on generator dispatch actions. Further, this paper proposes an evaluation framework to assess the power supply security of various power system topology configurations, under the assumption of limited transmission switching capabilities, and based on the simulation of several wildfire evolution scenarios. Extensive computational experiments are carried out on two representations of the Chilean power network with up to 278 buses, showing the practical effectiveness of the proposed approach for enhancing wildfire resilience in highly renewable power systems.
|
|
|
Valdebenito, M. A., Misraji, M. A., Jensen, H. A., & Mayorga, C. F. (2021). Sensitivity estimation of first excursion probabilities of linear structures subject to stochastic Gaussian loading. Comput. Struct., 248, 106482.
Abstract: This contribution focuses on evaluating the sensitivity associated with first excursion probabilities of linear structural systems subject to stochastic Gaussian loading. The sensitivity measure considered is the partial derivative of the probability with respect to parameters that affect the structural response, such as dimensions of structural elements. The actual calculation of the sensitivity demands solving high dimensional integrals over hypersurfaces, which can be challenging from a numerical viewpoint. Hence, sensitivity evaluation is cast within the context of a reliability analysis that is conducted with Directional Importance Sampling. In this way, the sought sensitivity is obtained as a byproduct of the calculation of the failure probability, where the post-processing step demands performing a sensitivity analysis of the unit impulse response functions of the structure. Thus, the sensitivity is calculated using sampling by means of an estimator, whose precision can be quantified in terms of its standard deviation. Numerical examples involving both small- and large-scale structural models illustrate the procedure for probability sensitivity estimation.
|
|