
Aiyangar, A. K., Vivanco, J., Au, A. G., Anderson, P. A., Smith, E. L., & Ploeg, H. L. (2014). Dependence of Anisotropy of Human Lumbar Vertebral Trabecular Bone on Quantitative Computed Tomography-Based Apparent Density. J. Biomech. Eng.-Trans. ASME, 136(9), 10 pp.
Abstract: Most studies investigating human lumbar vertebral trabecular bone (HVTB) mechanical property–density relationships have presented results for the superior-inferior (SI), or "on-axis," direction. Equivalent, directly measured data from mechanical testing in the transverse (TR) direction are sparse, and quantitative computed tomography (QCT) density-dependent variations in the anisotropy ratio of HVTB have not been adequately studied. The current study aimed to investigate the dependence of the HVTB mechanical anisotropy ratio on QCT density by quantifying the empirical relationships between QCT-based apparent density of HVTB and its apparent compressive mechanical properties (elastic modulus Eapp, yield strength sigma(y), and yield strain epsilon(y)) in the SI and TR directions for future clinical QCT-based continuum finite element modeling of HVTB. A total of 51 cylindrical cores (33 axial and 18 transverse) were extracted from four L1 human lumbar cadaveric vertebrae. Intact vertebrae were scanned in a clinical-resolution computed tomography (CT) scanner prior to specimen extraction to obtain QCT density, rho(CT). Additionally, physically measured apparent density, computed as ash weight over wet, bulk volume, rho(app), showed significant correlation with rho(CT) [rho(CT) = 1.0568 x rho(app), r = 0.86]. Specimens were compression tested at room temperature using the Zetos bone loading and bioreactor system. Apparent elastic modulus (Eapp) and yield strength (sigma(y)) were linearly related to rho(CT) in the axial direction [E_SI = 1493.8 x rho(CT), r = 0.77, p < 0.01; sigma(Y,SI) = 6.9 x rho(CT) - 0.13, r = 0.76, p < 0.01], while a power-law relation provided the best fit in the transverse direction [E_TR = 3349.1 x rho(CT)^1.94, r = 0.89, p < 0.01; sigma(Y,TR) = 18.81 x rho(CT)^1.83, r = 0.83, p < 0.01]. No significant correlation was found between epsilon(y) and rho(CT) in either direction.
Eapp and sigma(y) in the axial direction were larger than in the transverse direction by factors of 3.2 and 2.3, respectively, on average. Furthermore, the degree of anisotropy decreased with increasing density. Comparatively, epsilon(y) exhibited only a mild, but statistically significant, anisotropy: transverse strains were larger than those in the axial direction by 30%, on average. The ability to map apparent mechanical properties in the transverse direction, in addition to the axial direction, from CT-based densitometric measures allows the incorporation of transverse properties in finite element models based on clinical CT data, partially offsetting the inability of continuum models to accurately represent trabecular architectural variations.
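As an illustration only (not code from the paper), the empirical fits quoted in the abstract can be evaluated to see how the modulus anisotropy ratio shrinks as density rises. The coefficients are the ones reported above; the units (density in g/cm^3, moduli in MPa) and the sampled density values are assumptions.

```python
# Sketch of the reported density-property fits for human vertebral trabecular bone.
# Coefficients come from the abstract; units and density range are assumed.

def axial_modulus(rho_ct):
    """Linear fit, superior-inferior (axial) direction: E_SI = 1493.8 * rho_CT."""
    return 1493.8 * rho_ct

def transverse_modulus(rho_ct):
    """Power-law fit, transverse direction: E_TR = 3349.1 * rho_CT**1.94."""
    return 3349.1 * rho_ct ** 1.94

def anisotropy_ratio(rho_ct):
    """Modulus anisotropy ratio E_SI / E_TR; decreases with increasing density."""
    return axial_modulus(rho_ct) / transverse_modulus(rho_ct)

if __name__ == "__main__":
    for rho in (0.15, 0.25, 0.35):  # plausible trabecular apparent densities (assumed)
        print(f"rho_CT = {rho:.2f}: E_SI/E_TR = {anisotropy_ratio(rho):.2f}")
```

Evaluating the ratio at a few densities reproduces the abstract's qualitative finding that anisotropy decreases with increasing density.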



Barrera, J., Homem-de-Mello, T., Moreno, E., Pagnoncelli, B. K., & Canessa, G. (2016). Chance-constrained problems and rare events: an importance sampling approach. Math. Program., 157(1), 153–189.
Abstract: We study chance-constrained problems in which the constraints involve the probability of a rare event. We discuss the relevance of such problems and show that the existing sampling-based algorithms cannot be applied directly in this case, since they require an impractical number of samples to yield reasonable solutions. We argue that importance sampling (IS) techniques, combined with a Sample Average Approximation (SAA) approach, can be effectively used in such situations, provided that variance can be reduced uniformly with respect to the decision variables. We give sufficient conditions to obtain such uniform variance reduction and prove asymptotic convergence of the combined SAA-IS approach. As often happens with IS techniques, the practical performance of the proposed approach relies on exploiting the structure of the problem under study; in our case, we work with a telecommunications problem with Bernoulli input distributions and show how variance can be reduced uniformly over a suitable approximation of the feasibility set by choosing proper parameters for the IS distributions. Although some of the results are specific to this problem, we are able to draw general insights that can be useful for other classes of problems. We present numerical results to illustrate our findings.
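A minimal, generic sketch of the core difficulty (not the paper's algorithm or its telecommunications instance): crude Monte Carlo needs on the order of 1/p samples just to observe an event of probability p, whereas tilting the Bernoulli success probabilities toward the event and reweighting each hit by the likelihood ratio yields a usable estimate from a modest sample. All numbers (n, p, the tilted q, the threshold) are illustrative assumptions.

```python
# Importance sampling for a rare Bernoulli event: estimate P(sum of n
# Bernoulli(p) draws >= threshold) by sampling from Bernoulli(q) instead,
# with q chosen so the event is no longer rare, then correcting via the
# likelihood ratio. Parameters are made up for illustration.
import random
from math import comb

def exact_tail(n=20, p=0.05, threshold=12):
    """Exact P(Binomial(n, p) >= threshold), for comparison."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, n + 1))

def is_estimate(n=20, p=0.05, q=0.6, threshold=12, samples=20000, seed=0):
    """IS estimate: draw components with tilted probability q, reweight hits."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = [rng.random() < q for _ in range(n)]
        if sum(x) >= threshold:
            # Likelihood ratio corrects for sampling from q instead of p.
            lr = 1.0
            for xi in x:
                lr *= (p / q) if xi else ((1 - p) / (1 - q))
            total += lr
    return total / samples

if __name__ == "__main__":
    # Crude Monte Carlo with 20000 samples would almost surely see zero hits
    # of this ~1e-11 event; the tilted estimator recovers it.
    print(f"IS estimate: {is_estimate():.3e}  exact: {exact_tail():.3e}")
```

The design choice mirrors the paper's setting: the variance of the estimator depends on how the IS parameters (here, q) are chosen, which is why the paper's uniform variance-reduction conditions matter when the constraint must hold across many decision variables.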



Bernales, A., Reus, L., & Valdenegro, V. (2022). Speculative bubbles under supply constraints, background risk and investment fraud in the art market. J. Corp. Financ., Early Access.
Abstract: We examine the unexplored effects on art markets of artist death (asset supply constraints), collectors' wealth (background risk) and forgery risk (risk of investment fraud), under short-sale constraints and risk aversion. Speculative bubbles emerge and take the form of an option strangle (a put option and a call option), in which the strike prices are affected by art supply constraints and by the association of the artworks' emotional value with both collectors' wealth and forgery, while the options' underlying asset is the stochastic heterogeneous beliefs of agents. We show that speculative bubbles increase with four elements: art supply constraints; a more negative correlation between collectors' wealth and the artworks' emotional value; a more positive relationship between forgery and the artworks' emotional value; and more heterogeneous beliefs. These four sources of speculation increase the expected turnover rate; however, they also augment the variance of speculative bubbles, which generates price discounts (i.e., risk premiums) for holding artworks. Consequently, the net effect of speculation is not necessarily an increase in art prices. This study contributes not only to the art market literature, but also to studies of speculative bubbles in other financial markets under heterogeneous beliefs, short-sale constraints and risk-averse investors, since we additionally consider the simultaneous effect of asset supply constraints, investors' background risk and the risk of investment fraud.



Bertossi, L. (2021). Specifying and computing causes for query answers in databases via database repairs and repair-programs. Knowl. Inf. Syst., 63, 199–231.
Abstract: There is a recently established correspondence between database tuples as causes for query answers in databases and tuple-based repairs of inconsistent databases with respect to denial constraints. In this work, answer-set programs that specify database repairs are used as a basis for solving computational and reasoning problems around causality in databases, including causal responsibility. Furthermore, causes are introduced also at the attribute level by appealing to an attribute-based repair semantics that uses null values. Corresponding repair-programs are introduced and used as a basis for computation and reasoning about attribute-level causes. The answer-set programs are extended in order to capture causality under integrity constraints.



Bertossi, L. (2022). Declarative Approaches to Counterfactual Explanations for Classification. Theory Pract. Log. Program., Early Access.
Abstract: We propose answer-set programs that specify and compute counterfactual interventions on entities that are input to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models, and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is on the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as maximum-responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP solver, and run with it, are shown.



Bolte, J., Hochart, A., & Pauwels, E. (2018). Qualification Conditions in Semialgebraic Programming. SIAM J. Optim., 28(2), 1867–1891.
Abstract: For an arbitrary finite family of semialgebraic/definable functions, we consider the corresponding inequality constraint set and study qualification conditions for perturbations of this set. In particular, we prove that all positive diagonal perturbations, save perhaps a finite number of them, ensure that any point within the feasible set satisfies the Mangasarian-Fromovitz constraint qualification. Using the Milnor-Thom theorem, we provide a bound for the number of singular perturbations when the constraints are polynomial functions. Examples show that the order of magnitude of our exponential bound is relevant. Our perturbation approach provides a simple protocol to build sequences of "regular" problems approximating an arbitrary semialgebraic/definable problem. Applications to sequential quadratic programming methods and sum-of-squares relaxation are provided.



Bustamante, M., & Contreras, M. (2016). Multi-asset Black-Scholes model as a variable second class constrained dynamical system. Physica A, 457, 540–572.
Abstract: In this paper, we study the multi-asset Black-Scholes model from a structural point of view. For this, we interpret the multi-asset Black-Scholes equation as a multidimensional Schrodinger one-particle equation. The analysis of the classical Hamiltonian and Lagrangian mechanics associated with this quantum model implies that, in this system, the canonical momenta cannot always be written in terms of the velocities. This feature is a typical characteristic of the constrained systems that appear in high-energy physics. To study this model in the proper form, one must apply Dirac's method for constrained systems. The results of Dirac's analysis indicate that in the correlation parameter space of the multi-asset model there exists a surface (called the Kummer surface Sigma(K), where the determinant of the correlation matrix is null) on which the number of constraints can vary. We study in detail the cases with N = 2 and N = 3 assets. For these cases, we calculate the propagator of the multi-asset Black-Scholes equation and show that inside the Kummer surface Sigma(K) the propagator is well defined, but outside Sigma(K) the propagator diverges and the option price is not well defined. On Sigma(K) the propagator is obtained as a constrained path integral, and its form depends on the region of the Kummer surface in which the correlation parameters lie. Thus, the multi-asset Black-Scholes model is an example of a variable constrained dynamical system, a new and beautiful property that had not been previously observed. (C) 2016 Elsevier B.V. All rights reserved.



Canessa, G., Gallego, J. A., Ntaimo, L., & Pagnoncelli, B. K. (2019). An algorithm for binary linear chance-constrained problems using IIS. Comput. Optim. Appl., 72(3), 589–608.
Abstract: We propose an algorithm based on infeasible irreducible subsystems (IIS) to solve binary linear chance-constrained problems with a random technology matrix. By leveraging the problem structure, we are able to generate good-quality upper bounds on the optimal value early in the algorithm, and the discrete domain is used to guide the search for solutions efficiently. We apply our methodology to individual and joint binary linear chance-constrained problems, demonstrating the ability of our approach to solve them. Extensive numerical experiments show that, in some cases, the number of nodes explored by our algorithm is drastically reduced when compared to a commercial solver.



Caniupan, M., Bravo, L., & Hurtado, C. A. (2012). Repairing inconsistent dimensions in data warehouses. Data Knowl. Eng., 79–80, 17–39.
Abstract: A dimension in a data warehouse (DW) is a set of elements connected by a hierarchical relationship. The elements are used to view summaries of data at different levels of abstraction. In order to support an efficient processing of such summaries, a dimension is usually required to satisfy different classes of integrity constraints. In scenarios where the constraints properly capture the semantics of the DW data, but they are not satisfied by the dimension, the problem of repairing (correcting) the dimension arises. In this paper, we study the problem of repairing a dimension in the context of two main classes of integrity constraints: strictness and covering constraints. We introduce the notion of minimal repair of a dimension: a new dimension that is consistent with respect to the set of integrity constraints, which is obtained by applying a minimal number of updates to the original dimension. We study the complexity of obtaining minimal repairs, and show how they can be characterized using Datalog programs with weak constraints under the stable model semantics. (c) 2012 Elsevier B.V. All rights reserved.



Carbonnel, C., Romero, M., & Zivny, S. (2020). Point-Width and Max-CSPs. ACM Trans. Algorithms, 16(4), 28 pp.
Abstract: The complexity of (unbounded-arity) Max-CSPs under structural restrictions is poorly understood. The two most general hypergraph properties known to ensure tractability of Max-CSPs, beta-acyclicity and bounded (incidence) MIM-width, are incomparable and lead to very different algorithms. We introduce the framework of point decompositions for hypergraphs and use it to derive a new sufficient condition for the tractability of (structurally restricted) Max-CSPs, which generalises both bounded MIM-width and beta-acyclicity. On the way, we give a new characterisation of bounded MIM-width and discuss other hypergraph properties which are relevant to the complexity of Max-CSPs, such as beta-hypertree width.



Cho, A. D., Carrasco, R. A., & Ruz, G. A. (2022). Improving Prescriptive Maintenance by Incorporating Post-Prognostic Information Through Chance Constraints. IEEE Access, 10, 55924–55932.
Abstract: Maintenance is one of the critical areas in operations, in which a careful balance between preventive costs and the effect of failures is required. Thanks to increasing data availability, decision-makers can now use models to better estimate, evaluate, and achieve this balance. This work presents a maintenance scheduling model which considers prognostic information provided by a predictive system. In particular, we developed a prescriptive maintenance system based on run-to-failure signal segmentation and a Long Short-Term Memory (LSTM) neural network. The LSTM network returns the prediction of the remaining useful life when a fault is present in a component. We incorporate such predictions and their inherent errors into a decision support system based on a stochastic optimization model, introducing them via chance constraints. These constraints control the number of failed components and consider the physical distance between them to reduce sparsity and minimize the total maintenance cost. Through experimental results, we show that this approach can compute solutions for relatively large instances in reasonable computational time. Furthermore, the decision-maker can identify the correct operating point depending on the balance between costs and failure probability.



Contreras, G. M. (2014). Stochastic volatility models at rho = +/-1 as second class constrained Hamiltonian systems. Physica A, 405, 289–302.
Abstract: The stochastic volatility models used in the financial world are characterized, in the continuous-time case, by a set of two coupled stochastic differential equations for the underlying asset price S and volatility sigma. In addition, the correlation of the two Brownian motions that drive the stochastic dynamics is measured by the correlation parameter rho (-1 <= rho <= 1). This stochastic system is equivalent to the Fokker-Planck equation for the transition probability density of the random variables S and sigma. Solutions for the transition probability density of the Heston stochastic volatility model (Heston, 1993) were explored in Dragulescu and Yakovenko (2002), where fundamental quantities such as the transition density itself depend on rho in such a manner that they diverge for the extreme limit rho = +/-1. The same divergent behavior appears in Hagan et al. (2002), where the probability density of the SABR model was analyzed. In an option pricing context, the propagator of the bidimensional Black-Scholes equation was obtained in Lemmens et al. (2008) in terms of path integrals, and in this case the propagator again diverges for the extreme values rho = +/-1. This paper shows that these similar divergent behaviors are due to a universal property of the stochastic volatility models in the continuum: all of them are second class constrained systems for the most extreme correlated limit rho = +/-1. In this way, the stochastic dynamics of the rho = +/-1 cases are different from the -1 < rho < 1 case and cannot be obtained as a continuous limit from the rho not equal +/-1 regime. This conclusion is achieved by considering the Fokker-Planck equation, or the bidimensional Black-Scholes equation, as a Euclidean quantum Schrodinger equation. Then, the analysis of the underlying classical mechanics of the quantum model implies that stochastic volatility models at rho = +/-1 correspond to a constrained system.
To study the dynamics in an appropriate form, Dirac's method for constrained systems (Dirac, 1958, 1967) must be employed, and Dirac's analysis reveals that the constraints are second class. In order to obtain the transition probability density or the option price correctly, one must evaluate the propagator as a constrained Hamiltonian path integral (Henneaux and Teitelboim, 1992), in a similar way to the high-energy gauge theory models. In fact, for all stochastic volatility models, after integrating over momentum variables, one obtains an effective Euclidean Lagrangian path integral over the volatility alone. The role of the second class constraints is to determine the underlying asset price S completely in terms of the volatility, so S plays no role in the path integral. In order to examine the effect of the constraints on the dynamics for both extreme limits, the probability density function is evaluated by using semiclassical arguments, in a manner analogous to that developed in Hagan et al. (2002) for the SABR model. (C) 2014 Elsevier B.V. All rights reserved.



Contreras, M., & Hojman, S. A. (2014). Option pricing, stochastic volatility, singular dynamics and constrained path integrals. Physica A, 393, 391–403.
Abstract: Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models for dealing with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter rho which relates the two Brownian motions that drive the stochastic dynamics associated with the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21], while in Baaquie (2007, 1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for the extreme cases rho = +/-1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for the extreme values rho = +/-1 and show that in these two cases the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac's method for constrained systems (Dirac, 1958, 1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets a Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to the path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of the volatility and plays no active role in the path integral. (C) 2013 Elsevier B.V. All rights reserved.



Contreras, M., & Pena, J. P. (2019). The quantum dark side of the optimal control theory. Physica A, 515, 450–473.
Abstract: In a recent article, a generic optimal control problem was studied from a physicist's point of view (Contreras et al., 2017). Through this optic, the Pontryagin equations are equivalent to the Hamilton equations of a classical constrained system. By quantizing this constrained system, using the right ordering of the operators, the corresponding quantum dynamics given by the Schrodinger equation is equivalent to that given by the Hamilton-Jacobi-Bellman equation of Bellman's theory. The conclusions drawn there were based on certain analogies between the equations of motion of both theories. In this paper, a closer and more detailed examination of the quantization problem is carried out by considering three possible quantization procedures: right quantization, left quantization, and Feynman's path integral approach. The Bellman theory turns out to be the classical limit h -> 0 of these three different quantum theories. Also, the exact relation of the phase S(x, t) of the wave function Psi(x, t) = e^((i/h)S(x, t)) of the quantum theory with Bellman's cost function J(+)(x, t) is obtained. In fact, S(x, t) satisfies a 'conjugate' form of the Hamilton-Jacobi-Bellman equation, which implies that the cost functional J(+)(x, t) must necessarily satisfy the usual Hamilton-Jacobi-Bellman equation. Thus, the Bellman theory effectively corresponds to a quantum view of the optimal control problem. (C) 2018 Elsevier B.V. All rights reserved.



Contreras, M., Pellicer, R., & Villena, M. (2017). Dynamic optimization and its relation to classical and quantum constrained systems. Physica A, 479, 12–25.
Abstract: We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop lambda-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Psi(x, t) = e^(iS(x, t)) in the quantum Schrodinger equation, a nonlinear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem. (C) 2017 Elsevier B.V. All rights reserved.



Cortes, M. P., Mendoza, S. N., Travisany, D., Gaete, A., Siegel, A., Cambiazo, V., et al. (2017). Analysis of Piscirickettsia salmonis Metabolism Using GenomeScale Reconstruction, Modeling, and Testing. Front. Microbiol., 8, 15 pp.
Abstract: Piscirickettsia salmonis is an intracellular bacterial fish pathogen that causes piscirickettsiosis, a disease with a highly adverse impact on the Chilean salmon farming industry. The development of effective treatment and control methods for piscirickettsiosis is still a challenge. To meet it, the number of studies on P. salmonis has grown in the last couple of years, but many aspects of the pathogen's biology are still poorly understood. Studies on its metabolism are scarce, and only recently was a metabolic model for reference strain LF89 developed. We present a new genome-scale model for P. salmonis LF89 with more than twice as many genes as the previous model, incorporating specific elements of the fish pathogen's metabolism. Comparative analysis with models of different bacterial pathogens revealed a lower flexibility in the P. salmonis metabolic network. Through constraint-based analysis, we determined essential metabolites required for its growth and showed that it can benefit from different carbon sources tested experimentally in new defined media. We also built an additional model for strain A1-15972, and together with an analysis of the P. salmonis pangenome, we identified metabolic features that differentiate the two main species clades. Both models constitute a knowledge base for P. salmonis metabolism and can be used to guide the efficient culture of the pathogen and the identification of specific drug targets.



Donoso, R. A., Ruiz, D., Garate-Castro, C., Villegas, P., Gonzalez-Pastor, J. E., de Lorenzo, V., et al. (2021). Identification of a self-sufficient cytochrome P450 monooxygenase from Cupriavidus pinatubonensis JMP134 involved in 2-hydroxyphenylacetic acid catabolism, via homogentisate pathway. Microb. Biotechnol., 14(5), 1944–1960.
Abstract: The self-sufficient cytochrome P450 RhF and its homologues belonging to the CYP116B subfamily have attracted considerable attention due to the potential for biotechnological applications based on their ability to catalyse an array of challenging oxidative reactions without requiring additional protein partners. In this work, we showed for the first time that a CYP116B self-sufficient cytochrome P450 encoded by the ohpA gene harboured by Cupriavidus pinatubonensis JMP134, a betaproteobacterium model for biodegradative pathways, catalyses the conversion of 2-hydroxyphenylacetic acid (2-HPA) into homogentisate. Mutational analysis and HPLC metabolite detection in strain JMP134 showed that 2-HPA is degraded through the well-known homogentisate pathway, requiring a 2-HPA 5-hydroxylase activity provided by OhpA, which was additionally supported by heterologous expression and enzyme assays. The ohpA gene belongs to an operon that also includes ohpT, coding for a substrate-binding subunit of a putative transporter, whose expression is driven by an inducible promoter responsive to 2-HPA in the presence of a predicted OhpR transcriptional regulator. OhpA homologues can be found in several genera belonging to the Actinobacteria and the alpha-, beta- and gammaproteobacteria lineages, indicating a widespread distribution of 2-HPA catabolism via the homogentisate route. These results provide the first evidence for the natural function of members of the CYP116B self-sufficient oxygenases and represent a significant input to support novel kinetic and structural studies to develop cytochrome P450-based biocatalytic processes.



Espinoza, D., Goycoolea, M., & Moreno, E. (2015). The precedence constrained knapsack problem: Separating maximally violated inequalities. Discrete Appl. Math., 194, 65–80.
Abstract: We consider the problem of separating maximally violated inequalities for the precedence constrained knapsack problem. Though we consider maximally violated constraints in a very general way, special emphasis is placed on induced cover inequalities and induced clique inequalities. Our contributions include a new partial characterization of maximally violated inequalities, a new safe shrinking technique, and new insights on strengthening and lifting. This work follows on the work of Boyd (1993), Park and Park (1997), van de Leensel et al. (1999) and Boland et al. (2011). Computational experiments show that our new techniques and insights can be used to significantly improve the performance of cutting plane algorithms for this problem. (C) 2015 Elsevier B.V. All rights reserved.



Fernandez, M., Munoz, F. D., & Moreno, R. (2020). Analysis of imperfect competition in natural gas supply contracts for electric power generation: A closed-loop approach. Energy Econ., 87, 15 pp.
Abstract: The supply of natural gas is generally based on contracts that are signed prior to the use of this fuel for power generation. Scarcity of natural gas in systems where a share of electricity demand is supplied with gas turbines does not necessarily imply demand rationing, because most gas turbines can still operate with diesel when natural gas is not available. However, scarcity conditions can lead to electricity price spikes, with welfare effects for consumers and generation firms. We develop a closed-loop equilibrium model to evaluate whether generation firms have incentives to contract or import the socially optimal volumes of natural gas to generate electricity. We consider a perfectly competitive electricity market, where all firms act as price-takers in the short term, but assume that only a small number of firms own gas turbines and procure natural gas from, for instance, foreign suppliers in liquefied form. We illustrate an application of our model using a network reduction of the electric power system in Chile, considering two strategic firms that make annual decisions about natural gas imports in discrete quantities. We also assume that the strategic firms compete in the electricity market with a set of competitive firms that do not make strategic decisions about natural gas imports (i.e., a competitive fringe). Our results indicate that strategic firms could have incentives to sign natural gas contracts for volumes that are much lower than the socially optimal ones, which leads to supernormal profits for these firms in the electricity market. Yet, this effect is rather sensitive to the price of natural gas. A high price of natural gas eliminates the incentives of generation firms to exercise market power through natural gas contracts. (C) 2020 Elsevier B.V. All rights reserved.



Henriquez, P. A., & Ruz, G. A. (2019). Noise reduction for near-infrared spectroscopy data using extreme learning machines. Eng. Appl. Artif. Intell., 79, 13–22.
Abstract: The near-infrared (NIR) spectra technique is an effective approach to predict chemical properties, and it is typically applied in the petrochemical, agricultural, medical, and environmental sectors. NIR spectra are usually of very high dimension and contain huge amounts of information. Most of the information is irrelevant to the target problem and some is simply noise. Thus, it is not an easy task to discover the relationship between NIR spectra and the predictive variable. However, this kind of regression analysis is one of the main topics of machine learning, so machine learning techniques play a key role in NIR-based analytical approaches. Preprocessing of NIR spectral data has become an integral part of chemometrics modeling. The objective of the preprocessing is to remove physical phenomena (noise) from the spectra in order to improve the regression or classification model. In this work, we propose to reduce the noise using extreme learning machines, which have shown good predictive performance in regression applications as well as in large-dataset classification tasks. For this, we use a novel algorithm called CPL-ELM, which has a parallel architecture based on one nonlinear layer in parallel with another nonlinear layer. Using the soft-margin loss function concept, we incorporate two Lagrange multipliers with the objective of including the noise of spectral data. Six real-life datasets were analyzed to illustrate the performance of the developed models. The results for regression and classification problems confirm the advantages of using the proposed method in terms of root mean square error and accuracy.
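For readers unfamiliar with the model family, a minimal sketch of a basic extreme learning machine may help: a random, untrained hidden layer followed by a least-squares fit of the output weights. This is the generic ELM idea only, not the paper's CPL-ELM architecture or its soft-margin formulation; the hidden-layer size and the toy one-dimensional target are made-up assumptions.

```python
# Basic extreme learning machine (ELM) regression: random input weights are
# never trained; only the output weights are fit, by ordinary least squares.
import numpy as np

def elm_fit(X, y, hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))      # random input weights (fixed)
    b = rng.normal(size=hidden)                    # random biases (fixed)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights via least squares
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    # Toy stand-in for a spectrum-derived property: fit a smooth 1-D function.
    X = np.linspace(-1, 1, 200).reshape(-1, 1)
    y = np.sin(3 * X[:, 0])
    model = elm_fit(X, y)
    rmse = np.sqrt(np.mean((elm_predict(model, X) - y) ** 2))
    print(f"training RMSE: {rmse:.4f}")
```

Because only the output layer is solved for, training reduces to one linear least-squares problem, which is what makes ELMs fast on high-dimensional inputs such as NIR spectra.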

