Aiyangar, A. K., Vivanco, J., Au, A. G., Anderson, P. A., Smith, E. L., & Ploeg, H. L. (2014). Dependence of Anisotropy of Human Lumbar Vertebral Trabecular Bone on Quantitative Computed Tomography-Based Apparent Density. J. Biomech. Eng.-Trans. ASME, 136(9), 10 pp.
Abstract: Most studies investigating human lumbar vertebral trabecular bone (HVTB) mechanical property-density relationships have presented results for the superior-inferior (SI), or “on-axis,” direction. Equivalent, directly measured data from mechanical testing in the transverse (TR) direction are sparse, and quantitative computed tomography (QCT) density-dependent variations in the anisotropy ratio of HVTB have not been adequately studied. The current study aimed to investigate the dependence of the HVTB mechanical anisotropy ratio on QCT density by quantifying the empirical relationships between QCT-based apparent density of HVTB and its apparent compressive mechanical properties, namely elastic modulus (E-app), yield strength (sigma(y)), and yield strain (epsilon(y)), in the SI and TR directions for future clinical QCT-based continuum finite element modeling of HVTB. A total of 51 cylindrical cores (33 axial and 18 transverse) were extracted from four L1 human lumbar cadaveric vertebrae. Intact vertebrae were scanned in a clinical-resolution computed tomography (CT) scanner prior to specimen extraction to obtain QCT density, rho(CT). Additionally, physically measured apparent density, computed as ash weight over wet, bulk volume, rho(app), showed significant correlation with rho(CT) [rho(CT) = 1.0568 x rho(app), r = 0.86]. Specimens were compression tested at room temperature using the Zetos bone loading and bioreactor system. Apparent elastic modulus (E-app) and yield strength (sigma(y)) were linearly related to rho(CT) in the axial direction [E-SI = 1493.8 x rho(CT), r = 0.77, p < 0.01; sigma(Y,SI) = 6.9 x rho(CT) - 0.13, r = 0.76, p < 0.01], while a power-law relation provided the best fit in the transverse direction [E-TR = 3349.1 x rho(CT)^1.94, r = 0.89, p < 0.01; sigma(Y,TR) = 18.81 x rho(CT)^1.83, r = 0.83, p < 0.01]. No significant correlation was found between epsilon(y) and rho(CT) in either direction.
E-app and sigma(y) in the axial direction were larger compared to the transverse direction by a factor of 3.2 and 2.3, respectively, on average. Furthermore, the degree of anisotropy decreased with increasing density. Comparatively, epsilon(y) exhibited only a mild, but statistically significant anisotropy: transverse strains were larger than those in the axial direction by 30%, on average. Ability to map apparent mechanical properties in the transverse direction, in addition to the axial direction, from CT-based densitometric measures allows incorporation of transverse properties in finite element models based on clinical CT data, partially offsetting the inability of continuum models to accurately represent trabecular architectural variations.
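The reported fits lend themselves to direct density-to-property mapping. A minimal sketch (units are assumed: rho(CT) in g/cm^3 and results in MPa; the sigma(Y,SI) intercept is read as -0.13):

```python
# Empirical HVTB fits from the abstract. Units are assumed (rho_ct in g/cm^3,
# results in MPa); the sigma(Y,SI) intercept is read here as -0.13.
def E_SI(rho_ct):        # axial elastic modulus, linear fit (r = 0.77)
    return 1493.8 * rho_ct

def sigma_SI(rho_ct):    # axial yield strength, linear fit (r = 0.76)
    return 6.9 * rho_ct - 0.13

def E_TR(rho_ct):        # transverse elastic modulus, power law (r = 0.89)
    return 3349.1 * rho_ct ** 1.94

def sigma_TR(rho_ct):    # transverse yield strength, power law (r = 0.83)
    return 18.81 * rho_ct ** 1.83

# The modulus anisotropy ratio scales like rho_ct**(-0.94), so it falls as
# density rises, matching the abstract's observation.
for rho in (0.1, 0.2, 0.3):
    print(rho, round(E_SI(rho) / E_TR(rho), 2))
```

Under these fits, the linear/power-law crossover makes the axial-to-transverse modulus ratio shrink monotonically with density.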
Alvarez-Miranda, E., Pereira, J., & Vila, M. (2023). Analysis of the simple assembly line balancing problem complexity. Comput. Oper. Res., 159, 106323.
Abstract: The simple assembly line balancing problem (SALBP) involves the determination of the assignment of elementary assembly operations to the workstations of the assembly line for the manufacture of a final product, with the objective of maximising assembly efficiency. In addition to its practicality, the SALBP can be considered as an extension of the bin packing problem (BPP) to account for the precedence relations between items. These constraints introduce an ordering component to the problem, which increases the complexity of SALBP resolution. However, previous studies indicated that precedence constraints do not play an important role in the capacity of state-of-the-art procedures to solve benchmark instances to optimality. In this study, we analysed the influences of different features of an SALBP instance on the performance of state-of-the-art solution methods for the abovementioned problem. First, we provide an alternative proof of complexity for the SALBP that uses precedence constraints to demonstrate its non-deterministic polynomial time (NP)-complete status, followed by a new set of benchmark instances directed towards an empirical analysis of the different features of SALBP instances. The experimental results revealed that the packing features of the SALBP are a major source of the perceived difficulty for any instance; however, precedence constraints play a role in the performance of these solution procedures. Specifically, the number of precedence constraints plays an important role in the results obtained from state-of-the-art methods. In addition to the analysis, certain issues that were identified in the publicly available implementations of the state-of-the-art method for resolving this problem were addressed in this study.
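As a toy illustration of the packing-plus-precedence structure the authors analyse, here is a minimal station-oriented greedy for SALBP-1 (task times, precedence relations, and the cycle time are invented; real SALBP solvers are far more sophisticated):

```python
# Station-oriented greedy for a toy SALBP-1 instance: open a station, pack in
# precedence-feasible tasks until the cycle time is exhausted, then open the next.
times = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}   # task -> processing time
prec = {3: {1, 2}, 4: {3}, 5: {3}}       # task -> set of predecessors
cycle = 8                                 # workstation cycle time

done, stations = set(), []
while len(done) < len(times):
    load, station = 0, []
    while True:
        # tasks whose predecessors are all assigned and that still fit
        ready = [t for t in times if t not in done
                 and prec.get(t, set()) <= done
                 and load + times[t] <= cycle]
        if not ready:
            break
        t = min(ready)                    # simplest tie-break: lowest task id
        station.append(t)
        done.add(t)
        load += times[t]
    stations.append(station)

print(stations)
```

On this instance the greedy needs three stations; the precedence sets constrain which tasks are "ready," which is exactly the ordering component the abstract contrasts with pure bin packing.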
Barrera, J., Homem-De-Mello, T., Moreno, E., Pagnoncelli, B. K., & Canessa, G. (2016). Chance-constrained problems and rare events: an importance sampling approach. Math. Program., 157(1), 153–189.
Abstract: We study chance-constrained problems in which the constraints involve the probability of a rare event. We discuss the relevance of such problems and show that the existing sampling-based algorithms cannot be applied directly in this case, since they require an impractical number of samples to yield reasonable solutions. We argue that importance sampling (IS) techniques, combined with a Sample Average Approximation (SAA) approach, can be effectively used in such situations, provided that variance can be reduced uniformly with respect to the decision variables. We give sufficient conditions to obtain such uniform variance reduction, and prove asymptotic convergence of the combined SAA-IS approach. As it often happens with IS techniques, the practical performance of the proposed approach relies on exploiting the structure of the problem under study; in our case, we work with a telecommunications problem with Bernoulli input distributions, and show how variance can be reduced uniformly over a suitable approximation of the feasibility set by choosing proper parameters for the IS distributions. Although some of the results are specific to this problem, we are able to draw general insights that can be useful for other classes of problems. We present numerical results to illustrate our findings.
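A minimal sketch of the importance-sampling idea, assuming a toy network where each of n independent links fails with small probability p and the rare event is at least k simultaneous failures (the parameters and the tilted proposal q are illustrative, not from the paper):

```python
import random
random.seed(0)

# Toy rare-event setup (illustrative, not the paper's instance): n independent
# Bernoulli(p) link failures; the rare event is k or more failures at once.
n, p, k = 20, 0.01, 4
q = k / n        # IS proposal: tilt the failure probability toward the event

def is_estimate(samples=20000):
    total = 0.0
    for _ in range(samples):
        fails = sum(random.random() < q for _ in range(n))
        if fails >= k:
            # likelihood ratio of this sample under p versus the proposal q
            lr = (p / q) ** fails * ((1 - p) / (1 - q)) ** (n - fails)
            total += lr
    return total / samples   # samples outside the event contribute zero

est = is_estimate()
print(f"IS estimate of P(>= {k} failures): {est:.2e}")
```

Naive Monte Carlo would need on the order of 1/P(event) samples to observe the event at all; the tilted proposal makes the event frequent and reweights each hit by its likelihood ratio, which is the variance-reduction mechanism the paper makes uniform over the decision variables.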
Bernales, A., Reus, L., & Valdenegro, V. (2022). Speculative bubbles under supply constraints, background risk and investment fraud in the art market. J. Corp. Financ., 77, 101746.
Abstract: We examine the unexplored effects on art markets of artist death (asset supply constraints), collectors' wealth (background risk) and forgery risk (risk of investment fraud), under short-sale constraints and risk aversion. Speculative bubbles emerge and have the form of an option strangle (a put option and a call option), in which strike prices are affected by art supply constraints and the association of the artworks' emotional value with both collectors' wealth and forgery, while the options' underlying asset is the stochastic heterogeneous beliefs of agents. We show that speculative bubbles increase with four elements: art supply constraints; a more negative correlation between collectors' wealth and the artworks' emotional value; a more positive relationship between forgery and the artworks' emotional value; and more heterogeneous beliefs. These four sources of speculation increase the expected turnover rate; however, they also augment the variance of speculative bubbles, which generates price discounts (i.e. risk premiums) for holding artworks. Consequently, the net impact of speculation is not necessarily increased art prices. This study not only contributes to the art market literature, but also to studies about speculative bubbles in other financial markets under heterogeneous beliefs, short-sale constraints and risk-averse investors, since we additionally consider the simultaneous effect of asset supply constraints, investors' background risk and the risk of investment fraud.
Bertossi, L. (2021). Specifying and computing causes for query answers in databases via database repairs and repair-programs. Knowl. Inf. Syst., 63, 199–231.
Abstract: There is a recently established correspondence between database tuples as causes for query answers in databases and tuple-based repairs of inconsistent databases with respect to denial constraints. In this work, answer-set programs that specify database repairs are used as a basis for solving computational and reasoning problems around causality in databases, including causal responsibility. Furthermore, causes are introduced also at the attribute level by appealing to an attribute-based repair semantics that uses null values. Corresponding repair-programs are introduced, and used as a basis for computation and reasoning about attribute-level causes. The answer-set programs are extended in order to capture causality under integrity constraints.
Keywords: Causality; Databases; Repairs; Constraints; Answer-set programming
Bertossi, L. (2022). Declarative Approaches to Counterfactual Explanations for Classification. Theory Pract. Log. Program., Early Access.
Abstract: We propose answer-set programs that specify and compute counterfactual interventions on entities that are inputs to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models, and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is on the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as maximum responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP-solver, and run with it, are shown.
Bolte, J., Hochart, A., & Pauwels, E. (2018). Qualification Conditions In Semialgebraic Programming. SIAM J. Optim., 28(2), 1867–1891.
Abstract: For an arbitrary finite family of semialgebraic/definable functions, we consider the corresponding inequality constraint set and we study qualification conditions for perturbations of this set. In particular we prove that all positive diagonal perturbations, save perhaps a finite number of them, ensure that any point within the feasible set satisfies the Mangasarian-Fromovitz constraint qualification. Using the Milnor-Thom theorem, we provide a bound for the number of singular perturbations when the constraints are polynomial functions. Examples show that the order of magnitude of our exponential bound is relevant. Our perturbation approach provides a simple protocol to build sequences of “regular” problems approximating an arbitrary semialgebraic/definable problem. Applications to sequential quadratic programming methods and sum of squares relaxation are provided.
Bustamante, M., & Contreras, M. (2016). Multi-asset Black-Scholes model as a variable second class constrained dynamical system. Physica A, 457, 540–572.
Abstract: In this paper, we study the multi-asset Black-Scholes model from a structural point of view. For this, we interpret the multi-asset Black-Scholes equation as a multidimensional Schrodinger one-particle equation. The analysis of the classical Hamiltonian and Lagrangian mechanics associated with this quantum model implies that, in this system, the canonical momenta cannot always be written in terms of the velocities. This feature is a typical characteristic of the constrained systems that appear in high-energy physics. To study this model in the proper form, one must apply Dirac's method for constrained systems. The results of Dirac's analysis indicate that in the correlation parameter space of the multi-asset model, there exists a surface (called the Kummer surface Sigma(K), where the determinant of the correlation matrix is null) on which the constraint number can vary. We study in detail the cases with N = 2 and N = 3 assets. For these cases, we calculate the propagator of the multi-asset Black-Scholes equation and show that inside the Kummer surface Sigma(K) the propagator is well defined, but outside Sigma(K) the propagator diverges and the option price is not well defined. On Sigma(K) the propagator is obtained as a constrained path integral, and its form depends on the region of the Kummer surface in which the correlation parameters lie. Thus, the multi-asset Black-Scholes model is an example of a variable constrained dynamical system, and it is a new and beautiful property that had not been previously observed. (C) 2016 Elsevier B.V. All rights reserved.
Canessa, G., Gallego, J. A., Ntaimo, L., & Pagnoncelli, B. K. (2019). An algorithm for binary linear chance-constrained problems using IIS. Comput. Optim. Appl., 72(3), 589–608.
Abstract: We propose an algorithm based on infeasible irreducible subsystems to solve binary linear chance-constrained problems with a random technology matrix. By leveraging the problem structure, we are able to generate good-quality upper bounds on the optimal value early in the algorithm, and the discrete domain is used to guide the search for solutions efficiently. We apply our methodology to individual and joint binary linear chance-constrained problems, demonstrating the ability of our approach to solve those problems. Extensive numerical experiments show that, in some cases, the number of nodes explored by our algorithm is drastically reduced when compared to a commercial solver.
Caniupan, M., Bravo, L., & Hurtado, C. A. (2012). Repairing inconsistent dimensions in data warehouses. Data Knowl. Eng., 79-80, 17–39.
Abstract: A dimension in a data warehouse (DW) is a set of elements connected by a hierarchical relationship. The elements are used to view summaries of data at different levels of abstraction. In order to support an efficient processing of such summaries, a dimension is usually required to satisfy different classes of integrity constraints. In scenarios where the constraints properly capture the semantics of the DW data, but they are not satisfied by the dimension, the problem of repairing (correcting) the dimension arises. In this paper, we study the problem of repairing a dimension in the context of two main classes of integrity constraints: strictness and covering constraints. We introduce the notion of minimal repair of a dimension: a new dimension that is consistent with respect to the set of integrity constraints, which is obtained by applying a minimal number of updates to the original dimension. We study the complexity of obtaining minimal repairs, and show how they can be characterized using Datalog programs with weak constraints under the stable model semantics. (c) 2012 Elsevier B.V. All rights reserved.
Carbonnel, C., Romero, M., & Zivny, S. (2020). Point-Width and Max-CSPs. ACM Trans. Algorithms, 16(4), 28 pp.
Abstract: The complexity of (unbounded-arity) Max-CSPs under structural restrictions is poorly understood. The two most general hypergraph properties known to ensure tractability of Max-CSPs, beta-acyclicity and bounded (incidence) MIM-width, are incomparable and lead to very different algorithms. We introduce the framework of point decompositions for hypergraphs and use it to derive a new sufficient condition for the tractability of (structurally restricted) Max-CSPs, which generalises both bounded MIM-width and beta-acyclicity. On the way, we give a new characterisation of bounded MIM-width and discuss other hypergraph properties which are relevant to the complexity of Max-CSPs, such as beta-hypertree-width.
Carbonnel, C., Romero, M., & Zivny, S. (2022). The Complexity of General-Valued Constraint Satisfaction Problems Seen from the Other Side. SIAM J. Comput., 51(1), 19–69.
Abstract: The constraint satisfaction problem (CSP) is concerned with homomorphisms between two structures. For CSPs with restricted left-hand-side structures, the results of Dalmau [Automata, Languages and Programming, Springer, New York, 2007, pp. 279--290] establish the precise borderline of polynomial-time solvability (subject to complexity-theoretic assumptions) and of solvability by bounded-consistency algorithms (unconditionally) as bounded treewidth modulo homomorphic equivalence. The general-valued constraint satisfaction problem (VCSP) is a generalization of the CSP concerned with homomorphisms between two valued structures. For VCSPs with restricted left-hand-side valued structures, we establish the precise borderline of polynomial-time solvability (subject to complexity-theoretic assumptions) and of solvability by the kth level of the Sherali--Adams LP hierarchy (unconditionally). We also obtain results on related problems concerned with finding a solution and recognizing the tractable cases; the latter has an application in database theory.
Cho, A. D., Carrasco, R. A., & Ruz, G. A. (2022). Improving Prescriptive Maintenance by Incorporating Post-Prognostic Information Through Chance Constraints. IEEE Access, 10, 55924–55932.
Abstract: Maintenance is one of the critical areas in operations, in which a careful balance between preventive costs and the effect of failures is required. Thanks to increasing data availability, decision-makers can now use models to better estimate, evaluate, and achieve this balance. This work presents a maintenance scheduling model that considers prognostic information provided by a predictive system. In particular, we developed a prescriptive maintenance system based on run-to-failure signal segmentation and a Long Short-Term Memory (LSTM) neural network. The LSTM network returns the prediction of the remaining useful life when a fault is present in a component. We incorporate such predictions and their inherent errors in a decision support system based on a stochastic optimization model, introducing them via chance constraints. These constraints control the number of failed components and consider the physical distance between them to reduce sparsity and minimize the total maintenance cost. Through experimental results, we show that this approach can compute solutions for relatively large instances in reasonable computational time. Furthermore, the decision-maker can identify the correct operating point depending on the balance between costs and failure probability.
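A minimal sketch of a chance constraint enforced by scenario counting, a sample-approximation stand-in for the paper's formulation (all probabilities and sizes below are invented):

```python
import random
random.seed(1)

# Toy chance constraint Pr[#failures <= m] >= 1 - eps, checked by counting
# Monte Carlo scenarios (all numbers here are illustrative, not the paper's).
eps, m = 0.1, 1
n_comp, n_scen = 10, 1000
fail_prob = [0.1] * n_comp                       # per-component failure prob
scenarios = [[random.random() < p for p in fail_prob] for _ in range(n_scen)]

def chance_ok(maintained):
    """True if components left unmaintained fail at most m times in at least
    a (1 - eps) fraction of the sampled scenarios."""
    ok = sum(sum(f for j, f in enumerate(s) if j not in maintained) <= m
             for s in scenarios)
    return ok >= (1 - eps) * n_scen

# Maintaining nothing violates the constraint under these toy numbers;
# maintaining six components restores it.
print(chance_ok(set()), chance_ok(set(range(6))))
```

The scheduling model then picks the cheapest maintenance set among those for which the chance constraint holds; here only the feasibility check is sketched.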
Contreras, G. M. (2014). Stochastic volatility models at rho = +/- 1 as second class constrained Hamiltonian systems. Physica A, 405, 289–302.
Abstract: The stochastic volatility models used in the financial world are characterized, in the continuous-time case, by a set of two coupled stochastic differential equations for the underlying asset price S and volatility sigma. In addition, the correlation of the two Brownian motions that drive the stochastic dynamics is measured by the correlation parameter rho (-1 <= rho <= 1). This stochastic system is equivalent to the Fokker-Planck equation for the transition probability density of the random variables S and sigma. Solutions for the transition probability density of the Heston stochastic volatility model (Heston, 1993) were explored in Dragulescu and Yakovenko (2002), where fundamental quantities, such as the transition density itself, depend on rho in such a manner that they diverge in the extreme limit rho = +/- 1. The same divergent behavior appears in Hagan et al. (2002), where the probability density of the SABR model was analyzed. In an option pricing context, the propagator of the bi-dimensional Black-Scholes equation was obtained in Lemmens et al. (2008) in terms of path integrals, and in this case the propagator again diverges for the extreme values rho = +/- 1. This paper shows that these similar divergent behaviors are due to a universal property of the stochastic volatility models in the continuum: all of them are second class constrained systems in the most extreme correlated limit rho = +/- 1. In this way, the stochastic dynamics of the rho = +/- 1 cases are different from those of the -1 < rho < 1 case, and cannot be obtained as a continuous limit from the rho not equal +/- 1 regime. This conclusion is achieved by considering the Fokker-Planck equation or the bi-dimensional Black-Scholes equation as a Euclidean quantum Schrodinger equation. Then, the analysis of the underlying classical mechanics of the quantum model implies that stochastic volatility models at rho = +/- 1 correspond to a constrained system.
To study the dynamics in an appropriate form, Dirac's method for constrained systems (Dirac, 1958, 1967) must be employed, and Dirac's analysis reveals that the constraints are second class. In order to obtain the transition probability density or the option price correctly, one must evaluate the propagator as a constrained Hamiltonian path integral (Henneaux and Teitelboim, 1992), in a similar way to the high-energy gauge theory models. In fact, for all stochastic volatility models, after integrating over momentum variables, one obtains an effective Euclidean Lagrangian path integral over the volatility alone. The role of the second class constraints is to determine the underlying asset price S completely in terms of the volatility, so S plays no role in the path integral. In order to examine the effect of the constraints on the dynamics for both extreme limits, the probability density function is evaluated by using semi-classical arguments, in a manner analogous to that developed in Hagan et al. (2002) for the SABR model. (C) 2014 Elsevier B.V. All rights reserved.
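Schematically, the degeneracy at rho = +/- 1 can be seen in the diffusion matrix of the Heston model; the notation kappa, theta, xi below is the standard Heston one and is assumed here rather than taken from the abstract:

```latex
\begin{align*}
dS &= \mu S\,dt + \sqrt{v}\,S\,dW_1, &
dv &= \kappa(\theta - v)\,dt + \xi\sqrt{v}\,dW_2, &
\langle dW_1\,dW_2 \rangle &= \rho\,dt,\\
G &= \begin{pmatrix} v S^2 & \rho\,\xi v S\\ \rho\,\xi v S & \xi^2 v \end{pmatrix}, &
\det G &= (1-\rho^2)\,\xi^2 v^2 S^2 \;\xrightarrow{\;\rho\to\pm1\;}\; 0. &&
\end{align*}
```

At rho = +/- 1 the diffusion matrix G is singular, the momenta can no longer be solved for the velocities, and Dirac's second class constraints appear.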
Contreras, M., & Hojman, S. A. (2014). Option pricing, stochastic volatility, singular dynamics and constrained path integrals. Physica A, 393, 391–403.
Abstract: Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter rho which relates the two Brownian motions that drive the stochastic dynamics associated with the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21], while in Baaquie (2007, 1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for the extreme cases rho = +/- 1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for the extreme values rho = +/- 1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac's method for constrained systems (Dirac, 1958, 1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets a Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral. (C) 2013 Elsevier B.V. All rights reserved.
Contreras, M., & Pena, J. P. (2019). The quantum dark side of the optimal control theory. Physica A, 515, 450–473.
Abstract: In a recent article, a generic optimal control problem was studied from a physicist's point of view (Contreras et al., 2017). Through this optic, the Pontryagin equations are equivalent to the Hamilton equations of a classical constrained system. By quantizing this constrained system, using the right ordering of the operators, the corresponding quantum dynamics given by the Schrodinger equation is equivalent to that given by the Hamilton-Jacobi-Bellman equation of Bellman's theory. The conclusions drawn there were based on certain analogies between the equations of motion of the two theories. In this paper, a closer and more detailed examination of the quantization problem is carried out, by considering three possible quantization procedures: right quantization, left quantization, and Feynman's path integral approach. The Bellman theory turns out to be the classical limit h -> 0 of these three different quantum theories. Also, the exact relation of the phase S(x, t) of the wave function Psi(x, t) = e^((i/h) S(x,t)) of the quantum theory with Bellman's cost function J(+)(x, t) is obtained. In fact, S(x, t) satisfies a 'conjugate' form of the Hamilton-Jacobi-Bellman equation, which implies that the cost functional J(+)(x, t) must necessarily satisfy the usual Hamilton-Jacobi-Bellman equation. Thus, the Bellman theory effectively corresponds to a quantum view of the optimal control problem. (C) 2018 Elsevier B.V. All rights reserved.
Contreras, M., Pellicer, R., & Villena, M. (2017). Dynamic optimization and its relation to classical and quantum constrained systems. Physica A, 479, 12–25.
Abstract: We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use Dirac's method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop lambda-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Psi(x, t) = e^(i S(x,t)) in the quantum Schrodinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem. (C) 2017 Elsevier B.V. All rights reserved.
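Schematically, the quantization step described in the abstract reads as follows (generic scalar dynamics x' = f(x, u) with running cost L; the notation is assumed, not the paper's):

```latex
\Psi(x,t) = e^{\,i S(x,t)}
\quad\Longrightarrow\quad
-\frac{\partial S}{\partial t}
= \min_{u}\left\{ L(x,u) + \frac{\partial S}{\partial x}\, f(x,u) \right\},
```

which is the Hamilton-Jacobi-Bellman equation once S(x, t) is identified with the optimal value function.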
Cortes, M. P., Mendoza, S. N., Travisany, D., Gaete, A., Siegel, A., Cambiazo, V., et al. (2017). Analysis of Piscirickettsia salmonis Metabolism Using Genome-Scale Reconstruction, Modeling, and Testing. Front. Microbiol., 8, 15 pp.
Abstract: Piscirickettsia salmonis is an intracellular bacterial fish pathogen that causes piscirickettsiosis, a disease with a highly adverse impact on the Chilean salmon farming industry. The development of effective treatment and control methods for piscirickettsiosis is still a challenge. To meet it, the number of studies on P. salmonis has grown in the last couple of years, but many aspects of the pathogen's biology are still poorly understood. Studies on its metabolism are scarce, and only recently was a metabolic model for reference strain LF-89 developed. We present a new genome-scale model for P. salmonis LF-89 with more than twice as many genes as in the previous model, incorporating specific elements of the fish pathogen's metabolism. Comparative analysis with models of different bacterial pathogens revealed a lower flexibility in the P. salmonis metabolic network. Through constraint-based analysis, we determined essential metabolites required for its growth and showed that it can benefit from different carbon sources tested experimentally in new defined media. We also built an additional model for strain A1-15972, and together with an analysis of the P. salmonis pangenome, we identified metabolic features that differentiate the two main species clades. Both models constitute a knowledge base for P. salmonis metabolism and can be used to guide the efficient culture of the pathogen and the identification of specific drug targets.
Keywords: pathogen; genome-scale; metabolism; constraint-based; Piscirickettsia; salmonis
Donoso, R. A., Ruiz, D., Garate-Castro, C., Villegas, P., Gonzalez-Pastor, J. E., de Lorenzo, V., et al. (2021). Identification of a self-sufficient cytochrome P450 monooxygenase from Cupriavidus pinatubonensis JMP134 involved in 2-hydroxyphenylacetic acid catabolism, via homogentisate pathway. Microb. Biotechnol., 14(5), 1944–1960.
Abstract: The self-sufficient cytochrome P450 RhF and its homologues belonging to the CYP116B subfamily have attracted considerable attention due to the potential for biotechnological applications based on their ability to catalyse an array of challenging oxidative reactions without requiring additional protein partners. In this work, we showed for the first time that a CYP116B self-sufficient cytochrome P450 encoded by the ohpA gene harboured by Cupriavidus pinatubonensis JMP134, a beta-proteobacterium model for biodegradative pathways, catalyses the conversion of 2-hydroxyphenylacetic acid (2-HPA) into homogentisate. Mutational analysis and HPLC metabolite detection in strain JMP134 showed that 2-HPA is degraded through the well-known homogentisate pathway, requiring a 2-HPA 5-hydroxylase activity provided by OhpA, which was additionally supported by heterologous expression and enzyme assays. The ohpA gene belongs to an operon that also includes ohpT, coding for a substrate-binding subunit of a putative transporter, whose expression is driven by an inducible promoter responsive to 2-HPA in the presence of a predicted OhpR transcriptional regulator. OhpA homologues can be found in several genera belonging to the Actinobacteria and the alpha-, beta- and gamma-proteobacteria lineages, indicating a widespread distribution of 2-HPA catabolism via the homogentisate route. These results provide the first evidence for the natural function of members of the CYP116B self-sufficient oxygenases and represent a significant input to support novel kinetic and structural studies to develop cytochrome P450-based biocatalytic processes.
Keywords: complete genome sequence; electron transfer; gene; degradation; system; strain; P450BM3; growth; domain; hydroxylation
Espinoza, D., Goycoolea, M., & Moreno, E. (2015). The precedence constrained knapsack problem: Separating maximally violated inequalities. Discret Appl. Math., 194, 65–80.
Abstract: We consider the problem of separating maximally violated inequalities for the precedence constrained knapsack problem. Though we consider maximally violated constraints in a very general way, special emphasis is placed on induced cover inequalities and induced clique inequalities. Our contributions include a new partial characterization of maximally violated inequalities, a new safe shrinking technique, and new insights on strengthening and lifting. This work follows on the work of Boyd (1993), Park and Park (1997), van de Leensel et al. (1999) and Boland et al. (2011). Computational experiments show that our new techniques and insights can be used to significantly improve the performance of cutting plane algorithms for this problem. (C) 2015 Elsevier B.V. All rights reserved.
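For the plain knapsack case, separating a maximally violated cover inequality can be sketched by brute force (toy data; the induced cover inequalities studied in the paper additionally exploit the precedence structure, which this sketch omits):

```python
# Brute-force separation of a maximally violated cover inequality for one
# knapsack row a.x <= b at a fractional point x_frac (all data is invented).
from itertools import combinations

a = {1: 6, 2: 5, 3: 5, 4: 4}                 # item weights
b = 10                                       # knapsack capacity
x_frac = {1: 0.9, 2: 0.8, 3: 0.6, 4: 0.1}    # fractional LP solution

best, best_cover = 0.0, None
items = list(a)
for r in range(2, len(items) + 1):
    for C in combinations(items, r):
        if sum(a[i] for i in C) > b:         # C is a cover: it cannot all fit
            # cover inequality: sum_{i in C} x_i <= |C| - 1
            violation = sum(x_frac[i] for i in C) - (len(C) - 1)
            if violation > best:
                best, best_cover = violation, C

print(best_cover, round(best, 2))
```

Here the cover {1, 2} yields the most violated cut x1 + x2 <= 1; a cutting-plane method would add it and re-solve, and the paper's shrinking and lifting techniques make this separation step both faster and stronger.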