Abarzua, N., Pomareda, R., & Vega, O. (2018). Feet in orthogonal-Buekenhout-Metz unitals. Adv. Geom., 18(2), 229–236.
Abstract: Given an orthogonal-Buekenhout-Metz unital U_{α,β} embedded in PG(2, q²), and a point P ∉ U_{α,β}, we study the set τ_P(U_{α,β}) of feet of P in U_{α,β}. We characterize geometrically each of these sets as either q + 1 collinear points or as q + 1 points partitioned into two arcs. Other results about the geometry of these sets are also given.
|
Aite, M., Chevallier, M., Frioux, C., Trottier, C., Got, J., Cortes, M. P., et al. (2018). Traceability, reproducibility and wiki-exploration for “a-la-carte” reconstructions of genome-scale metabolic models. PLoS Comput. Biol., 14(5), 25 pp.
Abstract: Genome-scale metabolic models have become the tool of choice for the global analysis of microorganism metabolism, and their reconstruction has attained high standards of quality and reliability. Improvements in this area have been accompanied by the development of some major platforms and databases, and an explosion of individual bioinformatics methods. Consequently, many recent models result from “a la carte” pipelines, combining the use of platforms, individual tools and biological expertise to enhance the quality of the reconstruction. Although very useful, introducing heterogeneous tools that hardly interact with each other causes a loss of traceability and reproducibility in the reconstruction process. This represents a real obstacle, especially when considering less studied species whose metabolic reconstruction can greatly benefit from comparison to good quality models of related organisms. This work proposes an adaptable workspace, AuReMe, for sustainable reconstructions or improvements of genome-scale metabolic models involving personalized pipelines. At each step, relevant information related to the modifications brought to the model by a method is stored. This ensures that the process is reproducible and documented regardless of the combination of tools used. Additionally, the workspace establishes a way to browse metabolic models and their metadata through the automatic generation of ad-hoc local wikis dedicated to monitoring and facilitating the reconstruction process. AuReMe supports exploration and semantic queries based on RDF databases. We illustrate how this workspace allowed handling, in an integrated way, the metabolic reconstructions of non-model organisms such as an extremophile bacterium or eukaryotic algae. Among relevant applications, the latter reconstruction led to putative evolutionary insights into a metabolic pathway.
|
Alejo, L., Atkinson, J., Guzman-Fierro, V., & Roeckel, M. (2018). Effluent composition prediction of a two-stage anaerobic digestion process: machine learning and stoichiometry techniques. Environ. Sci. Pollut. Res., 25(21), 21149–21163.
Abstract: Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
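As a rough illustration of the kind of SVM regression the abstract describes (not the authors' code or data), the sketch below fits an RBF-kernel support vector regressor to hypothetical influent measurements (TAN, TS, COD, TVS) and reports a leave-one-out relative error on effluent TAN; all values, column choices and hyperparameters are assumptions.

```python
# Hypothetical illustration of SVM-based effluent prediction (not the authors' code).
# The influent/effluent values and the hyperparameters below are made up.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Toy data: influent TAN, TS, COD, TVS -> effluent TAN (illustrative numbers only)
X = np.array([[1.2, 30.5, 42.0, 25.1],
              [1.5, 33.2, 45.3, 27.8],
              [1.1, 28.9, 40.2, 24.0],
              [1.8, 35.0, 48.7, 29.5],
              [1.4, 31.7, 44.1, 26.3],
              [1.6, 34.1, 46.9, 28.2]])
y = np.array([2.3, 2.9, 2.1, 3.4, 2.7, 3.1])   # effluent TAN (illustrative)

# Scaling + RBF-kernel SVR; leave-one-out is a common choice when data are scarce
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())

relative_error = np.mean(np.abs(y_hat - y) / y) * 100
print(f"average relative error: {relative_error:.1f}%")
```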
|
Araujo, J., Ducoffe, G., Nisse, N., & Suchan, K. (2018). On interval number in cycle convexity. Discret. Math. Theor. Comput. Sci., 20(1), 35 pp.
Abstract: Recently, Araujo et al. [Manuscript in preparation, 2017] introduced the notion of Cycle Convexity of graphs. In their seminal work, they studied the graph convexity parameter called hull number for this new graph convexity they proposed, and they presented some of its applications in Knot theory. Roughly, the tunnel number of a knot embedded in a plane is upper bounded by the hull number of a corresponding planar 4-regular graph in cycle convexity. In this paper, we go further in the study of this new graph convexity and we study the interval number of a graph in cycle convexity. This parameter is, alongside the hull number, one of the most studied parameters in the literature about graph convexities. Precisely, given a graph G, its interval number in cycle convexity, denoted by in_cc(G), is the minimum cardinality of a set S ⊆ V(G) such that every vertex w ∈ V(G) \ S has two distinct neighbors u, v ∈ S such that u and v lie in the same connected component of G[S], i.e. the subgraph of G induced by the vertices in S. In this work, first we provide bounds on in_cc(G) and its relations to other graph convexity parameters, and explore its behaviour on grids. Then, we present some hardness results by showing that deciding whether in_cc(G) ≤ k is NP-complete, even if G is a split graph or a bounded-degree planar graph, and that the problem is W[2]-hard in bipartite graphs when k is the parameter. As a consequence, we obtain that in_cc(G) cannot be approximated up to a constant factor in the classes of split graphs and bipartite graphs (unless P = NP). On the positive side, we present polynomial-time algorithms to compute in_cc(G) for outerplanar graphs, cobipartite graphs and interval graphs. We also present fixed-parameter tractable (FPT) algorithms to compute it for (q, q − 4)-graphs when q is the parameter and for general graphs G when parameterized either by the treewidth or the neighborhood diversity of G. Some of our hardness results and positive results are not known to hold for related graph convexities and domination problems. We hope that the design of our new reductions and polynomial-time algorithms can be helpful in order to advance in the study of related graph problems.
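Since the abstract gives the full definition of in_cc(G), a brute-force sketch may help fix ideas. The function below is my own illustration, exponential in |V(G)| and only meant for very small graphs; it simply checks each candidate set S against the definition.

```python
# Brute-force illustration of in_cc(G), following the definition in the abstract:
# the smallest S ⊆ V(G) such that every w outside S has two distinct neighbors in S
# lying in the same connected component of G[S]. Exponential in |V(G)|; small graphs only.
from itertools import combinations

def components(vertices, adj):
    """Map each vertex of the induced subgraph G[vertices] to a component label."""
    comp, seen = {}, set()
    for s in vertices:
        if s in seen:
            continue
        stack = [s]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp[v] = s
            stack.extend(u for u in adj[v] if u in vertices)
    return comp

def is_interval_set(S, adj, V):
    comp = components(S, adj)
    for w in V - S:
        by_comp = {}
        for u in adj[w] & S:
            by_comp.setdefault(comp[u], set()).add(u)
        if not any(len(group) >= 2 for group in by_comp.values()):
            return False
    return True

def in_cc(adj):
    V = set(adj)
    for k in range(1, len(V) + 1):
        if any(is_interval_set(set(S), adj, V) for S in combinations(V, k)):
            return k
    return len(V)

# The 4-cycle, with adjacency given as sets
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(in_cc(c4))  # prints 3 for this small example
```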
|
Araya-Letelier, G., Concha-Riedel, J., Antico, F. C., Valdes, C., & Caceres, G. (2018). Influence of natural fiber dosage and length on adobe mixes damage-mechanical behavior. Constr. Build. Mater., 174, 645–655.
Abstract: This study addresses the use of a natural fiber (pig hair), a massive food-industry waste, as reinforcement in adobe mixes (a specific type of earthen material). The relevance of this work resides in the fact that earthen materials are still widely used worldwide because of their low cost, availability, and low environmental impact. Results show that adobe mixes' mechanical-damage behavior is sensitive to both (i) fiber dosage and (ii) fiber length. Impact strength and flexural toughness are increased, whereas the width of distributed shrinkage cracks is reduced. Average values of compressive and flexural strengths are reduced as fiber dosage and length increase, as a result of porosity generated by fiber clustering. Based on the results of this work, a dosage of 0.5% by weight of dry soil using 7 mm fibers is optimal to improve crack control, flexural toughness and impact strength without statistically affecting flexural and compressive strengths. (C) 2018 Elsevier Ltd. All rights reserved.
|
Arevalo, I., Hernandez, R., Martin, M. J., & Vukotic, D. (2018). On weighted compositions preserving the Caratheodory class. Monatsh. Math., 187(3), 459–477.
Abstract: We characterize in various ways the weighted composition transformations which preserve the class P of normalized analytic functions in the disk with positive real part. We analyze the meaning of the criteria obtained for various special cases of symbols and identify the fixed points of such transformations.
|
Arriagada, R., Aldunce, P., Blanco, G., Ibarra, C., Moraga, P., Nahuelhual, L., et al. (2018). Climate change governance in the anthropocene: emergence of polycentrism in Chile. Elementa-Sci. Anthrop., 6, 13 pp.
Abstract: Multilateral efforts are essential to an effective response to climate change, but individual nations define climate action policy by translating local and global objectives into adaptation and mitigation actions. We propose a conceptual framework to explore opportunities for polycentric climate governance, understanding polycentricity as a property that encompasses the potential for coordinating multiple centers of semiautonomous decision-making. We assert that polycentrism engages a diverse array of public and private actors for a more effective approach to reducing the threat of climate change. In this way, polycentrism may provide an appropriate strategy for addressing the many challenges of climate governance in the Anthropocene. We review two Chilean case studies: Chile's Nationally Determined Contribution on Climate Change and the Chilean National Climate Change Action Plan. Our examination demonstrates that Chile has included a diversity of actors and directed significant financial resources to both processes. The central government coordinated both of these processes, showing the key role of interventions at higher jurisdictional levels in orienting institutional change to improve strategic planning and better address climate change. Both processes also provide some evidence of knowledge co-production, while at the same time remaining primarily driven by state agencies and directed by technical experts. Efforts to overcome governance weaknesses should focus on further strengthening existing practices for climate change responses, establishing new institutions, and promoting decision-making that incorporates diverse social actors and multiple levels of governance. In particular, stronger inclusion of local level actors provides an opportunity to enhance polycentric modes of governance and improve climate change responses. Fully capitalizing on this opportunity requires establishing durable communication channels between different levels of governance.
|
Averbakh, I., & Pereira, J. (2018). Lateness Minimization in Pairwise Connectivity Restoration Problems. INFORMS J. Comput., 30(3), 522–538.
Abstract: A network is given whose edges need to be constructed (or restored after a disaster). The lengths of edges represent the required construction/restoration times given available resources, and one unit of length of the network can be constructed per unit of time. All points of the network are accessible for construction at any time. For each pair of vertices, a due date is given. It is required to find a construction schedule that minimizes the maximum lateness of all pairs of vertices, where the lateness of a pair is the difference between the time when the pair becomes connected by an already constructed path and the pair's due date. We introduce the problem and analyze its structural properties, present a mixed-integer linear programming formulation, develop a number of lower bounds that are integrated in a branch-and-bound algorithm, and discuss results of computational experiments both for instances based on randomly generated networks and for instances based on 2010 Chilean earthquake data.
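To make the objective concrete, here is a small sketch of mine, under the simplifying assumption that edges are rebuilt one after another in a given order at one unit of length per unit of time (one simple family of schedules); it evaluates the maximum lateness over all vertex pairs. The triangle instance and due dates are made up.

```python
# Sketch of the objective in the pairwise connectivity restoration problem, assuming
# a sequential schedule: edges are rebuilt one at a time in the given order, at one
# unit of length per unit of time. Instance data below are invented for illustration.

def max_lateness(edge_order, lengths, due_dates):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    connect_time = {}                      # pair -> time at which it becomes connected
    groups = {}                            # component root -> set of vertices
    t = 0.0
    for (u, v) in edge_order:
        t += lengths[(u, v)]               # edge (u, v) is completed at time t
        ru, rv = find(u), find(v)
        gu = groups.setdefault(ru, {u})
        gv = groups.setdefault(rv, {v})
        if ru != rv:                       # merging two components connects new pairs
            for a in gu:
                for b in gv:
                    connect_time.setdefault((min(a, b), max(a, b)), t)
            parent[rv] = ru
            gu |= gv
            groups.pop(rv)
    return max(connect_time[p] - due_dates[p] for p in due_dates)

# Toy instance: a triangle a-b-c with edge lengths and pairwise due dates
lengths = {("a", "b"): 2.0, ("b", "c"): 1.0, ("a", "c"): 3.0}
due = {("a", "b"): 1.0, ("a", "c"): 4.0, ("b", "c"): 3.0}
print(max_lateness([("b", "c"), ("a", "b"), ("a", "c")], lengths, due))  # 2.0
```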
|
Barros, M., Galea, M., Leiva, V., & Santos-Neto, M. (2018). Generalized Tobit models: diagnostics and application in econometrics. J. Appl. Stat., 45(1), 145–167.
Abstract: The standard Tobit model is constructed under the assumption of a normal distribution and has been widely applied in econometrics. Atypical/extreme data have a harmful effect on the maximum likelihood estimates of the standard Tobit model parameters. We therefore need diagnostic tools to evaluate the effect of extreme data and, if such data are detected, a Tobit model that is robust to them. The family of elliptically contoured distributions has the Laplace, logistic, normal and Student-t cases as some of its members. This family has been largely used for providing generalizations of models based on the normal distribution, with excellent practical results. In particular, because the Student-t distribution has an additional parameter, we can adjust the kurtosis of the data, providing estimates that are robust against extreme data. We propose a methodology based on a generalization of the standard Tobit model with errors following elliptical distributions. Diagnostics in the Tobit model with elliptical errors are developed. We derive residuals and global/local influence methods considering several perturbation schemes. This is important because different diagnostic methods can detect different atypical data. We implement the proposed methodology in an R package. We illustrate the methodology with real-world econometric data by using the R package, which shows its potential applications. In the application presented, the Tobit model based on the Student-t distribution with a small number of degrees of freedom performs excellently, reducing the influence of extreme cases on the maximum likelihood estimates. It provides new empirical evidence on the capabilities of the Student-t distribution for accommodating atypical data.
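The paper's methodology is implemented in an R package; purely as an illustration of the underlying idea (a censored regression whose errors follow a Student-t rather than a normal distribution), a minimal Python sketch of a left-censored log-likelihood is given below. The degrees of freedom, the censoring point at zero and the synthetic data are my assumptions, not the authors' choices.

```python
# Rough sketch (not the authors' R package) of a Tobit log-likelihood with
# Student-t errors, left-censored at 0; nu, the censoring point and the data are assumed.
import numpy as np
from scipy import stats, optimize

def tobit_t_negloglik(params, y, X, nu=4.0, censor=0.0):
    """Negative log-likelihood of a left-censored regression with t(nu) errors."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keep the scale parameter positive
    mu = X @ beta
    z = (y - mu) / sigma
    uncensored = y > censor
    ll = np.sum(stats.t.logpdf(z[uncensored], df=nu) - log_sigma)          # observed part
    ll += np.sum(stats.t.logcdf((censor - mu[~uncensored]) / sigma, df=nu))  # censored part
    return -ll

# Tiny synthetic example (illustrative only)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y_star = X @ np.array([0.5, 1.0]) + 0.8 * rng.standard_t(df=4, size=200)
y = np.maximum(y_star, 0.0)                    # left-censoring at zero

res = optimize.minimize(tobit_t_negloglik, x0=np.zeros(3), args=(y, X))
print(res.x[:2], np.exp(res.x[2]))             # slope estimates and scale
```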
|
Beghelli, A., & Prieto, P. (2018). Creativity under pressure: Using distant semantic fields for fast activation of divergent thinking in engineering students. In Proceedings of the DESIGN 2018 15th International Design Conference (Vol. 5, pp. 2391–2402).
|
Beltran, J. F., Nunez, E., Nunez, F., Silva, I., Bravo, T., & Moffat, R. (2018). Static response of asymmetrically damaged metallic strands: Experimental and numerical approach. Constr. Build. Mater., 192, 538–554.
Abstract: In this study, the effect of broken wires (damage) asymmetrically distributed on metallic strand surfaces on the static response of the strands is assessed. To this end, a general mechanical model for multilayered strands is presented, in which damaged strands are treated as a 1D nonlinear beam under uncoupled biaxial bending and axial load (NLBM). The NLBM is validated by comparisons with the results obtained from an experimental program especially designed for studying the effect of surface damage distribution on strand response and with 3D nonlinear finite element simulations. Analyses are carried out on two strand constructions, 1 x 7 and 1 x 19, in which the damage levels and strand diameters vary from 5% to 40% and from 3.5 mm to 22.2 mm, respectively. Results indicate that the NLBM accurately predicts the static response (residual strength, stiffness, axial strain field, and deformed configuration) of the asymmetrically damaged strands, achieving good computational efficiency and numerical robustness. (C) 2018 Elsevier Ltd. All rights reserved.
|
Bergen, M., & Munoz, F. D. (2018). Quantifying the effects of uncertain climate and environmental policies on investments and carbon emissions: A case study of Chile. Energy Econ., 75, 261–273.
Abstract: In this article we quantify the effect of uncertainty of climate and environmental policies on transmission and generation investments, as well as on CO2 emissions in Chile. We use a two-stage stochastic planning model with recourse or corrective investment options to find optimal portfolios of infrastructure both under perfect information and uncertainty. Under a series of assumptions, this model is equivalent to the equilibrium of a much more complicated bi-level market model, where a transmission planner chooses investments first and generation firms invest afterwards. We find that optimal investment strategies present important differences depending on the policy scenario. By changing our assumption of how agents will react to this uncertainty we compute bounds on the cost that this uncertainty imposes on the system, which we estimate ranges between 3.2% and 5.7% of the minimum expected system cost of $57.6B depending on whether agents will consider or not uncertainty when choosing investments. We also find that, if agents choose investments using a stochastic planning model, uncertain climate policies can result in nearly 18% more CO2 emissions than the equilibrium levels observed under perfect information. Our results highlight the importance of credible and stable long-term regulations for investors in the electric power industry if the goal is to achieve climate and environmental targets in the most cost-effective manner and to minimize the risk of asset stranding. (C) 2018 Elsevier B.V. All rights reserved.
|
Bolte, J., Hochart, A., & Pauwels, E. (2018). Qualification Conditions In Semialgebraic Programming. SIAM J. Optim., 28(2), 1867–1891.
Abstract: For an arbitrary finite family of semialgebraic/definable functions, we consider the corresponding inequality constraint set and we study qualification conditions for perturbations of this set. In particular we prove that all positive diagonal perturbations, save perhaps a finite number of them, ensure that any point within the feasible set satisfies the Mangasarian-Fromovitz constraint qualification. Using the Milnor-Thom theorem, we provide a bound for the number of singular perturbations when the constraints are polynomial functions. Examples show that the order of magnitude of our exponential bound is relevant. Our perturbation approach provides a simple protocol to build sequences of “regular” problems approximating an arbitrary semialgebraic/definable problem. Applications to sequential quadratic programming methods and sum of squares relaxation are provided.
|
Borquez-Paredes, D., Beghelli, A., Leiva, A., & Murrugarra, R. (2018). Does fragmentation avoidance improve the performance of dynamic spectrum allocation in elastic optical networks? Photonic Netw. Commun., 35(3), 287–299.
Abstract: Most spectrum allocation algorithms in elastic optical networks apply a greedy approach: a new connection is allocated as long as there are enough spectrum slots to accommodate it. Recently, a different approach was proposed. Named Deadlock-Avoidance (DA), it only establishes a new connection if the portion of spectrum left after allocating it is zero (full-link utilization) or is big enough to accommodate future requests. Otherwise, the connection request is blocked as a way to avoid fragmentation. The performance of DA has previously been evaluated only in a single-link scenario, where it is not affected by the slot continuity constraint. In this paper, we evaluate for the first time the blocking performance and fragmentation level of DA in a fully dynamic network scenario with different bit rates and numbers of slots, for a single link, a 4-node bus and a mesh topology. The performance was evaluated by simulation, and a lower bound was also derived using a continuous Markov chain model. Results are obtained for DA and three greedy algorithms: First Fit, Exact Fit and First-Last Fit. Results show that DA significantly decreases fragmentation and thus exhibits much lower blocking due to fragmentation than the greedy algorithms. However, this decrease is offset by a new type of blocking due to the selective acceptance of connections. As a result, the extra computational complexity of DA is not compensated by a gain in performance.
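As a point of reference for the greedy baselines named in the abstract, the sketch below implements First Fit on a single link's slot array (slot count and request sizes are invented); the Deadlock-Avoidance rule would additionally reject a request whenever the free gap it leaves is non-zero yet too small for future requests.

```python
# Minimal single-link illustration of First Fit spectrum allocation (one of the
# greedy baselines in the abstract). Slot count and requests are made up.
def first_fit(slots, demand):
    """Return the first index where `demand` contiguous free slots start, or None."""
    run_start, run_len = None, 0
    for i, used in enumerate(slots):
        if not used:
            run_start = i if run_len == 0 else run_start
            run_len += 1
            if run_len == demand:
                return run_start
        else:
            run_len = 0
    return None

link = [False] * 8                     # 8 spectrum slots, all free
for demand in (3, 2, 4, 1):            # incoming connection requests (in slots)
    start = first_fit(link, demand)
    if start is None:
        print(f"request of {demand} slots blocked")
    else:
        for i in range(start, start + demand):
            link[i] = True
        print(f"request of {demand} slots allocated at slot {start}")
```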
|
Bravo, M., & Cominetti, R. (2018). Sharp convergence rates for averaged nonexpansive maps. Isr. J. Math., 227(1), 163–188.
Abstract: We establish sharp estimates for the convergence rate of the Krasnosel'skii-Mann fixed point iteration in general normed spaces, and we use them to show that the optimal constant of asymptotic regularity is exactly 1/√π. To this end we consider a nested family of optimal transport problems that provide a recursive bound for the distance between the iterates. We show that these bounds are tight by building a nonexpansive map T: [0, 1]^N → [0, 1]^N that attains them with equality, settling a conjecture by Baillon and Bruck. The recursive bounds are in turn reinterpreted as absorption probabilities for an underlying Markov chain, which is used to establish the tightness of the constant 1/√π.
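A small numerical sketch (my own, not the paper's construction) of the Krasnosel'skii-Mann iteration x_{n+1} = (1 - lam) x_n + lam T(x_n) and of the fixed-point residual ||x_n - T(x_n)|| whose worst-case decay constant the paper pins down; the nonexpansive map T used here is an arbitrary illustrative choice.

```python
# Numerical sketch of the Krasnosel'skii-Mann iteration x_{n+1} = (1 - lam)*x_n + lam*T(x_n)
# and of the fixed-point residual ||x_n - T(x_n)||. The map T below is an arbitrary
# nonexpansive example (an isometry of the plane), not the map built in the paper.
import numpy as np

def T(x):
    c, s = np.cos(1.0), np.sin(1.0)
    rotation = np.array([[c, -s], [s, c]])
    return rotation @ x + np.array([1.0, 0.0])   # rotation + translation: nonexpansive

lam = 0.5                                        # averaging parameter in (0, 1)
x = np.zeros(2)
for n in range(1, 10001):
    x = (1 - lam) * x + lam * T(x)
    if n in (10, 100, 1000, 10000):
        residual = np.linalg.norm(x - T(x))
        # The paper's sharp worst-case rate decays like 1/sqrt(n), with the constant
        # 1/sqrt(pi) after normalization; this particular affine map converges faster,
        # but the residual is the quantity being monitored either way.
        print(n, residual)
```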
|
Bustamante, M., Rienzo, A., Osorio, R., Lefranc, E., Duarte, M., Herrera, E., et al. (2018). Algorithm for Processing Mammography: Detection of Microcalcifications. IEEE Latin Am. Trans., 16(9), 2460–2466.
Abstract: A new algorithm, based on the Creme Filter, is proposed for breast cancer detection. The images obtained show microcalcifications with better contrast, allowing a better prognosis. The algorithm has only one free parameter, which permits observing texture as the parameter is changed.
|
Caceres, E., Carrasco, M., & Rios, S. (2018). Evaluation of an eye-pointer interaction device for human-computer interaction. Heliyon, 4(3), e00574.
|
Canals, C., Goles, E., Mascareno, A., Rica, S., & Ruz, G. A. (2018). School Choice in a Market Environment: Individual versus Social Expectations. Complexity, 3793095, 11 pp.
Abstract: School choice is a key factor connecting personal preferences (beliefs, desires, and needs) and school offer in education markets. While it is assumed that preferences are highly individualistic forms of expectations by means of which parents select schools satisfying their internal moral standards, this paper argues that a better matching between parental preferences and school offer is achieved when individuals take into account their relevant network vicinity, thereby constructing social expectations regarding school choice. We develop two related models (individual expectations and social expectations) and prove that they are driven by a Lyapunov function, showing that both models converge to fixed points. We also assess their performance by conducting computational simulations. While the individual expectations model shows a probabilistic transition and a critical threshold below which preferences concentrate in a few schools and a significant number of students is left unattended by the school offer, the social expectations model presents smooth dynamics in which most of the schools have students all the time and no students are left out. We discuss our results considering key topics of the empirical research on school choice in educational market environments and conclude that social expectations contribute to improved information and lead to a better matching between school offer and parental preferences.
|
Canessa, E., Chaigneau, S., & Barra, C. (2018). Developing and calibrating an ABM of the property listing task. In Proceedings of the 32nd European Council for Modelling and Simulation, ECMS 2018 (Vol. 2018, pp. 13–19).
|
Caroca, R., Concha, P., Fierro, O., Rodriguez, E., & Salgado-Rebolledo, P. (2018). Generalized Chern-Simons higher-spin gravity theories in three dimensions. Nucl. Phys. B, 934, 240–264.
Abstract: The coupling of spin-3 gauge fields to three-dimensional Maxwell and AdS-Lorentz gravity theories is presented. After showing how the usual spin-3 extensions of the AdS and the Poincaré algebras in three dimensions can be obtained as expansions of the sl(3, R) algebra, the procedure is generalized so as to define new higher-spin symmetries. Remarkably, the spin-3 extension of the Maxwell symmetry allows one to introduce a novel gravity model coupled to higher-spin topological matter with vanishing cosmological constant, which in turn corresponds to a flat limit of the AdS-Lorentz case. We extend our results to define two different families of higher-spin extensions of three-dimensional Einstein gravity. (C) 2018 The Authors. Published by Elsevier B.V.
|