Arbelaez, H., Bravo, V., Hernandez, R., Sierra, W., & Venegas, O. (2020). A new approach for the univalence of certain integral of harmonic mappings. Indag. Math. New Ser., 31(4), 525–535.
Abstract: The principal goal of this paper is to extend the classical problem of finding the values of $\alpha \in \mathbb{C}$ for which either $\hat{f}_\alpha(z) = \int_0^z (f(\zeta)/\zeta)^\alpha \, d\zeta$ or $f_\alpha(z) = \int_0^z (f'(\zeta))^\alpha \, d\zeta$ is univalent, whenever f belongs to some subclasses of univalent mappings in D, to the case of harmonic mappings, by considering the shear construction introduced by Clunie and Sheil-Small in [4]. (C) 2020 Royal Dutch Mathematical Society (KWG). Published by Elsevier B.V. All rights reserved.

Argiz, L., Reyes, C., Belmonte, M., Franchi, O., Campo, R., Fra-Vazquez, A., et al. (2020). Assessment of a fast method to predict the biochemical methane potential based on biodegradable COD obtained by fractionation respirometric tests. J. Environ. Manage., 269, 9 pp.
Abstract: The biochemical methane potential (BMP) test is the most common analytical technique to predict the performance of anaerobic digesters. However, this assay is time-consuming (from 20 to over 100 days) and consequently impractical when a quick result is needed. Several methods are available for faster BMP prediction but, unfortunately, there is still no clear alternative. Current aerobic tests underestimate the BMP of substrates since they only detect the easily biodegradable COD. In this context, the potential of COD fractionation respirometric assays, which allow the determination of the particulate slowly biodegradable fraction, was evaluated here as an alternative for early prediction of the BMP of substrates. Seven waste streams of different origin were tested and the anaerobically biodegraded organic matter (CODmet) was compared with the different COD fractions. When considering adapted microorganisms, the appropriate operational conditions and the required biodegradation time, the differences between the CODmet, determined through BMP tests, and the biodegradable COD (CODb) obtained by respirometry, were not significant (CODmet (57.8026 ± 21.2875) and CODb (55.6491 ± 21.3417), t(5) = 0.189, p = 0.853). Therefore, the results suggest that the BMP of a substrate might be predicted early from its CODb in only a few hours. This methodology was validated by the performance of an interlaboratory study considering four additional substrates.
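
The reported comparison (t(5) = 0.189, p = 0.853) is a paired t-test on per-substrate CODmet vs. CODb values. A minimal stdlib sketch of that computation, with hypothetical data (the paper's per-substrate values are not reproduced here):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(cod_met, cod_b):
    """Paired t statistic and degrees of freedom for per-substrate
    differences CODmet - CODb."""
    diffs = [m - b for m, b in zip(cod_met, cod_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# hypothetical biodegradable fractions (% of total COD) for six substrates
cod_met = [62.1, 35.4, 80.2, 55.0, 48.7, 65.3]
cod_b = [60.5, 33.9, 78.8, 54.2, 47.1, 59.4]
t_stat, df = paired_t(cod_met, cod_b)  # df = 5, matching the reported t(5)
```

When |t| is small relative to the t(5) critical value, the null hypothesis of equal means is not rejected, mirroring the paper's conclusion that CODb is a usable early proxy for CODmet.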

Aylwin, R., Jerez-Hanckes, C., Schwab, C., & Zech, J. (2020). Domain Uncertainty Quantification in Computational Electromagnetics. SIAM/ASA J. Uncertain. Quantif., 8(1), 301–341.
Abstract: We study the numerical approximation of time-harmonic, electromagnetic fields inside a lossy cavity of uncertain geometry. Key assumptions are a possibly high-dimensional parametrization of the uncertain geometry along with a suitable transformation to a fixed, nominal domain. This uncertainty parametrization results in families of countably parametric, Maxwell-like cavity problems that are posed in a single domain, with inhomogeneous coefficients that possess finite, possibly low spatial regularity, but exhibit holomorphic parametric dependence in the differential operator. Our computational scheme is composed of a sparse grid interpolation in the high-dimensional parameter domain and an H(curl)-conforming edge element discretization of the parametric problem in the nominal domain. As a stepping stone in the analysis, we derive a novel Strang-type lemma for Maxwell-like problems in the nominal domain, which is of independent interest. Moreover, we accommodate arbitrarily small Sobolev regularity of the electric field and also cover uncertain isotropic constitutive or material laws. The shape holomorphy and edge-element consistency error analysis for the nominal problem are shown to imply convergence rates for multilevel Monte Carlo and for quasi-Monte Carlo integration, as well as sparse grid approximations, in uncertainty quantification for computational electromagnetics. They also imply expression rate estimates for deep ReLU networks of shape-to-solution maps in this setting. Finally, our computational experiments confirm the presented theoretical results.

Aylwin, R., Silva-Oelker, G., Jerez-Hanckes, C., & Fay, P. (2020). Optimization methods for achieving high diffraction efficiency with perfect electric conducting gratings. J. Opt. Soc. Am. A-Opt. Image Sci. Vis., 37(8), 1316–1326.
Abstract: This work presents the implementation, numerical examples, and experimental convergence study of first- and second-order optimization methods applied to one-dimensional periodic gratings. Through boundary integral equations and shape derivatives, the profile of a grating is optimized such that it maximizes the diffraction efficiency for given diffraction modes for transverse electric polarization. We provide a thorough comparison of three different optimization methods: a first-order method (gradient descent); a second-order approach based on a Newton iteration, where the usual Newton step is replaced by taking the absolute value of the eigenvalues given by the spectral decomposition of the Hessian matrix to deal with non-convexity; and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, a quasi-Newton method. Numerical examples are provided to validate our claims. Moreover, two grating profiles are designed for high efficiency in the Littrow configuration and then compared to a high-efficiency commercial grating. Conclusions and recommendations, derived from the numerical experiments, are provided as well as future research avenues. (C) 2020 Optical Society of America
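
The eigenvalue-modified Newton step described above can be sketched in a few lines: diagonalize the Hessian, replace its eigenvalues by their absolute values, and solve against the resulting positive-definite matrix. A minimal NumPy illustration on a toy indefinite Hessian (not the paper's grating objective):

```python
import numpy as np

def modified_newton_step(grad, hess):
    """Newton step computed against |H|: the spectral decomposition
    H = V diag(w) V^T has its eigenvalues replaced by |w|, so the
    resulting matrix is positive definite even when H is indefinite."""
    w, V = np.linalg.eigh(hess)
    hess_abs = V @ np.diag(np.abs(w)) @ V.T
    return np.linalg.solve(hess_abs, -grad)

# toy nonconvex objective f(x, y) = x^2 - y^2 at the point (1, 1):
# the Hessian diag(2, -2) is indefinite, so a plain Newton step is unreliable
g = np.array([2.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, -2.0]])
step = modified_newton_step(g, H)  # uses |H| = diag(2, 2)
```

The resulting step (-1, 1) decreases f, whereas the unmodified Newton step would move toward the saddle point.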

Barrera, J., & Lagos, G. (2020). Limit distributions of the upper order statistics for the Levy-frailty Marshall-Olkin distribution. Extremes, to appear, 26 pp.
Abstract: The Marshall-Olkin (MO) distribution is considered a key model in reliability theory and in risk analysis, where it is used to model the lifetimes of dependent components or entities of a system and dependency is induced by “shocks” that hit one or more components at a time. Of particular interest is the Levy-frailty subfamily of the Marshall-Olkin (LFMO) distribution, since it has few parameters and because the nontrivial dependency structure is driven by an underlying Levy subordinator process. The main contribution of this work is that we derive the precise asymptotic behavior of the upper order statistics of the LFMO distribution. More specifically, we consider a sequence of n univariate random variables jointly distributed as a multivariate LFMO distribution and analyze the order statistics of the sequence as n grows. Our main result states that if the underlying Levy subordinator is in the normal domain of attraction of a stable distribution with index of stability α, then, after certain logarithmic centering and scaling, the upper order statistics converge in distribution to a stable distribution if α > 1, or to a simple transformation of it if α ≤ 1. Our result can also give easily computable confidence intervals for the last failure times, provided that a proper convergence analysis is carried out first.

Barrera, J., Carrasco, R. A., & Moreno, E. (2020). Real-time fleet management decision support system with security constraints. TOP, 28(3), 728–748.
Abstract: Intelligent transportation, and in particular fleet management, has been a forefront concern for a plethora of industries. This statement is especially true for the production of commodities, where transportation represents a central element for operational continuity. Additionally, in many industries, and in particular those with hazardous environments, fleet control must satisfy a wide range of security restrictions to ensure that risks are kept at bay and accidents are minimized. Furthermore, in these environments, any decision support tool must cope with noisy and incomplete data and give recommendations every few minutes. In this work, a fast and efficient decision support tool is presented to help fleet managers oversee and control ore trucks in a mining setting. The main objective of this system is to help managers avoid interactions between ore trucks and personnel buses, one of the most critical security constraints in our case study, keeping a minimum security distance between the two at all times. Furthermore, additional algorithms are developed and implemented so that this approach can work with real-life noisy GPS data. Through the use of historical data, the performance of this decision support system is studied, validating that it works under the real-life conditions presented by the company. The experimental results show that the proposed approach improved truck and road utilization significantly while allowing the fleet manager to enforce the security distance required by their procedures.
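
The core security constraint — keeping a minimum distance between an ore truck and a personnel bus at all times — can be sketched as a simple check over GPS fixes. The threshold and coordinates below are hypothetical, not the company's values:

```python
from math import asin, cos, radians, sin, sqrt

SECURITY_DISTANCE_M = 500.0  # hypothetical minimum-separation threshold

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    earth_radius_m = 6_371_000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_m * asin(sqrt(a))

def violates_security(truck_fix, bus_fix):
    """True when truck and bus are closer than the security distance."""
    return haversine_m(*truck_fix, *bus_fix) < SECURITY_DISTANCE_M

truck = (-24.2600, -69.0700)     # hypothetical mine-road coordinates
bus_near = (-24.2601, -69.0701)  # ~15 m away: violation
bus_far = (-24.3000, -69.1000)   # several km away: no violation
```

A real system would additionally smooth noisy fixes and project positions forward along the road network before checking the constraint.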

Baselli, G., Contreras, F., Lillo, M., Marin, M., & Carrasco, R. A. (2020). Optimal decisions for salvage logging after wildfires. Omega-Int. J. Manage. Sci., 96, 9 pp.
Abstract: Strategic, tactical, and operational harvesting plans for the forestry and logging industry have been widely studied for more than 60 years. Many different settings and specific constraints due to legal, environmental, and operational requirements have been modeled, improving the performance of the harvesting process significantly. During the summer of 2017, Chile suffered the most massive wildfires in its history, affecting almost half a million hectares, of which nearly half were forests owned by medium and small forestry companies. Some of the stands were burned by intense, fast-spreading crown fires that consumed the foliage and the outer layer of the bark but left standing dead trees that could be salvage harvested before insect and decay processes rendered them unusable for commercial purposes. Unlike the typical operational programming models studied in the past, in this setting companies can make insurance claims on part or all of the burnt forest, whereas the rest of the forest needs to be harvested before it loses its value. This problem is known as the salvage logging problem. The issue also has an important social component when considering medium and small forestry and logging companies: most of their personnel come from local communities, which have already been affected by the fires. Harvesting part of the remaining forest can allow them to keep their jobs longer and, hopefully, leave the company in a better financial situation if the harvesting areas are correctly selected. In this work, we present a novel mixed-integer optimization-based approach to support salvage logging decisions, which helps in the configuration of an operational-level harvesting and workforce assignment plan. Our model takes into account the payment from an insurance claim as well as future income from harvesting the remaining trees. The model also computes an optimal assignment of personnel to the different activities required. The objective is to improve the cash position of the company by the end of the harvest and ensure that the company is paying all its liabilities and maintaining personnel. We show how our model performs compared to the current decisions made by medium and small-sized forestry companies, and we study the specific case of a small forestry company located in Cauquenes, Chile, which used our model to decide its course of action. (C) 2019 Elsevier Ltd. All rights reserved.
Keywords: Salvage logging; Forest harvesting; Wildfires; Workforce allocation
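
At its core, choosing which burnt stands to salvage under a limited workforce resembles a 0/1 knapsack selection. The sketch below is a deliberately simplified stand-selection toy with hypothetical data; the paper's model is a far richer mixed-integer program that also covers insurance claims and personnel assignment:

```python
def plan_salvage(stands, crew_hours):
    """0/1 knapsack over burnt stands: maximize net revenue subject to
    the available crew-hours. `stands` is a list of
    (name, hours_needed, net_revenue) tuples."""
    best = {0: (0.0, [])}  # hours used -> (revenue, chosen stand names)
    for name, hours, revenue in stands:
        # iterate in decreasing hours so each stand is used at most once
        for used, (rev, chosen) in sorted(best.items(), reverse=True):
            new_used = used + hours
            if new_used <= crew_hours and rev + revenue > best.get(new_used, (-1.0, None))[0]:
                best[new_used] = (rev + revenue, chosen + [name])
    return max(best.values())

# hypothetical stands: (name, crew-hours needed, net revenue in M$)
stands = [("A", 120, 18.0), ("B", 200, 25.0), ("C", 90, 14.0)]
revenue, chosen = plan_salvage(stands, crew_hours=250)
```

With 250 crew-hours available, the toy plan salvages stands A and C (210 hours, revenue 32.0) rather than the single higher-revenue stand B.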

Becker, F., Montealegre, P., Rapaport, I., & Todinca, I. (2020). The Impact Of Locality In The Broadcast Congested Clique Model. SIAM J. Discret. Math., 34(1), 682–700.
Abstract: The broadcast congested clique model (BCLIQUE) is a message-passing model of distributed computation where n nodes communicate with each other in synchronous rounds. First, in this paper we prove that there is a one-round, deterministic algorithm that reconstructs the input graph G if the graph is d-degenerate, and rejects otherwise, using bandwidth b = O(d · log n). Then, we introduce a new parameter to the model. We study the situation where the nodes, initially, instead of knowing their immediate neighbors, know their neighborhood up to a fixed radius r. In this new framework, denoted BCLIQUE[r], we study the problem of detecting, in G, an induced cycle of length at most k (CYCLE≤k) and the problem of detecting an induced cycle of length at least k+1 (CYCLE>k). We give upper and lower bounds. We show that if each node is allowed to see up to distance r = ⌊k/2⌋ + 1, then a polylogarithmic bandwidth is sufficient for solving CYCLE>k with only two rounds. Nevertheless, if nodes are allowed to see only up to distance r = ⌊k/3⌋, then any one-round algorithm that solves CYCLE>k needs bandwidth b at least Ω(n / log n). We also show the existence of a one-round, deterministic BCLIQUE algorithm that solves CYCLE≤k with bandwidth b = O(n^{1/⌊k/2⌋} · log n). On the negative side, we prove that, if ε ≤ 1/3 and 0 < r ≤ k/4, then any ε-error, R-round, b-bandwidth algorithm in the BCLIQUE[r] model that solves problem CYCLE≤k satisfies R · b = Ω(n^{1/⌊k/2⌋}).
Keywords: broadcast congested clique; induced cycles; graph degeneracy
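
The first result hinges on graph degeneracy: a graph is d-degenerate if every subgraph has a vertex of degree at most d, and d is computable by repeatedly peeling off a minimum-degree vertex. A short sketch of that peeling computation (centralized and for illustration only; the paper's reconstruction algorithm is distributed):

```python
def degeneracy(adj):
    """Degeneracy via the classic peeling algorithm: repeatedly remove a
    minimum-degree vertex; the largest degree observed at removal time
    is the degeneracy d."""
    adj = {v: set(ns) for v, ns in adj.items()}  # local copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 4-cycle: d = 2
path3 = {0: {1}, 1: {0, 2}, 2: {1}}                    # path: d = 1
```

In the paper's setting, a small d is what lets each node broadcast its neighborhood within bandwidth O(d · log n).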

Calderon, F. I., Lozada, A., Borquez-Paredes, D., Olivares, R., Davalos, E. J., Saavedra, G., et al. (2020). BER-Adaptive RMLSA Algorithm for Wide-Area Flexible Optical Networks. IEEE Access, 8, 128018–128031.
Abstract: Wide-area optical networks face significant transmission challenges due to the relentless growth of bandwidth demands experienced nowadays. Network operators must consider the relationship between modulation format and maximum reach for each connection request, due to the accumulation of physical-layer impairments in optical fiber links, to guarantee a minimum quality of service (QoS) and quality of transmission (QoT) to all connection requests. In this work, we present a BER-adaptive solution to the routing, modulation format, and spectrum assignment (RMLSA) problem for wide-area elastic optical networks. Our main goal is to maximize successful connection requests in wide-area networks while choosing modulation formats with the highest efficiency possible. Consequently, our technique uses an adaptive bit-error-rate (BER) threshold to achieve communication with the best QoT in the most efficient manner, using the strictest BER value and the modulation format with the smallest bandwidth possible. Additionally, the proposed algorithm relies on 3R regeneration devices to enable long-distance communications if transparent communication cannot be achieved. We assessed our method through simulations for various network conditions, such as the number of regenerators per node, traffic load per user, and BER threshold values. In a scenario without regenerators, the BER-adaptive algorithm performs similarly, in blocking probability, to the most relaxed fixed BER threshold studied; however, it ensures a higher QoT for most of the connection requests. The proposed algorithm thrives with the use of regenerators, showing the best performance among the studied solutions and enabling long-distance communications with high QoT and low blocking probability.
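
The essential idea of reach-aware modulation assignment — use the most spectrally efficient format whose transparent reach covers the path, else fall back to 3R regeneration — can be sketched as a table lookup. The format/reach table below is hypothetical, not taken from the paper:

```python
# hypothetical (format, bits per symbol, transparent reach in km) table,
# ordered from most to least spectrally efficient
FORMATS = [
    ("64-QAM", 6, 500),
    ("16-QAM", 4, 1200),
    ("QPSK", 2, 3000),
    ("BPSK", 1, 6000),
]

def pick_format(path_km):
    """Return the most efficient format whose reach covers the path,
    or None when no transparent format works (3R regeneration needed)."""
    for name, bits, reach_km in FORMATS:
        if reach_km >= path_km:
            return name
    return None
```

A BER-adaptive variant would additionally tighten or relax the BER threshold behind each reach value before making the choice.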

Cando, M. A., Hube, M. A., Parra, P. F., & Arteta, C. A. (2020). Effect of stiffness on the seismic performance of code-conforming reinforced concrete shear wall buildings. Eng. Struct., 219, 14 pp.
Abstract: This study assesses the effect of stiffness on the seismic performance of residential shear wall buildings designed according to current Chilean regulations, including DS60 and DS61. Specifically, the paper focuses on the effect of stiffness on the building overstrength, displacement ductility, and fragility for the Life Safety (LS) and collapse limit states, as well as on the probability of reaching these two limit states in 50 years. The seismic performance is assessed for a group of four 20-story residential shear wall building archetypes located in Santiago. Walls were modeled using the multiple vertical line element model (MVLEM) with inelastic hysteretic materials for the vertical elements and a linear elastic shear behavior. Pushover analyses were considered to estimate the buildings' overstrength and displacement ductility, while incremental dynamic analyses were performed to estimate fragility curves. A probabilistic seismic hazard analysis, which considered the seismicity of Chile's central zone, was performed to estimate the probability of reaching the two limit states in 50 years. The results show that an increase in stiffness reduces the chance of exceeding the LS and collapse limit states for the same intensity level. Additionally, the probabilistic seismic hazard analysis shows that, when the stiffness increases, the probability of reaching the LS limit state in 50 years also decreases. Counterintuitively, the probability of collapse in 50 years increases as the stiffness increases, due to the considered seismic hazard and the design requirements. Since society is moving towards resilient structural designs that minimize damage, disruption, and economic losses, it is concluded that the performance of reinforced concrete shear wall buildings is improved by increasing the stiffness.
Keywords: Reinforced concrete; Shear wall; Building; Collapse; Life safety; Stiffness; Fragility; Risk
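
Fragility curves of the kind fitted from incremental dynamic analysis are conventionally lognormal: the probability of reaching a limit state at intensity measure IM is Φ(ln(IM/θ)/β), with median capacity θ and dispersion β. A stdlib sketch (the parameter values below are placeholders, not the paper's fitted values):

```python
from math import log
from statistics import NormalDist

def fragility(im, median_capacity, beta):
    """P(limit state reached | intensity measure = im) for a lognormal
    fragility curve with median capacity `median_capacity` and
    log-standard deviation (dispersion) `beta`."""
    return NormalDist().cdf(log(im / median_capacity) / beta)

# placeholder parameters: probability of exceeding a limit state at IM = 0.8 g
p_ls = fragility(0.8, median_capacity=1.2, beta=0.45)
```

By construction the curve passes through 0.5 at IM = θ and increases monotonically with IM, which is why stiffer (higher-capacity) archetypes shift the curve rightward.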

Canessa, E., & Chaigneau, S. E. (2020). Mathematical regularities of data from the property listing task. J. Math. Psychol., 97, 19 pp.
Abstract: To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list “is a pet”, “has four legs”, etc.), which are then coded into property types (i.e., superficially dissimilar properties such as “has four legs” and “is a quadruped” may be coded as “four legs”). When the PLT is done for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, a formal model of the PLT, which would provide better analysis tools, is still lacking. In particular, no one has attempted to analyze the PLT's listing process. Thus, in the current work we develop a mathematical description of the PLT. Our analyses indicate that several regularities should be found in the observable data obtained from a PLT. Using data from three different CPNs (from 3 countries and 2 different languages), we show that these regularities do in fact exist and generalize well across different CPNs. Overall, our results suggest that the description of the regularities found in PLT data may be fruitfully used in the study of concepts. (C) 2020 Elsevier Inc. All rights reserved.

Canessa, E., Chaigneau, S. E., Lagos, R., & Medina, F. A. (2020). How to carry out conceptual properties norming studies as parameter estimation studies: Lessons from ecology. Behav. Res. Methods, to appear, 17 pp.
Abstract: Conceptual properties norming studies (CPNs) ask participants to produce properties that describe concepts. From those data, different metrics may be computed (e.g., semantic richness, similarity measures), which are then used in studying concepts and as a source of carefully controlled stimuli for experimentation. Notwithstanding those metrics' demonstrated usefulness, researchers have customarily overlooked that they are only point estimates of the true unknown population values and, therefore, only rough approximations. Thus, though research based on CPN data may produce reliable results, those results are likely to be general and coarse-grained. In contrast, we suggest viewing CPNs as parameter estimation procedures, where researchers obtain only estimates of the unknown population parameters. Thus, more specific and fine-grained analyses must consider those parameters' variability. To this end, we introduce a probabilistic model from the field of ecology. Its related statistical expressions can be applied to compute estimates of CPNs' parameters and their corresponding variances. Furthermore, those expressions can be used to guide the sampling process. The traditional practice in CPN studies is to use the same number of participants across concepts, in the intuitive belief that this practice will render the computed metrics comparable across concepts and CPNs. In contrast, the current work shows why an equal number of participants per concept is generally not desirable. Using CPN data, we show how to use the equations and discuss how they may allow more reasonable analyses and comparisons of parameter values among different concepts in a CPN, and across different CPNs.
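
The abstract does not name the ecological model it borrows; as a purely illustrative stand-in, the Chao1 richness estimator conveys the flavor of such estimators — it extrapolates the number of unseen property types from singleton and doubleton counts:

```python
def chao1(freqs):
    """Chao1 lower-bound estimate of total property-type richness from
    listing frequencies (an illustrative ecology estimator; the paper's
    exact model is not reproduced here)."""
    s_obs = len(freqs)                       # observed property types
    f1 = sum(1 for f in freqs if f == 1)     # singletons
    f2 = sum(1 for f in freqs if f == 2)     # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form
    return s_obs + f1 * f1 / (2.0 * f2)

# hypothetical listing frequencies for one concept: 7 types observed,
# 3 singletons and 2 doubletons -> estimated richness 7 + 9/4 = 9.25
estimate = chao1([1, 1, 1, 2, 2, 5, 7])
```

Many singletons signal an under-sampled concept, which is precisely the argument for adapting the number of participants per concept instead of fixing it.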

Canessa, E., Chaigneau, S. E., Moreno, S., & Lagos, R. (2020). Informational content of cosine and other similarities calculated from high-dimensional Conceptual Property Norm data. Cogn. Process., to appear, 14 pp.
Abstract: To study concepts that are coded in language, researchers often collect lists of conceptual properties produced by human subjects. From these data, different measures can be computed. In particular, inter-concept similarity is an important variable used in experimental studies. Among possible similarity measures, the cosine of conceptual property frequency vectors seems to be a de facto standard. However, there is a lack of comparative studies that test the merit of different similarity measures when computed from property frequency data. The current work compares four different similarity measures (cosine, correlation, Euclidean, and Chebyshev) and five different types of data structures. To that end, we compared the informational content (i.e., entropy) delivered by each of those 4 x 5 = 20 combinations, and used a clustering procedure as a concrete example of how informational content affects statistical analyses. Our results lead us to conclude that similarity measures computed from lower-dimensional data fare better than those calculated from higher-dimensional data, and suggest that researchers should be more aware of data sparseness and dimensionality, and their consequences for statistical analyses.
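
The four compared measures are all computable directly from property-frequency vectors. A stdlib sketch with hypothetical frequency vectors for two concepts:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def pearson(u, v):
    """Pearson correlation = cosine of the mean-centered vectors."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return cosine([a - mu for a in u], [b - mv for b in v])

def euclidean(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def chebyshev(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

# hypothetical property-type frequency vectors for two concepts
dog = [30, 25, 0, 10]
cat = [28, 20, 2, 12]
```

Note that cosine and correlation are bounded similarities, while Euclidean and Chebyshev are unbounded distances — one reason their entropy profiles differ on sparse, high-dimensional norms.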

Canessa, G., Moreno, E., & Pagnoncelli, B. K. (2020). The risk-averse ultimate pit problem. Optim. Eng., to appear, 24 pp.
Abstract: In this work, we consider a risk-averse ultimate pit problem where the grade of the mineral is uncertain. We derive conditions under which we can generate a set of nested pits by varying the risk level instead of using revenue factors. We propose two properties that we believe are desirable for the problem: risk nestedness, which means that the pits generated for different risk-aversion levels should be contained in one another, and additive consistency, which states that preferences in terms of order of extraction should not change if independent sectors of the mine are added as precedences. We show that only an entropic risk measure satisfies these properties and propose a two-stage stochastic programming formulation of the problem, including an efficient approximation scheme to solve it. We illustrate our approach on a small self-constructed example, and apply our approximation scheme to a real-world section of the Andina mine in Chile.
Keywords: Ultimate pit; Mining; Riskaverse optimization; Integer programming
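
The entropic risk measure singled out by the paper has the standard form ρ_θ(X) = (1/θ) ln E[exp(θX)], where θ > 0 sets the degree of risk aversion. A minimal sketch for a discrete loss distribution (the θ value and data are illustrative, not the paper's instances):

```python
from math import exp, log

def entropic_risk(losses, probs, theta):
    """Entropic risk rho_theta(X) = (1/theta) * ln E[exp(theta * X)]
    for a discrete loss distribution; theta > 0 is the risk-aversion level."""
    return log(sum(p * exp(theta * x) for x, p in zip(losses, probs))) / theta

# a certain loss of 1 has risk exactly 1, while a mean-1 lottery over
# {0, 2} is penalized above its expectation for any theta > 0
certain = entropic_risk([1.0, 1.0], [0.5, 0.5], theta=1.0)
lottery = entropic_risk([0.0, 2.0], [0.5, 0.5], theta=1.0)
```

As θ → 0 the measure recovers the expected value, and larger θ penalizes spread more heavily — the knob the paper varies to generate nested pits.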

Carmichael, T. W., Quinn, S. N., Mustill, A. J., Huang, C., Zhou, G., Persson, C. M., et al. (2020). Two Intermediate-mass Transiting Brown Dwarfs from the TESS Mission. Astron. J., 160(1), 15 pp.
Abstract: We report the discovery of two intermediate-mass transiting brown dwarfs (BDs), TOI-569b and TOI-1406b, from NASA's Transiting Exoplanet Survey Satellite mission. TOI-569b has an orbital period of P = 6.55604 ± 0.00016 days, a mass of Mb = 64.1 ± 1.9 M_Jup, and a radius of Rb = 0.75 ± 0.02 R_Jup. Its host star, TOI-569, has a mass of Mstar = 1.21 ± 0.05 M_Sun, a radius of Rstar = 1.47 ± 0.03 R_Sun, and an effective temperature of Teff = 5768 ± 110 K. TOI-1406b has an orbital period of P = 10.57415 ± 0.00063 days, a mass of Mb = 46.0 ± 2.7 M_Jup, and a radius of Rb = 0.86 ± 0.03 R_Jup. The host star for this BD has a mass of Mstar = 1.18 ± 0.09 M_Sun, a radius of Rstar = 1.35 ± 0.03 R_Sun, and an effective temperature of Teff = 6290 ± 100 K. Both BDs are in circular orbits around their host stars and are older than 3 Gyr based on stellar isochrone models of the stars. TOI-569 is one of two slightly evolved stars known to host a transiting BD (the other being KOI-415). TOI-1406b is one of three known transiting BDs to occupy the mass range of 40–50 M_Jup and one of two to have a circular orbit at a period near 10 days (the first being KOI-205b). Both BDs have reliable ages from stellar isochrones, in addition to their well-constrained masses and radii, making them particularly valuable as tests for substellar isochrones in the BD mass-radius diagram.

Cho, A. D., Carrasco, R. A., Ruz, G. A., & Ortiz, J. L. (2020). Slow Degradation Fault Detection in a Harsh Environment. IEEE Access, to appear.
Abstract: The ever-increasing challenges posed by science projects in astronomy have skyrocketed the complexity of the new generation of telescopes. Due to climate and sky requirements, these high-precision instruments are generally located in remote areas, suffering from the harsh environments around them. These modern telescopes not only produce massive amounts of scientific data, but they also generate an enormous amount of operational information. The Atacama Large Millimeter/submillimeter Array (ALMA) is one of these unique instruments, generating more than 50 Gb of operational data every day while functioning in conditions of extreme dryness and altitude. To keep the array working under extreme conditions, the engineering teams must check over 130,000 monitoring points, combing through the massive datasets produced every day. To make this possible, predictive tools are needed to identify, ideally beforehand, the occurrence of failures in all the different subsystems.
This work presents a novel fault detection scheme for one of these subsystems, the Intermediate Frequency Processors (IFPs). This subsystem is critical to process the information gathered by each antenna and communicate it, reliably, to the correlator for processing. Our approach is based on echo state networks, a configuration of artificial neural networks, used to learn and predict the signal patterns. These patterns are later compared to the actual signal to identify failure modes. Additional preprocessing techniques were also added, since the signal-to-noise ratio of the data used was very low. The proposed scheme was tested on over seven years of data from 132 IFPs at ALMA, showing an accuracy of over 70%. Furthermore, the detection was made, on average, several months earlier than by human operators. These results support the maintenance procedures, increasing reliability while reducing human exposure to the harsh environment where the antennas are located. Although applied to a specific fault, this technique is broad enough to be applied to other types of faults and settings.

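
The detection scheme — learn normal signal patterns with an echo state network, then flag drifting prediction residuals — can be sketched as follows. This is a generic ESN on a toy periodic signal, not the ALMA pipeline or its data:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, rho=0.9):
    """Random recurrent reservoir rescaled so its spectral radius is rho
    (a common heuristic for the echo state property)."""
    W = rng.standard_normal((n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))
    W_in = 0.5 * rng.standard_normal((n_res, n_in))
    return W_in, W

def run_reservoir(W_in, W, inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# toy periodic "monitoring point" signal standing in for real IFP data
t = np.arange(500)
signal = np.sin(0.1 * t)[:, None]

W_in, W = make_reservoir(n_in=1, n_res=50)
X = run_reservoir(W_in, W, signal[:-1])
y = signal[1:, 0]

# ridge-regression readout for one-step-ahead prediction
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
residual = np.abs(X @ W_out - y)
# a slow-degradation detector would flag the monitoring point once a
# smoothed residual drifts above a threshold calibrated on healthy data
```

Only the linear readout is trained; the recurrent weights stay fixed, which is what keeps ESN training cheap enough for thousands of monitoring points.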
Comisso, L., & Asenjo, F. A. (2020). Generalized magnetofluid connections in a curved spacetime. Phys. Rev. D, 102(2), 8 pp.
Abstract: The ideal magnetohydrodynamic theorem on the conservation of the magnetic connections between plasma elements is extended to non-ideal relativistic plasmas in curved spacetime. The existence of generalized magnetofluid connections that are preserved by the plasma dynamics is formalized by means of a covariant connection equation that includes different non-ideal effects. These generalized connections are constituted by 2-dimensional hypersurfaces, which are linked to an antisymmetric tensor field that unifies the electromagnetic and fluid fields. They can be interpreted in terms of generalized magnetofluid vorticity field lines by considering a 3+1 foliation of spacetime and a time resetting projection that compensates for the loss of simultaneity between spatially separated events. The worldsheets of the generalized magnetofluid vorticity field lines play a fundamental role in the plasma dynamics by prohibiting evolutions that do not preserve the magnetofluid connectivity.

Crutchik, D., Franchi, O., Caminos, L., Jeison, D., Belmonte, M., Pedrouso, A., et al. (2020). Polyhydroxyalkanoates (PHAs) Production: A Feasible Economic Option for the Treatment of Sewage Sludge in Municipal Wastewater Treatment Plants? Water, 12(4), 12 pp.
Abstract: Sludge is a byproduct of municipal wastewater treatment plants (WWTPs) and its management contributes significantly to the operating costs. Large WWTPs usually have anaerobic sludge digesters to valorize sludge as methane and to reduce its mass. However, the low methane market price opens the possibility for generating other high value-added products from the organic matter in sludge, such as polyhydroxyalkanoates (PHAs). In this work, the economic feasibility of retrofitting two types of WWTPs to convert them into biofactories of crude PHAs was studied. Two cases were analyzed: (a) a large WWTP with anaerobic sludge digestion; and (b) a small WWTP where sludge is only dewatered. In a two-stage PHA-production system (biomass enrichment plus PHA accumulation), the minimum PHA cost would be 1.26 and 2.26 US$/kg crude PHA for the large and small WWTPs, respectively. In a single-stage process, where a fraction of the secondary sludge (25%) is directly used to accumulate PHAs, the production costs would decrease by around 15.9% (small WWTPs) and 19.0% (large WWTPs), since capital costs associated with bioreactors decrease. Sensitivity analysis showed that the PHA/COD (chemical oxygen demand) yield is the most crucial parameter affecting the production costs. The energy, methane, and sludge management prices also have an essential effect on the production costs, and their effect depends on the WWTP's size.

Diaz, C., Belmonte, M., Campos, J. L., Franchi, O., Faundez, M., Vidal, G., et al. (2020). Limits of the anammox process in granular systems to remove nitrogen at low temperature and nitrogen concentration. Process Saf. Environ. Protect., 138, 349–355.
Abstract: When partial nitritation-anammox (PN-AMX) processes are applied to treat the mainstream in wastewater treatment plants (WWTPs), it is difficult to fulfil the total nitrogen (TN) quality requirements established by the European Union (<10 g TN/m3). The operation of the anammox process was evaluated here in a continuous stirred tank reactor operated at 15 °C and fed with concentrations of 50 g TN/m3 (1.30 ± 0.23 g NO2-N/g NH4+-N). Two different aspects were identified as crucial in limiting nitrogen removal efficiency. On the one hand, the oxygen transferred from the air in contact with the mixed liquor surface favoured the oxidation of nitrite to nitrate (up to 75%), and this nitrate, in addition to the amount produced by the anammox reaction itself, worsened the effluent quality. On the other hand, the mass transfer of ammonium and nitrite to be converted inside the anammox granules involves relatively large apparent affinity constants (k_NH4+,app = 0.50 g NH4+-N/m3; k_NO2,app = 0.17 g NO2-N/m3) that favour the presence of these nitrogen compounds in the produced effluent. Careful isolation of the reactor from air seepage and the setting of appropriate hydraulic and solids retention times are expected to help maintain stability and effluent quality. (C) 2020 Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
Keywords: Anammox; Dissolved oxygen; Granular biomass; Nitrogen; SRT; Temperature
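
The role of the apparent affinity constants can be illustrated with the Monod rate law r = r_max · S / (K + S): inverting it shows how much residual substrate must remain in the bulk liquid to sustain a given conversion rate. Using the study's k_NH4+,app = 0.50 g NH4+-N/m3, running at 90% of the maximum rate already requires 4.5 g NH4+-N/m3 of residual ammonium (the r_max = 1 normalization below is illustrative):

```python
def monod(s, r_max, k):
    """Monod rate law r = r_max * S / (K + S)."""
    return r_max * s / (k + s)

def residual_substrate(rate, r_max, k):
    """Invert Monod: substrate concentration S needed to sustain `rate`
    (requires rate < r_max)."""
    return rate * k / (r_max - rate)

K_NH4 = 0.50  # g NH4+-N/m3, apparent affinity constant from the study
s_needed = residual_substrate(0.9, r_max=1.0, k=K_NH4)  # S at 90% of r_max
```

This back-of-the-envelope value is already a large fraction of the 10 g TN/m3 discharge limit, which is why large apparent affinity constants compromise effluent quality.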
