
Argiz, L., Reyes, C., Belmonte, M., Franchi, O., Campo, R., Fra-Vazquez, A., et al. (2020). Assessment of a fast method to predict the biochemical methane potential based on biodegradable COD obtained by fractionation respirometric tests. J. Environ. Manage., 269, 9 pp.
Abstract: The biochemical methane potential (BMP) test is the most common analytical technique to predict the performance of anaerobic digesters. However, this assay is time-consuming (from 20 to over 100 days) and consequently impractical when a quick result is needed. Several methods are available for faster BMP prediction but, unfortunately, there is still no clear alternative. Current aerobic tests underestimate the BMP of substrates since they only detect the easily biodegradable COD. In this context, the potential of COD fractionation respirometric assays, which allow the determination of the particulate slowly biodegradable fraction, was evaluated here as an alternative for early prediction of the BMP of substrates. Seven waste streams of different origins were tested and the anaerobically biodegraded organic matter (CODmet) was compared with the different COD fractions. When considering adapted microorganisms, appropriate operational conditions and the required biodegradation time, the differences between the CODmet, determined through BMP tests, and the biodegradable COD (CODb) obtained by respirometry, were not significant (CODmet (57.8026 ± 21.2875) and CODb (55.6491 ± 21.3417), t(5) = 0.189, p = 0.853). Therefore, the results suggest that the BMP of a substrate might be predicted early from its CODb in only a few hours. This methodology was validated by an interlaboratory study considering four additional substrates.



Barrera, J., Carrasco, R. A., & Moreno, E. (2020). Real-time fleet management decision support system with security constraints. TOP, to appear, 21 pp.
Abstract: Intelligent transportation, and in particular fleet management, has been a forefront concern for a plethora of industries. This statement is especially true for the production of commodities, where transportation represents a central element for operational continuity. Additionally, in many industries, and in particular those with hazardous environments, fleet control must satisfy a wide range of security restrictions to ensure that risks are kept at bay and accidents are minimized. Furthermore, in these environments, any decision support tool must cope with noisy and incomplete data and give recommendations every few minutes. In this work, a fast and efficient decision support tool is presented to help fleet managers oversee and control ore trucks in a mining setting. The main objective of this system is to help managers avoid interactions between ore trucks and personnel buses, one of the most critical security constraints in our case study, keeping a minimum security distance between the two at all times. Furthermore, additional algorithms are developed and implemented so that this approach can work with real-life noisy GPS data. Through the use of historical data, the performance of this decision support system is studied, validating that it works under the real-life conditions presented by the company. The experimental results show that the proposed approach improved truck and road utilization significantly while allowing the fleet manager to control the security distance required by their procedures.



Baselli, G., Contreras, F., Lillo, M., Marin, M., & Carrasco, R. A. (2020). Optimal decisions for salvage logging after wildfires. Omega-Int. J. Manage. Sci., 96, 9 pp.
Abstract: Strategic, tactical, and operational harvesting plans for the forestry and logging industry have been widely studied for more than 60 years. Many different settings and specific constraints due to legal, environmental, and operational requirements have been modeled, improving the performance of the harvesting process significantly. During the summer of 2017, Chile suffered the most massive wildfires in its history, affecting almost half a million hectares, of which nearly half were forests owned by medium and small forestry companies. Some of the stands were burned by intense, fast-spreading crown fires that consumed the foliage and outer layer of the bark but left standing dead trees that could be salvage harvested before insect and decay processes rendered them unusable for commercial purposes. Unlike the typical operational programming models studied in the past, in this setting companies can make insurance claims on part or all of the burnt forest, whereas the rest of the forest needs to be harvested before it loses its value. This problem is known as the salvage logging problem. The issue also has an important social component when considering medium and small forestry and logging companies: most of their personnel come from local communities, which have already been affected by the fires. Harvesting part of the remaining forest can allow them to keep their jobs longer and, hopefully, leave the company in a better financial situation if the harvesting areas are correctly selected. In this work, we present a novel mixed-integer optimization-based approach to support salvage logging decisions, which helps in the configuration of an operational-level harvesting and workforce assignment plan. Our model takes into account the payment from an insurance claim as well as future income from harvesting the remaining trees. The model also computes an optimal assignment of personnel to the different activities required. The objective is to improve the cash position of the company by the end of the harvest and ensure that the company is paying all its liabilities and maintaining personnel. We show how our model performs compared to the current decisions made by medium and small-sized forestry companies, and we study the specific case of a small forestry company located in Cauquenes, Chile, which used our model to decide its course of action. (C) 2019 Elsevier Ltd. All rights reserved.



Becker, F., Montealegre, P., Rapaport, I., & Todinca, I. (2020). The Impact Of Locality In The Broadcast Congested Clique Model. SIAM J. Discret. Math., 34(1), 682–700.
Abstract: The broadcast congested clique model (BCLIQUE) is a message-passing model of distributed computation where n nodes communicate with each other in synchronous rounds. First, in this paper we prove that there is a one-round, deterministic algorithm that reconstructs the input graph G if the graph is d-degenerate, and rejects otherwise, using bandwidth b = O(d · log n). Then, we introduce a new parameter to the model. We study the situation where the nodes, initially, instead of knowing their immediate neighbors, know their neighborhood up to a fixed radius r. In this new framework, denoted BCLIQUE[r], we study the problem of detecting, in G, an induced cycle of length at most k (CYCLE≤k) and the problem of detecting an induced cycle of length at least k+1 (CYCLE>k). We give upper and lower bounds. We show that if each node is allowed to see up to distance r = ⌊k/2⌋ + 1, then a polylogarithmic bandwidth is sufficient for solving CYCLE>k with only two rounds. Nevertheless, if nodes were allowed to see only up to distance r = ⌊k/3⌋, then any one-round algorithm that solves CYCLE>k needs bandwidth b at least Ω(n/log n). We also show the existence of a one-round, deterministic BCLIQUE algorithm that solves CYCLE≤k with bandwidth b = O(n^(1/⌊k/2⌋) · log n). On the negative side, we prove that, if ε ≤ 1/3 and 0 < r ≤ k/4, then any ε-error, R-round, b-bandwidth algorithm in the BCLIQUE[r] model that solves problem CYCLE≤k satisfies R · b = Ω(n^(1/⌊k/2⌋)).



Cando, M. A., Hube, M. A., Parra, P. F., & Arteta, C. A. (2020). Effect of stiffness on the seismic performance of code-conforming reinforced concrete shear wall buildings. Eng. Struct., 219, 14 pp.
Abstract: This study assesses the effect of stiffness on the seismic performance of residential shear wall buildings designed according to current Chilean regulations, including DS60 and DS61. Specifically, the paper focuses on the effect of stiffness on the building overstrength, displacement ductility, fragility for the Life Safety (LS) and collapse limit states, as well as the probability of achieving these two limit states in 50 years. The seismic performance is assessed for a group of four 20-story residential shear wall building archetypes located in Santiago. Walls were modeled using the multiple vertical line element model (MVLEM) with inelastic hysteretic materials for the vertical elements and a linear elastic shear behavior. Pushover analyses were considered to estimate the buildings' overstrength and displacement ductility, while incremental dynamic analyses were performed to estimate fragility curves. A probabilistic seismic hazard analysis, which considered the seismicity of Chile's central zone, was performed to estimate the probability of achieving the two limit states in 50 years. The results show that an increase in stiffness reduces the chance of exceeding the LS and collapse limit states for the same intensity level. Additionally, the probabilistic seismic hazard analysis shows that, when the stiffness increases, the probability of reaching the LS limit state in 50 years also decreases. Counterintuitively, the probability of collapse in 50 years increases as the stiffness increases, due to the considered seismic hazard and the design requirements. Since society is moving towards resilient structural designs that minimize damage, disruption and economic losses, it is concluded that the performance of reinforced concrete shear wall buildings is improved by increasing the stiffness.



Canessa, E., & Chaigneau, S. E. (2020). Mathematical regularities of data from the property listing task. J. Math. Psychol., 97, 19 pp.
Abstract: To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list “is a pet”, “has four legs”, etc.), which are then coded into property types (i.e., superficially dissimilar properties such as “has four legs” and “is a quadruped” may be coded as “four legs”). When the PLT is done for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, there is a lack of a formal model of the PLT, which would provide better analysis tools. Particularly, nobody has attempted analyzing the PLT's listing process. Thus, in the current work we develop a mathematical description of the PLT. Our analyses indicate that several regularities should be found in the observable data obtained from a PLT. Using data from three different CPNs (from 3 countries and 2 different languages), we show that these regularities do in fact exist and generalize well across different CPNs. Overall, our results suggest that the description of the regularities found in PLT data may be fruitfully used in the study of concepts. (C) 2020 Elsevier Inc. All rights reserved.



Canessa, E., Chaigneau, S. E., Lagos, R., & Medina, F. A. (2020). How to carry out conceptual properties norming studies as parameter estimation studies: Lessons from ecology. Behav. Res. Methods, to appear.
Abstract: Conceptual properties norming studies (CPNs) ask participants to produce properties that describe concepts. From that data, different metrics may be computed (e.g., semantic richness, similarity measures), which are then used in studying concepts and as a source of carefully controlled stimuli for experimentation. Notwithstanding those metrics' demonstrated usefulness, researchers have customarily overlooked that they are only point estimates of the true unknown population values, and therefore only rough approximations. Thus, though research based on CPN data may produce reliable results, those results are likely to be general and coarse-grained. In contrast, we suggest viewing CPNs as parameter estimation procedures, where researchers obtain only estimates of the unknown population parameters. Thus, more specific and fine-grained analyses must consider those parameters' variability. To this end, we introduce a probabilistic model from the field of ecology. Its related statistical expressions can be applied to compute estimates of CPNs' parameters and their corresponding variances. Furthermore, those expressions can be used to guide the sampling process. The traditional practice in CPN studies is to use the same number of participants across concepts, intuitively believing that this practice will render the computed metrics comparable across concepts and CPNs. In contrast, the current work shows why an equal number of participants per concept is generally not desirable. Using CPN data, we show how to use the equations and discuss how they may allow more reasonable analyses and comparisons of parameter values among different concepts in a CPN, and across different CPNs.
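The abstract does not spell out the ecological model, but a classic ecology estimator of the same flavor is Chao1, which extrapolates the number of unseen types (here, property types never listed by any sampled participant) from the counts of types listed by exactly one or two participants. A minimal sketch, assuming the Chao1 form; the specific estimator used in the paper may differ:

```python
from collections import Counter

def chao1(type_counts):
    """Chao1 lower-bound estimate of the total number of property types.

    type_counts: for each *observed* property type, how many participants
    listed it. f1/f2 are the numbers of singleton/doubleton types."""
    s_obs = len(type_counts)
    freq = Counter(type_counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected variant
    return s_obs + f1 * f1 / (2.0 * f2)

# Hypothetical listing counts for 7 observed property types of one concept
counts = [1, 1, 1, 2, 2, 3, 4]
estimate = chao1(counts)  # 7 observed + 3^2 / (2 * 2) = 9.25 estimated types
```

The gap between the estimate and the observed count is what varies with sample size, which is why a fixed number of participants per concept does not make metrics automatically comparable.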



Canessa, E., Chaigneau, S. E., Moreno, S., & Lagos, R. (2020). Informational content of cosine and other similarities calculated from high-dimensional Conceptual Property Norm data. Cogn. Process., to appear, 14 pp.
Abstract: To study concepts that are coded in language, researchers often collect lists of conceptual properties produced by human subjects. From these data, different measures can be computed. In particular, inter-concept similarity is an important variable used in experimental studies. Among possible similarity measures, the cosine of conceptual property frequency vectors seems to be a de facto standard. However, there is a lack of comparative studies that test the merit of different similarity measures when computed from property frequency data. The current work compares four different similarity measures (cosine, correlation, Euclidean and Chebyshev) and five different types of data structures. To that end, we compared the informational content (i.e., entropy) delivered by each of those 4 x 5 = 20 combinations, and used a clustering procedure as a concrete example of how informational content affects statistical analyses. Our results lead us to conclude that similarity measures computed from lower-dimensional data fare better than those calculated from higher-dimensional data, and suggest that researchers should be more aware of data sparseness and dimensionality, and their consequences for statistical analyses.
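As an illustration of two of the compared measures, the cosine and Pearson-correlation similarities of two property frequency vectors can be computed as follows (the vectors are invented for the example, not CPN data):

```python
import math

def cosine(u, v):
    """Cosine similarity of two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def correlation(u, v):
    """Pearson correlation: cosine of the mean-centered vectors."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return cosine([a - mu for a in u], [b - mv for b in v])

# Hypothetical listing frequencies over four property types
dog = [10, 8, 0, 4]   # e.g. "is a pet", "barks", "meows", "has fur"
cat = [9, 0, 7, 4]
cos_sim = cosine(dog, cat)        # ~0.65
corr_sim = correlation(dog, cat)  # ~ -0.08
```

The two measures can disagree sharply on the same data, which is the kind of divergence the paper quantifies via entropy and clustering.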



Carmichael, T. W., Quinn, S. N., Mustill, A. J., Huang, C., Zhou, G., Persson, C. M., et al. (2020). Two Intermediate-mass Transiting Brown Dwarfs from the TESS Mission. Astron. J., 160(1), 15 pp.
Abstract: We report the discovery of two intermediate-mass transiting brown dwarfs (BDs), TOI-569b and TOI-1406b, from NASA's Transiting Exoplanet Survey Satellite mission. TOI-569b has an orbital period of P = 6.55604 ± 0.00016 days, a mass of Mb = 64.1 ± 1.9 MJup, and a radius of Rb = 0.75 ± 0.02 RJup. Its host star, TOI-569, has a mass of Mstar = 1.21 ± 0.05 MSun, a radius of Rstar = 1.47 ± 0.03 RSun, and an effective temperature of Teff = 5768 ± 110 K. TOI-1406b has an orbital period of P = 10.57415 ± 0.00063 days, a mass of Mb = 46.0 ± 2.7 MJup, and a radius of Rb = 0.86 ± 0.03 RJup. The host star for this BD has a mass of Mstar = 1.18 ± 0.09 MSun, a radius of Rstar = 1.35 ± 0.03 RSun, and an effective temperature of Teff = 6290 ± 100 K. Both BDs are in circular orbits around their host stars and are older than 3 Gyr based on stellar isochrone models of the stars. TOI-569 is one of two slightly evolved stars known to host a transiting BD (the other being KOI-415). TOI-1406b is one of three known transiting BDs to occupy the mass range of 40–50 MJup and one of two to have a circular orbit at a period near 10 days (the first being KOI-205b). Both BDs have reliable ages from stellar isochrones, in addition to their well-constrained masses and radii, making them particularly valuable as tests for substellar isochrones in the BD mass–radius diagram.



Crutchik, D., Franchi, O., Caminos, L., Jeison, D., Belmonte, M., Pedrouso, A., et al. (2020). Polyhydroxyalkanoates (PHAs) Production: A Feasible Economic Option for the Treatment of Sewage Sludge in Municipal Wastewater Treatment Plants? Water, 12(4), 12 pp.
Abstract: Sludge is a byproduct of municipal wastewater treatment plants (WWTPs) and its management contributes significantly to the operating costs. Large WWTPs usually have anaerobic sludge digesters to valorize sludge as methane and to reduce its mass. However, the low methane market price opens the possibility of generating other high value-added products from the organic matter in sludge, such as polyhydroxyalkanoates (PHAs). In this work, the economic feasibility of retrofitting two types of WWTPs to convert them into biofactories of crude PHAs was studied. Two cases were analyzed: (a) a large WWTP with anaerobic sludge digestion; and (b) a small WWTP where sludge is only dewatered. In a two-stage PHA-production system (biomass enrichment plus PHA accumulation), the minimum PHA cost would be 1.26 and 2.26 US$/kg of crude PHA for the large and small WWTPs, respectively. In a single-stage process, where a fraction of the secondary sludge (25%) is directly used to accumulate PHAs, the production costs would decrease by around 15.9% (small WWTPs) and 19.0% (large WWTPs), since the capital costs associated with bioreactors decrease. Sensitivity analysis showed that the PHA/COD (Chemical Oxygen Demand) yield is the most crucial parameter affecting the production costs. The energy, methane, and sludge management prices also have an essential effect on the production costs, and their effect depends on the WWTP's size.



Diaz, C., Belmonte, M., Campos, J. L., Franchi, O., Faundez, M., Vidal, G., et al. (2020). Limits of the anammox process in granular systems to remove nitrogen at low temperature and nitrogen concentration. Process Saf. Environ. Protect., 138, 349–355.
Abstract: When partial nitritation-anammox (PN-AMX) processes are applied to treat the mainstream in wastewater treatment plants (WWTPs), it is difficult to fulfil the total nitrogen (TN) quality requirements established by the European Union (<10 g TN/m³). The operation of the anammox process was evaluated here in a continuous stirred tank reactor operated at 15 °C and fed with concentrations of 50 g TN/m³ (1.30 ± 0.23 g NO2-N/g NH4+-N). Two different aspects were identified as crucial in limiting nitrogen removal efficiency. On the one hand, the oxygen transferred from the air in contact with the mixed liquor surface favoured the oxidation of nitrite to nitrate (up to 75%), and this nitrate, in addition to the amount produced by the anammox reaction itself, worsened the effluent quality. On the other hand, the mass transfer of ammonium and nitrite to be converted inside the anammox granules involves relatively large apparent affinity constants (k_NH4+,app = 0.50 g NH4+-N/m³; k_NO2,app = 0.17 g NO2-N/m³) that favour the presence of these nitrogen compounds in the produced effluent. Careful isolation of the reactor from air seepage and selection of appropriate hydraulic and solids retention times are expected to help maintain stability and effluent quality. (C) 2020 Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.



Fernandez, M., Munoz, F. D., & Moreno, R. (2020). Analysis of imperfect competition in natural gas supply contracts for electric power generation: A closedloop approach. Energy Econ., 87, 15 pp.
Abstract: The supply of natural gas is generally based on contracts that are signed prior to the use of this fuel for power generation. Scarcity of natural gas in systems where a share of electricity demand is supplied with gas turbines does not necessarily imply demand rationing, because most gas turbines can still operate with diesel when natural gas is not available. However, scarcity conditions can lead to electricity price spikes, with welfare effects for consumers and generation firms. We develop a closed-loop equilibrium model to evaluate whether generation firms have incentives to contract or import the socially optimal volumes of natural gas to generate electricity. We consider a perfectly competitive electricity market, where all firms act as price-takers in the short term, but assume that only a small number of firms own gas turbines and procure natural gas from, for instance, foreign suppliers in liquefied form. We illustrate an application of our model using a network reduction of the electric power system in Chile, considering two strategic firms that make annual decisions about natural gas imports in discrete quantities. We also assume that the strategic firms compete in the electricity market with a set of competitive firms that do not make strategic decisions about natural gas imports (i.e., a competitive fringe). Our results indicate that strategic firms could have incentives to sign natural gas contracts for volumes that are much lower than the socially optimal ones, which leads to supernormal profits for these firms in the electricity market. Yet, this effect is rather sensitive to the price of natural gas. A high price of natural gas eliminates the incentives of generation firms to exercise market power through natural gas contracts. (C) 2020 Elsevier B.V. All rights reserved.



Goles, E., Montealegre, P., & Rios-Wilson, M. (2020). On The Effects Of Firing Memory In The Dynamics Of Conjunctive Networks. Discret. Contin. Dyn. Syst., 40(10), 5765–5793.
Abstract: A Boolean network is a map F : {0,1}^n → {0,1}^n that defines a discrete dynamical system by the subsequent iterations of F. Nevertheless, it is thought that this definition is not always reliable in the context of applications, especially in biology. Concerning this issue, models based on the concept of adding asynchronicity to the dynamics have been proposed. In particular, we are interested in an approach based on the concept of delay. We focus on a specific type of delay called firing memory and its effects on the dynamics of symmetric (non-directed) conjunctive networks. We find, in the cases in which the implementation of the delay is not uniform, that all the complexity of the dynamics is somehow encapsulated in the component in which the delay has effect. Thus, we show, in the homogeneous case, that it is possible to exhibit attractors of non-polynomial period. In addition, we study the prediction problem consisting in, given an initial condition, determining whether a fixed coordinate will eventually change its state. We find again that in the non-homogeneous case all the complexity is determined by the component that is affected by the delay, and we conclude in the homogeneous case that this problem is PSPACE-complete.



Goles, E., Tsompanas, M. A., Adamatzky, A., Tegelaar, M., Wosten, H. A. B., & Martinez, G. J. (2020). Computational universality of fungal sandpile automata. Phys. Lett. A, 384(22), 8 pp.
Abstract: Hyphae within the mycelia of the ascomycetous fungi are compartmentalised by septa. Each septum has a pore that allows for inter-compartmental and inter-hyphal streaming of cytosol and even organelles. The compartments, however, have special organelles, Woronin bodies, that can plug the pores. When the pores are blocked, no flow of cytoplasm takes place. Inspired by the controllable compartmentalisation within the mycelium of the ascomycetous fungi, we designed two-dimensional fungal automata. A fungal automaton is a cellular automaton where communication between neighbouring cells can be blocked on demand. We demonstrate the computational universality of the fungal automata by implementing sandpile cellular automata circuits there. We reduce the Monotone Circuit Value Problem to the Fungal Automaton Prediction Problem. We construct families of wires, crossovers and gates to prove that the fungal automata are P-complete. (C) 2020 Elsevier B.V. All rights reserved.
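As a toy illustration of the two ingredients combined above, the sketch below applies the standard 2D sandpile toppling rule on a small grid, with a set of "blocked pores" (edges) across which grains cannot pass. In this toy, grains sent across a blocked edge or off the grid are simply dissipated; this is an assumption for the example, not the paper's construction:

```python
def relax(grid, blocked):
    """Topple every cell holding >= 4 grains until the grid is stable.

    grid: dict {(row, col): grains}; blocked: set of frozenset edge pairs.
    A toppling cell loses 4 grains, one toward each orthogonal neighbour;
    grains crossing a blocked edge or leaving the grid are dissipated."""
    changed = True
    while changed:
        changed = False
        for cell, grains in list(grid.items()):
            if grains >= 4:
                grid[cell] -= 4
                r, c = cell
                for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if nb in grid and frozenset((cell, nb)) not in blocked:
                        grid[nb] += 1
                changed = True

# 3x3 grid with 4 grains in the centre and all pores open:
grid = {(r, c): 0 for r in range(3) for c in range(3)}
grid[(1, 1)] = 4
relax(grid, blocked=set())   # centre topples once; each neighbour gets 1

# Same start, but all four pores around the centre are plugged:
grid2 = {(r, c): 0 for r in range(3) for c in range(3)}
grid2[(1, 1)] = 4
pores = {frozenset(((1, 1), nb)) for nb in ((0, 1), (2, 1), (1, 0), (1, 2))}
relax(grid2, pores)          # nothing propagates past the plugged pores
```

Blocking pores on demand is what lets such automata route or stop avalanches, which is the mechanism the paper uses to build wires and gates.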



Golovach, P. A., Heggernes, P., Lima, P. T., & Montealegre, P. (2020). Finding connected secluded subgraphs. J. Comput. Syst. Sci., 113, 101–124.
Abstract: Problems related to finding induced subgraphs satisfying given properties form one of the most studied areas within graph algorithms. However, for many applications, it is desirable that the found subgraph has as few connections to the rest of the graph as possible, which gives rise to the SECLUDED Π-SUBGRAPH problem. Here, input k is the size of the desired subgraph, and input t is a limit on the number of neighbors this subgraph has in the rest of the graph. This problem has been studied from a parameterized perspective, and unfortunately it turns out to be W[1]-hard for many graph properties Π, even when parameterized by k + t. We show that the situation changes when we are looking for a connected induced subgraph satisfying Π. In particular, we show that the CONNECTED SECLUDED Π-SUBGRAPH problem is FPT when parameterized by just t for many important graph properties Π. (C) 2020 Elsevier Inc. All rights reserved.



Guevara, E., Babonneau, F., Homem-de-Mello, T., & Moret, S. (2020). A machine learning and distributionally robust optimization framework for strategic energy planning under uncertainty. Appl. Energy, 271, 18 pp.
Abstract: This paper investigates how the choice of stochastic approaches and distribution assumptions impacts strategic investment decisions in energy planning problems. We formulate a two-stage stochastic programming model assuming different distributions for the input parameters and show that there is significant discrepancy among the associated stochastic solutions and other robust solutions published in the literature. To remedy this sensitivity issue, we propose a combined machine learning and distributionally robust optimization (DRO) approach which produces more robust and stable strategic investment decisions with respect to uncertainty assumptions. DRO is applied to deal with ambiguous probability distributions, and machine learning is used to restrict the DRO model to a subset of important uncertain parameters, ensuring computational tractability. Finally, we perform an out-of-sample simulation process to evaluate the performance of the solutions. The Swiss energy system is used as a case study throughout the paper to validate the approach.



Hojman, S. A., & Asenjo, F. A. (2020). Classical and Quantum Dispersion Relations. Phys. Scr., 95(8), 7 pp.
Abstract: It is shown that, in general, classical and quantum dispersion relations are different due to the presence of the Bohm potential. There are exact particular solutions of the quantum (wave) theory which obey the classical dispersion relation, but they differ in the general case. The dispersion relations may also coincide when additional assumptions are made, such as the WKB or eikonal approximations. This general result also holds for non-quantum wave equations derived from classical counterparts, such as in ray and wave optics. Explicit examples are given for covariant scalar, vectorial and tensorial fields in flat and curved spacetimes.
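The abstract's claim can be made concrete with the standard Madelung substitution (textbook material, not taken from the paper): writing the wave function as psi = A exp(iS/hbar) in the Schrödinger equation yields a Hamilton–Jacobi equation with an extra Bohm potential term,

```latex
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V
  \underbrace{-\,\frac{\hbar^2}{2m}\,\frac{\nabla^2 A}{A}}_{Q\ \text{(Bohm potential)}} = 0 ,
```

so for a locally plane wave S = hbar(k·x − ωt) the quantum relation hbar·ω = hbar²k²/(2m) + V + Q reduces to the classical dispersion relation only when Q vanishes or is negligible, e.g. for constant-amplitude solutions or in the WKB/eikonal limit — exactly the coincidence cases the abstract mentions.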



Josserand, C., Pomeau, Y., & Rica, S. (2020). Finitetime localized singularities as a mechanism for turbulent dissipation. Phys. Rev. Fluids, 5(5), 15 pp.
Abstract: The nature of the fluctuations of the dissipation rate in fluid turbulence is still under debate. One reason may be that the observed fluctuations are strong events of dissipation, which reveal the trace of spatiotemporal singularities of the Euler equations, which are the zero-viscosity limit of ordinary incompressible fluids. Viscosity regularizes these hypothetical singularities, resulting in a chaotic fluctuating state in which the strong events appear randomly in space and time, making the energy dissipation highly fluctuating. Yet, to date, it is not known whether smooth initial conditions of the Euler equations with finite energy do or do not blow up in finite time. We overcome this central difficulty by providing a scenario for singularity-mediated turbulence based on the self-focusing nonlinear Schrodinger (NLS) equation. It avoids the intrinsic difficulty of the Euler equations since it is well known that solutions of this NLS equation with smooth initial conditions do blow up in finite time. When adding viscosity, the model shows (i) that dissipation takes place near the singularities only, (ii) that such intense events are random in space and time, (iii) that the mean dissipation rate is almost constant as the viscosity varies, and (iv) an Obukhov–Kolmogorov spectrum with a power-law dependence together with an intermittent behavior using structure function correlations, in close correspondence with the one measured in fluid turbulence.
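As a sketch of the type of model the abstract describes (the exact form used in the paper is not given here, so this is an assumption), a focusing nonlinear Schrödinger equation regularized by a small viscosity-like term reads

```latex
i\,\partial_t \psi + \Delta \psi + |\psi|^2 \psi = i\,\nu\, \Delta \psi ,
\qquad 0 < \nu \ll 1 ,
```

where for nu = 0 suitable smooth, finite-energy initial data (e.g. with negative Hamiltonian, in two or more spatial dimensions) are known to blow up in finite time, and the nu-term converts each near-singular event into a localized burst of dissipation.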



Kamal, C., Gravelle, S., & Botto, L. (2020). Hydrodynamic slip can align thin nanoplatelets in shear flow. Nat. Commun., 11(1), 10 pp.
Abstract: The large-scale processing of nanomaterials such as graphene and MoS2 relies on understanding the flow behaviour of nanometrically thin platelets suspended in liquids. Here we show, by combining non-equilibrium molecular dynamics and continuum simulations, that rigid nanoplatelets can attain a stable orientation for sufficiently strong flows. Such a stable orientation is in contradiction with the rotational motion predicted by classical colloidal hydrodynamics. This surprising effect is due to hydrodynamic slip at the liquid–solid interface and occurs when the slip length is larger than the platelet thickness; a slip length of a few nanometers may be sufficient to observe alignment. The predictions we developed by examining pure and surface-modified graphene are applicable to different solvent/2D material combinations. The emergence of a fixed orientation in a direction nearly parallel to the flow implies a slip-dependent change in several macroscopic transport properties, with potential impact on applications ranging from functional inks to nanocomposites. Current theories predict that a plate-like particle rotates continuously in a shear flow. Kamal et al. instead show that even nanometric hydrodynamic slip may induce a thin plate-like particle to adopt a stable orientation, and discuss implications of this effect for flow processing of 2D nanomaterials.



Lagos, F., Schreiber, M. R., Parsons, S. G., Zurlo, A., Mesa, D., Gansicke, B. T., et al. (2020). The White Dwarf Binary Pathways Survey III. Contamination from hierarchical triples containing a white dwarf. Mon. Not. Roy. Astron. Soc., 494(1), 915–922.
Abstract: The White Dwarf Binary Pathways Survey aims at increasing the number of known detached A, F, G, and K main-sequence stars in close orbits with white dwarf companions (WD+AFGK binaries) to refine our understanding of compact binary evolution and the nature of Type Ia supernova progenitors. These close WD+AFGK binary stars are expected to form through common envelope evolution, in which tidal forces tend to circularize the orbit. However, some of the identified WD+AFGK binary candidates show eccentric orbits, indicating that these systems either formed through a different mechanism or perhaps are not close WD+AFGK binaries. We observed one of these eccentric WD+AFGK binaries with SPHERE and find that the system TYC 7218-934-1 is in fact a triple system where the WD is a distant companion. The inner binary likely consists of the G-type star plus an unseen low-mass companion in an eccentric orbit. Based on this finding, we estimate the fraction of triple systems that could contaminate the WD+AFGK sample. We find that less than 15 per cent of our targets with orbital periods shorter than 100 d might be hierarchical triples.

