
Allende, H., Salas, R., & Moraga, C. (2003). A robust and effective learning algorithm for feedforward neural networks based on the influence function. Lect. Notes Comput. Sc., 2652, 28–36.
Abstract: The learning process of feedforward artificial neural networks relies on the data, so a robustness analysis of the model's parameter estimates must be carried out due to the presence of outlying observations in the data. In this paper we seek robustness in the parameter estimates in the sense that the influence of aberrant observations or outliers on the estimate is bounded, so the neural network is able to model the bulk of the data. We also seek a trade-off between robustness and efficiency under a Gaussian model. An adaptive learning procedure that addresses both aspects is developed. Finally, we show some simulation results applied to the RESEX time series.



Alvarenga, T. C., De Lima, R. R., Simao, S. D., Junior, L. C. B., Bueno, J. S. D., Alvarenga, R. R., et al. (2022). Ensemble of hybrid Bayesian networks for predicting the AMEn of broiler feedstuffs. Comput. Electron. Agric., 198, 107067.
Abstract: To adequately meet the nutritional needs of broilers, it is necessary to know the values of apparent metabolizable energy corrected for nitrogen balance (AMEn) of the feedstuffs. To determine AMEn values, biological assays, feedstuff composition tables, or prediction equations are used as a function of the chemical composition of feedstuffs, the latter relying on statistical methodologies such as multiple linear regression, neural networks, and Bayesian networks (BN). BN is a statistical and computational methodology that consists of graphical (graph) and probabilistic models of quantitative and/or qualitative variables. Ensembles of BN are still awaited in the area of broiler nutrition, as there is no research showing their AMEn prediction performance. The purpose of this article is to propose and use ensembles of hybrid Bayesian networks (EHBNs) and obtain prediction equations for the AMEn of feedstuffs used in broiler nutrition from their chemical compositions. We trained 100, 1,000, and 10,000 EHBNs, and in this way empirical distributions were found for the coefficients of the covariates (crude protein, ether extract, mineral matter, and crude fiber). The mean, median, and mode of these distributions were then used to build prediction equations for AMEn. The method for obtaining the coefficients of the covariates discussed in this article is an unprecedented proposal in the field of broiler nutrition. The data used to obtain the equations were gathered by meta-analysis, and the data for validating the equations were obtained from metabolic assays. The proposed equations were evaluated by precision measures such as the mean square error (MSE), mean absolute deviation (MAD), and mean absolute percentage error (MAPE). The best equations for predicting AMEn were derived from the mean or mode coefficients of the 10,000-EHBN results.
In conclusion, the methodology used is a good tool for obtaining prediction equations for AMEn as a function of the chemical composition of feedstuffs. The coefficients were found to differ from those obtained by other methodologies, such as the usual neural networks or multiple linear regressions. The field of broiler nutrition thus gains new equations, obtained with a previously unapplied methodology that derives its coefficients from empirical distributions.
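The precision measures named in the abstract (MSE, MAD, MAPE) are standard; as a quick illustration (a sketch for the reader, not the authors' code), they can be computed as:

```python
def prediction_metrics(y_true, y_pred):
    """Precision measures used to evaluate prediction equations:
    mean square error (MSE), mean absolute deviation (MAD), and
    mean absolute percentage error (MAPE, as a percentage)."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mad = sum(abs(e) for e in errors) / n
    mape = 100 * sum(abs(e) / t for e, t in zip(errors, y_true)) / n
    return mse, mad, mape
```

Lower values of all three indicate a better-fitting equation; MAPE is scale-free, which is why such papers often report it alongside MSE and MAD.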



Alvarez-Miranda, E., & Pereira, J. (2017). Designing and constructing networks under uncertainty in the construction stage: Definition and exact algorithmic approach. Comput. Oper. Res., 81, 178–191.
Abstract: The present work proposes a novel Network Optimization problem whose core is to combine both network design and network construction scheduling under uncertainty into a single two-stage robust optimization model. The first-stage decisions correspond to those of a classical network design problem, while the second-stage decisions correspond to those of a network construction scheduling problem (NCS) under uncertainty. The resulting problem, which we will refer to as the Two-Stage Robust Network Design and Construction Problem (2SRNDC), aims at providing a modeling framework in which the design decision not only depends on the design costs (e.g., distances) but also on the corresponding construction plan (e.g., time to provide service to customers). We provide motivations, mixed integer programming formulations, and an exact algorithm for the 2SRNDC. Experimental results on a large set of instances show the effectiveness of the model in providing robust solutions, and the capability of the proposed algorithm to provide good solutions in reasonable running times.



Alvarez-Miranda, E., & Pereira, J. (2019). On the complexity of assembly line balancing problems. Comput. Oper. Res., 108, 182–186.
Abstract: Assembly line balancing is a family of combinatorial optimization problems that has been widely studied in the literature due to its simplicity and industrial applicability. Most line balancing problems are NP-hard as they subsume the bin packing problem as a special case. Nevertheless, it is common in the line balancing literature to cite [A. Gutjahr and G. Nemhauser, An algorithm for the line balancing problem, Management Science 11 (1964) 308–315] in order to assess the computational complexity of the problem. Such an assessment is not correct since the work of Gutjahr and Nemhauser predates the concept of NP-hardness. This work points at over 50 publications since 1995 with the aforesaid error.



Araujo, J., Ducoffe, G., Nisse, N., & Suchan, K. (2018). On interval number in cycle convexity. Discret. Math. Theor. Comput. Sci., 20(1), 35 pp.
Abstract: Recently, Araujo et al. [Manuscript in preparation, 2017] introduced the notion of Cycle Convexity of graphs. In their seminal work, they studied the graph convexity parameter called hull number for this new graph convexity they proposed, and they presented some of its applications in Knot theory. Roughly, the tunnel number of a knot embedded in a plane is upper bounded by the hull number of a corresponding planar 4-regular graph in cycle convexity. In this paper, we go further in the study of this new graph convexity and we study the interval number of a graph in cycle convexity. This parameter is, alongside the hull number, one of the most studied parameters in the literature about graph convexities. Precisely, given a graph G, its interval number in cycle convexity, denoted by in_cc(G), is the minimum cardinality of a set S ⊆ V(G) such that every vertex w ∈ V(G) \ S has two distinct neighbors u, v ∈ S such that u and v lie in the same connected component of G[S], i.e. the subgraph of G induced by the vertices in S. In this work, first we provide bounds on in_cc(G) and its relations to other graph convexity parameters, and explore its behaviour on grids. Then, we present some hardness results by showing that deciding whether in_cc(G) ≤ k is NP-complete, even if G is a split graph or a bounded-degree planar graph, and that the problem is W[2]-hard in bipartite graphs when k is the parameter. As a consequence, we obtain that in_cc(G) cannot be approximated up to a constant factor in the classes of split graphs and bipartite graphs (unless P = NP). On the positive side, we present polynomial-time algorithms to compute in_cc(G) for outerplanar graphs, co-bipartite graphs and interval graphs. We also present fixed-parameter tractable (FPT) algorithms to compute it for (q, q − 4)-graphs when q is the parameter and for general graphs G when parameterized either by the treewidth or the neighborhood diversity of G.
Some of our hardness results and positive results are not known to hold for related graph convexities and domination problems. We hope that the design of our new reductions and polynomialtime algorithms can be helpful in order to advance in the study of related graph problems.
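The defining condition behind in_cc(G) is stated precisely in the abstract and is easy to check for a given set S. A minimal sketch of that check (the function name and adjacency-list encoding are illustrative assumptions, not from the paper):

```python
from collections import Counter

def is_cc_interval_set(adj, S):
    """Check the condition defining in_cc(G): every vertex outside S
    must have two distinct neighbours in S that lie in the same
    connected component of the induced subgraph G[S]."""
    S = set(S)
    # Label the connected components of G[S] by depth-first search.
    comp, label = {}, 0
    for start in S:
        if start in comp:
            continue
        stack, comp[start] = [start], label
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in S and v not in comp:
                    comp[v] = label
                    stack.append(v)
        label += 1
    # Some component of G[S] must contain at least two neighbours of w.
    for w in set(adj) - S:
        counts = Counter(comp[v] for v in adj[w] if v in S)
        if not counts or max(counts.values()) < 2:
            return False
    return True

# On the 4-cycle 0-1-2-3-0, S = {0, 1, 2} satisfies the condition
# (vertex 3 has neighbours 0 and 2 in one component of G[S]),
# while S = {0, 1} does not.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

Computing in_cc(G) itself means minimizing |S| over all sets passing this check, which is exactly the problem the paper shows to be NP-complete in general.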



Atkinson, J., & Maurelia, A. (2017). Redundancy-Based Trust in Question-Answering Systems. Computer, 50(1), 58–65.
Abstract: By combining user preferences, redundancy analysis, and trust-network inference, the proposed trust model can augment candidate answers with information about target sources on the basis of connections with other web users and sources. Experiments show that the model is more effective overall than trust analyses based on inference alone.



Averbakh, I., & Pereira, J. (2021). Tree optimization based heuristics and metaheuristics in network construction problems. Comput. Oper. Res., 128, 105190.
Abstract: We consider a recently introduced class of network construction problems where edges of a transportation network need to be constructed by a server (construction crew). The server has a constant construction speed which is much lower than its travel speed, so relocation times are negligible with respect to construction times. It is required to find a construction schedule that minimizes a nondecreasing function of the times when various connections of interest become operational. Most problems of this class are strongly NP-hard on general networks, but are often tree-efficient, that is, polynomially solvable on trees. We develop a generic local search heuristic approach and two metaheuristics (Iterated Local Search and Tabu Search) for solving tree-efficient network construction problems on general networks, and explore them computationally. Results of computational experiments indicate that the methods have excellent performance.



Canals, C., Maroulis, S., Canessa, E., Chaigneau, S., & Mizala, A. (2022). Mechanisms Underlying Choice-Set Formation: The Case of School Choice in Chile. Soc. Sci. Comput. Rev., Early Access.
Abstract: Many decisions involve selecting among many more options than an individual can effectively examine and consider. Therefore, people usually consider smaller and different “choice sets” as viable options. To better understand the processes affecting choice-set formation, we developed a computational model of how households become aware of potential choices in a context for which understanding household decision-making has important public policy implications: market-based reforms in education. In the model, households learn about the schools to which they can send their children through three mechanisms: finding out about geographically proximate schools, accessing publicly available information, and gathering information from interactions with other households. We calibrated the model using data from four cities in Chile, where students are not required to attend their neighborhood school. We then used the model to conduct hypothetical computational experiments that assessed how each mechanism impacted the sets of schools known to households before they make their choice (their “awareness set”). We found that the inclusion of a social interaction mechanism was crucial for producing simulated awareness sets that matched the awareness sets provided in a survey conducted by the Chilean Ministry of Education. We also found that the social interaction mechanism played the largest role in determining the quality and price range of the choices available in households’ awareness sets. Our findings highlight the importance of social interactions in a stage of decision-making before the direct impact of other individuals is typically made explicit. Moreover, our work validates an approach that can be used in future models where understanding how decision-makers become aware of their options may be as important as the way they choose among them.



Carrasco, R. A., Pruhs, K., Stein, C., & Verschae, J. (2018). The Online Set Aggregation Problem. In Lecture Notes in Computer Science (Vol. 10807, pp. 245–259).



de Figueiredo, C. M. H., de Mello, C. P., & Ortiz, C. (2000). Edge colouring reduced indifference graphs. Lect. Notes Comput. Sc., 1776, 145–153.
Abstract: The chromatic index problem – finding the minimum number of colours required for colouring the edges of a graph – is still unsolved for indifference graphs, whose vertices can be linearly ordered so that the vertices contained in the same maximal clique are consecutive in this order. Two adjacent vertices are twins if they belong to the same maximal cliques. A graph is reduced if it contains no pair of twin vertices. A graph is overfull if the total number of edges is greater than the product of the maximum degree by ⌊n/2⌋, where n is the number of vertices. We give a structural characterization for neighbourhood-overfull indifference graphs proving that a reduced indifference graph cannot be neighbourhood-overfull. We show that the chromatic index for all reduced indifference graphs is the maximum degree.
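The overfull condition defined in the abstract is directly computable: a graph is overfull when |E| > Δ·⌊n/2⌋. A minimal sketch of that test (the adjacency-list encoding is an assumption for illustration):

```python
def is_overfull(adj):
    """Return True iff the graph has more edges than the product of
    its maximum degree and floor(n/2), i.e. |E| > max_deg * (n // 2)."""
    n = len(adj)
    degrees = [len(neighbours) for neighbours in adj.values()]
    num_edges = sum(degrees) // 2      # every edge is counted twice
    max_degree = max(degrees, default=0)
    return num_edges > max_degree * (n // 2)

# A triangle has 3 edges, maximum degree 2 and floor(3/2) = 1,
# so 3 > 2 * 1 and the graph is overfull.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

Overfullness matters here because an overfull graph cannot be edge-coloured with only Δ colours, which is why the characterization above rules it out for reduced indifference graphs.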



de Figueiredo, C. M. H., Meidanis, J., de Mello, C. P., & Ortiz, C. (2003). Decompositions for the edge colouring of reduced indifference graphs. Theor. Comput. Sci., 297(1–3), 145–155.
Abstract: The chromatic index problem – finding the minimum number of colours required for colouring the edges of a graph – is still unsolved for indifference graphs, whose vertices can be linearly ordered so that the vertices contained in the same maximal clique are consecutive in this order. We present new positive evidence for the conjecture: every non-neighbourhood-overfull indifference graph can be edge coloured with maximum degree colours. Two adjacent vertices are twins if they belong to the same maximal cliques. A graph is reduced if it contains no pair of twin vertices. A graph is overfull if the total number of edges is greater than the product of the maximum degree by ⌊n/2⌋, where n is the number of vertices. We give a structural characterization for neighbourhood-overfull indifference graphs proving that a reduced indifference graph cannot be neighbourhood-overfull. We show that the chromatic index for all reduced indifference graphs is the maximum degree. We present two decomposition methods for edge colouring reduced indifference graphs with maximum degree colours.



Dölz, J., Harbrecht, H., Jerez-Hanckes, C., & Multerer, M. (2022). Isogeometric multilevel quadrature for forward and inverse random acoustic scattering. Comput. Methods Appl. Mech. Eng., 388, 114242.
Abstract: We study the numerical solution of forward and inverse time-harmonic acoustic scattering problems by randomly shaped obstacles in three-dimensional space using a fast isogeometric boundary element method. Within the isogeometric framework, realizations of the random scatterer can efficiently be computed by simply updating the NURBS mappings which represent the scatterer. This way, we end up with a random deformation field. In particular, we show that it suffices to know the deformation field’s expectation and covariance at the scatterer’s boundary to model the surface’s Karhunen–Loève expansion. Leveraging the isogeometric framework, we employ multilevel quadrature methods to approximate quantities of interest such as the scattered wave’s expectation and variance. By computing the wave’s Cauchy data at an artificial, fixed interface enclosing the random obstacle, we can also directly infer quantities of interest in free space. Adopting the Bayesian paradigm, we finally compute the expected shape and variance of the scatterer from noisy measurements of the scattered wave at the artificial interface. Numerical results for the forward and inverse problems validate the proposed approach.



Escapil-Inchauspé, P., & Jerez-Hanckes, C. (2021). Biparametric operator preconditioning. Comput. Math. Appl., 102, 220–232.
Abstract: We extend the operator preconditioning framework Hiptmair (2006) [10] to Petrov-Galerkin methods while accounting for parameter-dependent perturbations of both variational forms and their preconditioners, as occurs when performing numerical approximations. By considering different perturbation parameters for the original form and its preconditioner, our biparametric abstract setting leads to robust and controlled schemes. For Hilbert spaces, we derive exhaustive linear and superlinear convergence estimates for iterative solvers, such as h-independent convergence bounds, when preconditioning with low-accuracy or, equivalently, with highly compressed approximations.



Ferran, S., Beghelli, A., Huerta-Canepa, G., & Jensen, F. (2018). Correctness assessment of a crowd-coding project in a computer programming introductory course. Comput. Appl. Eng. Educ., 26(1), 162–170.
Abstract: Crowd-coding is a programming model that outsources a software project implementation to the crowd. As educators, we think that crowd-coding could be leveraged as part of the learning path of engineering students from a computer programming introductory course to solve local community problems. The benefits are twofold: on the one hand, the students practice the concepts learned in class and, on the other hand, they participate in real-life problems. Nevertheless, several challenges arise when developing a crowd-coding platform, the first one being how to check the correctness of students' code without putting an extra burden on the professors in the course. To overcome this issue, we propose a novel system that neither resorts to expert review nor requires knowing the right answers beforehand. The proposed scheme automatically clusters the students' codes based solely on the output they produce. Our initial results show that the largest cluster contains the same codes selected as correct by the automated and human testing, as long as some conditions apply.



Figueroa, A., & Atkinson, J. (2019). Dual-View Learning for Detecting Web Query Intents. Computer, 52(8), 34–42.
Abstract: Automatically categorizing user intent behind web queries is a key issue not only for improving information retrieval tasks but also for designing tailored displays based on the underlying intention. In this article, a multi-view learning method is proposed to recognize the user intent behind web searches.



Fomin, F. V., Golovach, P. A., Kratochvil, J., Nisse, N., & Suchan, K. (2010). Pursuing a fast robber on a graph. Theor. Comput. Sci., 411(7–9), 1167–1181.
Abstract: The Cops and Robbers game as originally defined independently by Quilliot and by Nowakowski and Winkler in the 1980s has been much studied, but very few results pertain to the algorithmic and complexity aspects of it. In this paper we prove that computing the minimum number of cops that are guaranteed to catch a robber on a given graph is NP-hard and that the parameterized version of the problem is W[2]-hard; the proof extends to the case where the robber moves s times faster than the cops. We show that on split graphs, the problem is polynomially solvable if s = 1 but is NP-hard if s = 2. We further prove that on graphs of bounded clique-width the problem is polynomially solvable for s <= 2. Finally, we show that for planar graphs the minimum number of cops is unbounded if the robber is faster than the cops.



Fraigniaud, P., Montealegre-Barba, P., Oshman, R., Rapaport, I., & Todinca, I. (2019). On Distributed Merlin-Arthur Decision Protocols. In Lecture Notes in Computer Science (Vol. 11639).



Gajardo, A., & Goles, E. (2006). Crossing information in two-dimensional Sandpiles. Theor. Comput. Sci., 369(1–3), 463–469.
Abstract: We prove that in a two-dimensional Sandpile automaton, embedded in a regular infinite planar cellular space, it is impossible to cross information, if the bit of information is the presence (or absence) of an avalanche. This proves that it is impossible to embed arbitrary logical circuits in a Sandpile through quiescent configurations. Our result applies also for the non-planar neighborhood of Moore. Nevertheless, we also show that it is possible to compute logical circuits with a two-dimensional Sandpile, if a neighborhood of radius two is used in Z^2; crossing information becomes possible in that case, and we conclude that for this neighborhood the Sandpile is P-complete and Turing universal.



Garreton, M., & Sanchez, R. (2016). Identifying an optimal analysis level in multiscalar regionalization: A study case of social distress in Greater Santiago. Comput. Environ. Urban Syst., 56, 14–24.
Abstract: Assembling spatial units into meaningful clusters is a challenging task, as it must cope with a consequential computational complexity while controlling for the modifiable areal unit problem (MAUP), spatial autocorrelation and attribute multicollinearity. Nevertheless, these effects can reveal significant interactions among diverse spatial phenomena, such as segregation and economic specialization. Various regionalization methods have been developed in order to address these questions, but key fundamental properties of the aggregation of spatial entities are still poorly understood. In particular, due to the lack of an objective stopping rule, the question of determining an optimal number of clusters is yet unresolved. Therefore, we develop a clustering algorithm which is sensitive to scalar variations of multivariate spatial correlations, recalculating PCA scores at several aggregation steps in order to account for differences in the span of autocorrelation effects for diverse variables. With these settings, the scalar evolution of correlation, compactness and isolation measures is compared between the empirical dataset and 120 random datasets, using two dissimilarity measures. Remarkably, adjusting several indicators with real and simulated data allows for a clear definition of a stopping rule for spatial hierarchical clustering. Indeed, increasing correlations with scale in random datasets are spurious MAUP effects, so they can be discounted from real data results in order to identify an optimal clustering level, as defined by the maximum of authentic spatial self-organization. This allows singling out the most socially distressed areas in Greater Santiago, thus providing relevant socio-spatial insights from their cartographic and statistical analysis. In sum, we develop a useful methodology to improve the fundamental comprehension of spatial interdependence and multiscalar self-organizing phenomena, while linking these questions to relevant real-world issues.



Goles, E., & Montealegre, P. (2014). Computational complexity of threshold automata networks under different updating schemes. Theor. Comput. Sci., 559, 3–19.
Abstract: Given a threshold automata network, as well as an updating scheme over its vertices, we study the computational complexity associated with the prediction of the future state of a vertex. More precisely, we analyze two classes of local functions: the majority and the AND-OR rule (vertices take the AND or the OR logic functions over the states of their neighbors). Depending on the updating scheme, we determine the complexity class (NC, P, NP, PSPACE) where the prediction problem belongs.
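For concreteness, one synchronous (parallel) update step of the majority rule described in the abstract can be sketched as follows; the tie-breaking choice here (a tie keeps the current state) is an assumption for illustration, since the abstract does not fix it:

```python
def majority_step(adj, state):
    """One synchronous update of the majority rule: every vertex
    adopts the majority state (0/1) among its neighbors; ties keep
    the current state (an illustrative tie-breaking assumption)."""
    new_state = {}
    for v, neighbors in adj.items():
        ones = sum(state[u] for u in neighbors)
        zeros = len(neighbors) - ones
        if ones > zeros:
            new_state[v] = 1
        elif zeros > ones:
            new_state[v] = 0
        else:
            new_state[v] = state[v]
    return new_state

# Path 0-1-2 with states 1, 0, 1: the middle vertex flips to 1,
# while each endpoint copies its single neighbor's state 0.
path = {0: [1], 1: [0, 2], 2: [1]}
```

The prediction problem studied in the paper asks, for such rules, whether a given vertex reaches a given state after iterating updates, and its complexity depends on whether updates are applied in parallel, sequentially, or block-sequentially.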

