Allende, H., Salas, R., & Moraga, C. (2003). A robust and effective learning algorithm for feedforward neural networks based on the influence function. Lect. Notes Comput. Sc., 2652, 28–36.
Abstract: The learning process of feedforward artificial neural networks relies on the data, so a robustness analysis of the model's parameter estimates is needed in the presence of outlying observations. In this paper we seek robustness of the parameter estimates in the sense that the influence of aberrant observations, or outliers, on the estimate is bounded, so that the neural network is able to model the bulk of the data. We also seek a trade-off between robustness and efficiency under a Gaussian model. An adaptive learning procedure that addresses both aspects is developed. Finally, we show simulation results on the RESEX time series.
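A common way to realize the bounded-influence idea described above (a minimal sketch only, not the authors' adaptive procedure; `predict` and `grad_predict` are hypothetical placeholders for the network's forward pass and its weight gradient) is to pass the residual through a clipped, Huber-type ψ-function before the gradient update:

```python
import numpy as np

def huber_psi(residual, delta=1.0):
    """Clipped (Huber-type) influence function: linear near zero, so the
    estimator stays efficient under a Gaussian model, and bounded for large
    residuals, so a single outlier cannot dominate the parameter update."""
    return np.clip(residual, -delta, delta)

def robust_update(weights, x, y, predict, grad_predict, lr=0.01, delta=1.0):
    """One robust gradient step for a generic regression model."""
    r = y - predict(weights, x)
    return weights + lr * huber_psi(r, delta) * grad_predict(weights, x)
```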
|
Alvarenga, T. C., De Lima, R. R., Simao, S. D., Junior, L. C. B., Bueno, J. S. D., Alvarenga, R. R., et al. (2022). Ensemble of hybrid Bayesian networks for predicting the AMEn of broiler feedstuffs. Comput. Electron. Agric., 198, 107067.
Abstract: To adequately meet the nutritional needs of broilers, it is necessary to know the values of apparent metabolizable energy corrected by the nitrogen balance (AMEn) of the feedstuffs. To determine AMEn values, biological assays, feedstuff composition tables, or prediction equations are used as a function of the chemical composition of feedstuffs, the latter using statistical methodologies such as multiple linear regression, neural networks, and Bayesian networks (BN). BN is a statistical and computational methodology that consists of graphical (graph) and probabilistic models of quantitative and/or qualitative variables. Ensembles of BN are yet to be explored in the area of broiler nutrition, as there is no research showing their AMEn prediction performance. The purpose of this article is to propose and use ensembles of hybrid Bayesian networks (EHBNs) and obtain prediction equations for the AMEn of feedstuffs used in broiler nutrition from their chemical compositions. We trained 100, 1,000, and 10,000 EHBNs, and in this way empirical distributions were found for the coefficients of the covariates (crude protein, ether extract, mineral matter, and crude fiber). The mean, median, and mode of these distributions were then calculated to build prediction equations for AMEn. The method for obtaining the coefficients of the covariates discussed in this article is an unprecedented proposal in the field of broiler nutrition. The data used to obtain the equations were obtained by meta-analysis, and the data for the validation of the equations were obtained from metabolic tests. The proposed equations were evaluated by precision measures such as the mean square error (MSE), mean absolute deviation (MAD), and mean absolute percentage error (MAPE). The best equations for predicting AMEn were derived from the mean or mode coefficients for the 10,000 EHBN results. In conclusion, the methodology used is a good tool to obtain prediction equations for AMEn as a function of the chemical composition of the feedstuffs. The coefficients were found to differ from those obtained by other methodologies, such as conventional neural networks or multiple linear regression. The field of broiler nutrition thus gains new prediction equations and a previously unapplied methodology, distinguished by obtaining its coefficients from empirical distributions.
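A minimal analogue of summarizing coefficient distributions over many fitted models (here plain least squares on bootstrap resamples rather than the authors' hybrid Bayesian networks; the variable roles are assumptions) looks like:

```python
import numpy as np

def bootstrap_coefficient_summary(X, y, n_models=1000, rng=None):
    """Fit many OLS models on bootstrap resamples and summarize the empirical
    distribution of each coefficient (mean and median).

    X: (n_samples, n_features) chemical-composition covariates, e.g. crude
       protein, ether extract, mineral matter, crude fiber.
    y: observed AMEn values.
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])          # add an intercept column
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        coefs.append(beta)
    coefs = np.array(coefs)
    return {"mean": coefs.mean(axis=0), "median": np.median(coefs, axis=0)}
```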
|
Alvarez-Miranda, E., & Pereira, J. (2017). Designing and constructing networks under uncertainty in the construction stage: Definition and exact algorithmic approach. Comput. Oper. Res., 81, 178–191.
Abstract: The present work proposes a novel network optimization problem whose core is to combine both network design and network construction scheduling under uncertainty into a single two-stage robust optimization model. The first-stage decisions correspond to those of a classical network design problem, while the second-stage decisions correspond to those of a network construction scheduling problem (NCS) under uncertainty. The resulting problem, which we will refer to as the Two-Stage Robust Network Design and Construction Problem (2SRNDC), aims at providing a modeling framework in which the design decision not only depends on the design costs (e.g., distances) but also on the corresponding construction plan (e.g., time to provide service to customers). We provide motivations, mixed integer programming formulations, and an exact algorithm for the 2SRNDC. Experimental results on a large set of instances show the effectiveness of the model in providing robust solutions, and the capability of the proposed algorithm to provide good solutions in reasonable running times.
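As context, the generic shape of such a two-stage robust model (a schematic form, not the paper's exact mixed integer formulation) chooses first-stage design decisions x before the uncertainty ξ ∈ Ξ is revealed and second-stage construction-schedule decisions y afterwards; c and q are generic cost terms:

```latex
\min_{x \in X} \; c^{\top} x \;+\; \max_{\xi \in \Xi} \; \min_{y \in Y(x,\,\xi)} \; q(x, y, \xi)
```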
|
Alvarez-Miranda, E., & Pereira, J. (2019). On the complexity of assembly line balancing problems. Comput. Oper. Res., 108, 182–186.
Abstract: Assembly line balancing is a family of combinatorial optimization problems that has been widely studied in the literature due to its simplicity and industrial applicability. Most line balancing problems are NP-hard as they subsume the bin packing problem as a special case. Nevertheless, it is common in the line balancing literature to cite [A. Gutjahr and G. Nemhauser, An algorithm for the line balancing problem, Management Science 11 (1964) 308-315] in order to assess the computational complexity of the problem. Such an assessment is not correct, since the work of Gutjahr and Nemhauser predates the concept of NP-hardness. This work points to over 50 publications since 1995 with the aforesaid error.
|
Alvarez-Miranda, E., Pereira, J., & Vila, M. (2023). Analysis of the simple assembly line balancing problem complexity. Comput. Oper. Res., 159, 106323.
Abstract: The simple assembly line balancing problem (SALBP) involves the determination of the assignment of elementary assembly operations to the workstations of the assembly line for the manufacture of a final product, with the objective of maximising assembly efficiency. In addition to its practicality, the SALBP can be considered as an extension of the bin packing problem (BPP) to account for the precedence relations between items. These constraints introduce an ordering component to the problem, which increases the complexity of SALBP resolution. However, previous studies indicated that precedence constraints do not play an important role in the capacity of state-of-the-art procedures to solve benchmark instances to optimality. In this study, we analysed the influences of different features of an SALBP instance on the performance of state-of-the-art solution methods for the abovementioned problem. First, we provide an alternative proof of complexity for the SALBP that uses precedence constraints to demonstrate its non-deterministic polynomial time (NP)-complete status, followed by a new set of benchmark instances directed towards an empirical analysis of the different features of SALBP instances. The experimental results revealed that the packing features of the SALBP are a major source of the perceived difficulty for any instance; however, precedence constraints play a role in the performance of these solution procedures. Specifically, the number of precedence constraints plays an important role in the results obtained from state-of-the-art methods. In addition to the analysis, certain issues that were identified in the publicly available implementations of the state-of-the-art method for resolving this problem were addressed in this study.
|
Araujo, J., Ducoffe, G., Nisse, N., & Suchan, K. (2018). On interval number in cycle convexity. Discret. Math. Theor. Comput. Sci., 20(1), 35 pp.
Abstract: Recently, Araujo et al. [Manuscript in preparation, 2017] introduced the notion of Cycle Convexity of graphs. In their seminal work, they studied the graph convexity parameter called hull number for this new graph convexity they proposed, and they presented some of its applications in Knot theory. Roughly, the tunnel number of a knot embedded in a plane is upper bounded by the hull number of a corresponding planar 4-regular graph in cycle convexity. In this paper, we go further in the study of this new graph convexity and we study the interval number of a graph in cycle convexity. This parameter is, alongside the hull number, one of the most studied parameters in the literature about graph convexities. Precisely, given a graph G, its interval number in cycle convexity, denoted by in_cc(G), is the minimum cardinality of a set S ⊆ V(G) such that every vertex w ∈ V(G) \ S has two distinct neighbors u, v ∈ S such that u and v lie in the same connected component of G[S], i.e. the subgraph of G induced by the vertices in S. In this work, first we provide bounds on in_cc(G) and its relations to other graph convexity parameters, and explore its behaviour on grids. Then, we present some hardness results by showing that deciding whether in_cc(G) ≤ k is NP-complete, even if G is a split graph or a bounded-degree planar graph, and that the problem is W[2]-hard in bipartite graphs when k is the parameter. As a consequence, we obtain that in_cc(G) cannot be approximated up to a constant factor in the classes of split graphs and bipartite graphs (unless P = NP). On the positive side, we present polynomial-time algorithms to compute in_cc(G) for outerplanar graphs, cobipartite graphs and interval graphs. We also present fixed-parameter tractable (FPT) algorithms to compute it for (q, q – 4)-graphs when q is the parameter and for general graphs G when parameterized either by the treewidth or the neighborhood diversity of G. Some of our hardness results and positive results are not known to hold for related graph convexities and domination problems. We hope that the design of our new reductions and polynomial-time algorithms can be helpful in order to advance the study of related graph problems.
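The defining condition of in_cc(G) can be checked directly; the brute-force sketch below (using networkx, and not reproducing any algorithm from the paper) verifies the condition for a candidate set S and computes in_cc(G) by exhaustive search on small graphs:

```python
from itertools import combinations
import networkx as nx

def is_cc_interval_set(G, S):
    """Check the defining condition: every vertex outside S has two distinct
    neighbours in S lying in the same connected component of G[S]."""
    S = set(S)
    components = list(nx.connected_components(G.subgraph(S)))
    for w in set(G) - S:
        nbrs_in_S = set(G[w]) & S
        if not any(len(nbrs_in_S & comp) >= 2 for comp in components):
            return False
    return True

def interval_number_cc(G):
    """Exponential-time computation of in_cc(G) by exhaustive search
    (suitable only for very small graphs)."""
    V = list(G)
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            if is_cc_interval_set(G, S):
                return k
    return len(V)
```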
|
Atkinson, J., & Maurelia, A. (2017). Redundancy-Based Trust in Question-Answering Systems. Computer, 50(1), 58–65.
Abstract: By combining user preferences, redundancy analysis, and trust-network inference, the proposed trust model can augment candidate answers with information about target sources on the basis of connections with other web users and sources. Experiments show that the model is more effective overall than trust analyses based on inference alone.
|
Averbakh, I., & Pereira, J. (2021). Tree optimization based heuristics and metaheuristics in network construction problems. Comput. Oper. Res., 128, 105190.
Abstract: We consider a recently introduced class of network construction problems where edges of a transportation network need to be constructed by a server (construction crew). The server has a constant construction speed which is much lower than its travel speed, so relocation times are negligible with respect to construction times. It is required to find a construction schedule that minimizes a non-decreasing function of the times when various connections of interest become operational. Most problems of this class are strongly NP-hard on general networks, but are often tree-efficient, that is, polynomially solvable on trees. We develop a generic local search heuristic approach and two metaheuristics (Iterated Local Search and Tabu Search) for solving tree-efficient network construction problems on general networks, and explore them computationally. Results of computational experiments indicate that the methods have excellent performance.
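For orientation, a generic iterated local search skeleton of the kind explored in the paper can be sketched as follows (schematic only; the paper's specific neighbourhoods, tree-based moves, and tabu variant are not reproduced, and all callables are placeholders):

```python
def iterated_local_search(initial, local_search, perturb, cost, iterations=100):
    """Generic ILS skeleton: repeatedly perturb the incumbent construction
    schedule and re-apply local search, keeping the best schedule found."""
    best = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(best))
        if cost(candidate) < cost(best):
            best = candidate
    return best
```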
|
Canals, C., Maroulis, S., Canessa, E., Chaigneau, S., & Mizala, A. (2022). Mechanisms Underlying Choice-Set Formation: The Case of School Choice in Chile. Soc. Sci. Comput. Rev., Early Access.
Abstract: Many decisions involve selecting among many more options than an individual can effectively examine and consider. Therefore, people usually consider smaller and different “choice sets” as viable options. To better understand the processes affecting choice-set formation, we developed a computational model of how households become aware of potential choices in a context for which understanding household decision-making has important public policy implications: market-based reforms in education. In the model, households learn about the schools to which they can send their children through three mechanisms: encountering geographically proximate schools, accessing publicly available information, and gathering information from interactions with other households. We calibrated the model using data from four cities in Chile, where students are not required to attend their neighborhood school. We then used the model to conduct hypothetical computational experiments that assessed how each mechanism impacted the sets of schools known to households before they make their choice (their “awareness set”). We found that the inclusion of a social interaction mechanism was crucial for producing simulated awareness sets that matched the awareness sets provided in a survey conducted by the Chilean Ministry of Education. We also found that the social interaction mechanism played the largest role in determining the quality and price range of the choices available in households’ awareness sets. Our findings highlight the importance of social interactions in a stage of decision-making before the direct impact of other individuals is typically made explicit. Moreover, the study validates an approach that can be used in future models where understanding how decision-makers become aware of their options may be as important as the way they choose among them.
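A minimal sketch of the three awareness mechanisms (hypothetical attribute and helper names, not the calibrated model) could combine them in one update step:

```python
def update_awareness(household, schools, neighbours, public_info, radius, distance):
    """One awareness-set update combining the model's three mechanisms:
    geographic proximity, publicly available information, and social
    interaction with other households."""
    nearby = {s for s in schools
              if distance(household.location, s.location) <= radius}
    social = set().union(*(n.awareness for n in neighbours)) if neighbours else set()
    return household.awareness | nearby | set(public_info) | social
```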
|
Carrasco, R. A., Pruhs, K., Stein, C., & Verschae, J. (2018). The Online Set Aggregation Problem. In Lecture Notes in Computer Science (Vol. 10807, pp. 245–259).
|
Casalino, G., Castellano, G., Hryniewicz, O., Leite, D., Opara, K., Radziszewska, W., et al. (2023). Semi-Supervised vs. Supervised Learning for Mental Health Monitoring: A Case Study on Bipolar Disorder. Int. J. Appl. Math. Comput. Sci., 33(3), 419–428.
Abstract: Acoustic features of speech are promising as objective markers for mental health monitoring. Specialized smartphone apps can gather such acoustic data without disrupting the daily activities of patients. Nonetheless, the psychiatric assessment of the patient's mental state is typically a sporadic occurrence that takes place every few months. Consequently, only a small fraction of the acoustic data is labeled and applicable for supervised learning. The majority of the related work on mental health monitoring limits the considerations only to labeled data using a predefined ground-truth period. On the other hand, semi-supervised methods make it possible to utilize the entire dataset, exploiting the regularities in the unlabeled portion of the data to improve the predictive power of a model. To assess the applicability of semi-supervised learning approaches, we discuss selected state-of-the-art semi-supervised classifiers, namely, label spreading, label propagation, a semi-supervised support vector machine, and the self-training classifier. We use real-world data obtained from a bipolar disorder patient to compare the performance of the different methods with that of baseline supervised learning methods. The experiment shows that semi-supervised learning algorithms can outperform supervised algorithms in predicting bipolar disorder episodes.
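For reference, several of the classifiers named above are available in scikit-learn; a minimal illustration on synthetic data (not the study's acoustic features or clinical recordings) marks unlabeled samples with -1 and evaluates on the hidden portion:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import LabelSpreading, SelfTrainingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) < 0.9      # hide 90% of the labels
y_partial = y.copy()
y_partial[unlabeled] = -1                 # -1 marks unlabeled samples

spread = LabelSpreading().fit(X, y_partial)
self_train = SelfTrainingClassifier(LogisticRegression(max_iter=1000)).fit(X, y_partial)

# Evaluate on the originally unlabeled portion only.
print("label spreading:", spread.score(X[unlabeled], y[unlabeled]))
print("self-training  :", self_train.score(X[unlabeled], y[unlabeled]))
```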
|
Chang, Q., Zhou, C. C., Valdebenito, M. A., Liu, H. W., & Yue, Z. F. (2022). A novel sensitivity index for analyzing the response of numerical models with interval inputs. Comput. Methods in Appl. Mech. Eng., 400, 115509.
Abstract: This study proposes a novel sensitivity index to provide essential insights into numerical models whose inputs are characterized by intervals. Based on the interval model and its normalized form, the interval processes are introduced to define a new sensitivity index. The index can represent the individual or joint influence of the interval inputs on the output of a considered model. A double-loop strategy, based on global metamodeling and optimization, is established to calculate the index. Subsequently, the proposed index is theoretically compared with two other existing indices, and it is experimentally applied to three numerical examples and a practical engineering problem of a honeycomb sandwich radome. The results indicate that the proposed index is an effective tool for interval sensitivity analysis.
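The proposed index itself is not reproduced here; as a rough illustration of a double-loop interval analysis (a sketch under assumed helper names, using scipy's differential evolution as the global optimizer), one can compare the output interval with and without one interval input pinned to its midpoint:

```python
import numpy as np
from scipy.optimize import differential_evolution

def output_interval(model, bounds):
    """Approximate the output interval of `model` over box-shaped input
    intervals by global minimization of f and of -f (the outer loop of a
    double-loop interval analysis)."""
    lo = differential_evolution(model, bounds, seed=0).fun
    hi = -differential_evolution(lambda x: -model(x), bounds, seed=0).fun
    return lo, hi

def pinned_width_index(model, bounds, i):
    """Crude interval-sensitivity measure (illustrative only, not the paper's
    index): relative shrinkage of the output interval when input i is pinned
    to its interval midpoint."""
    full_lo, full_hi = output_interval(model, bounds)
    mid = 0.5 * (bounds[i][0] + bounds[i][1])
    rest = [b for j, b in enumerate(bounds) if j != i]

    def reduced(z):                      # re-insert the pinned coordinate
        return model(np.insert(np.asarray(z), i, mid))

    p_lo, p_hi = output_interval(reduced, rest)
    return 1.0 - (p_hi - p_lo) / (full_hi - full_lo)
```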
|
de Figueiredo, C. M. H., de Mello, C. P., & Ortiz, C. (2000). Edge colouring reduced indifference graphs. Lect. Notes Comput. Sc., 1776, 145–153.
Abstract: The chromatic index problem – finding the minimum number of colours required for colouring the edges of a graph – is still unsolved for indifference graphs, whose vertices can be linearly ordered so that the vertices contained in the same maximal clique are consecutive in this order. Two adjacent vertices are twins if they belong to the same maximal cliques. A graph is reduced if it contains no pair of twin vertices. A graph is overfull if the total number of edges is greater than the product of the maximum degree and ⌊n/2⌋, where n is the number of vertices. We give a structural characterization of neighbourhood-overfull indifference graphs, proving that a reduced indifference graph cannot be neighbourhood-overfull. We show that the chromatic index of all reduced indifference graphs is the maximum degree.
|
de Figueiredo, C. M. H., Meidanis, J., de Mello, C. P., & Ortiz, C. (2003). Decompositions for the edge colouring of reduced indifference graphs. Theor. Comput. Sci., 297(1-3), 145–155.
Abstract: The chromatic index problem – finding the minimum number of colours required for colouring the edges of a graph – is still unsolved for indifference graphs, whose vertices can be linearly ordered so that the vertices contained in the same maximal clique are consecutive in this order. We present new positive evidence for the conjecture: every non-neighbourhood-overfull indifference graph can be edge coloured with maximum degree colours. Two adjacent vertices are twins if they belong to the same maximal cliques. A graph is reduced if it contains no pair of twin vertices. A graph is overfull if the total number of edges is greater than the product of the maximum degree and ⌊n/2⌋, where n is the number of vertices. We give a structural characterization of neighbourhood-overfull indifference graphs, proving that a reduced indifference graph cannot be neighbourhood-overfull. We show that the chromatic index of all reduced indifference graphs is the maximum degree. We present two decomposition methods for edge colouring reduced indifference graphs with maximum degree colours.
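In the notation of the two abstracts above, a graph G on n vertices with maximum degree Δ(G) is overfull when the first inequality below holds, and the stated result is the equality on the right for every reduced indifference graph:

```latex
|E(G)| \;>\; \Delta(G)\,\Bigl\lfloor \tfrac{n}{2} \Bigr\rfloor ,
\qquad\qquad
\chi'(G) \;=\; \Delta(G).
```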
|
Dölz, J., Harbrecht, H., Jerez-Hanckes, C., & Multerer, M. (2022). Isogeometric multilevel quadrature for forward and inverse random acoustic scattering. Comput. Methods in Appl. Mech. Eng., 388, 114242.
Abstract: We study the numerical solution of forward and inverse time-harmonic acoustic scattering problems by randomly shaped obstacles in three-dimensional space using a fast isogeometric boundary element method. Within the isogeometric framework, realizations of the random scatterer can efficiently be computed by simply updating the NURBS mappings which represent the scatterer. This way, we end up with a random deformation field. In particular, we show that it suffices to know the deformation field’s expectation and covariance at the scatterer’s boundary to model the surface’s Karhunen–Loève expansion. Leveraging on the isogeometric framework, we employ multilevel quadrature methods to approximate quantities of interest such as the scattered wave’s expectation and variance. By computing the wave’s Cauchy data at an artificial, fixed interface enclosing the random obstacle, we can also directly infer quantities of interest in free space. Adopting the Bayesian paradigm, we finally compute the expected shape and variance of the scatterer from noisy measurements of the scattered wave at the artificial interface. Numerical results for the forward and inverse problems validate the proposed approach.
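For orientation, multilevel quadrature estimates a quantity of interest such as the scattered wave's expectation by a telescoping sum over discretization levels (a standard generic form; the paper's specific pairing of spatial and stochastic levels may differ), where u_ℓ is the level-ℓ boundary element solution and Q_{L-ℓ} a quadrature rule in the random parameter whose resolution is balanced against ℓ:

```latex
\mathbb{E}[u] \;\approx\; \sum_{\ell=0}^{L} Q_{L-\ell}\bigl[\, u_{\ell} - u_{\ell-1} \,\bigr],
\qquad u_{-1} := 0 .
```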
|
Escapil-Inchauspe, P., & Jerez-Hanckes, C. (2021). Bi-parametric operator preconditioning. Comput. Math. Appl., 102, 220–232.
Abstract: We extend the operator preconditioning framework of Hiptmair (2006) [10] to Petrov-Galerkin methods while accounting for parameter-dependent perturbations of both variational forms and their preconditioners, as occurs when performing numerical approximations. By considering different perturbation parameters for the original form and its preconditioner, our bi-parametric abstract setting leads to robust and controlled schemes. For Hilbert spaces, we derive exhaustive linear and super-linear convergence estimates for iterative solvers, such as h-independent convergence bounds, when preconditioning with low-accuracy or, equivalently, highly compressed approximations.
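Schematically, classical operator preconditioning (the unperturbed setting that the paper extends) bounds the condition number of the preconditioned Galerkin matrix, with A and B the matrices of the original and preconditioning forms, D the matrix of the duality pairing between the discrete spaces, ‖a‖, ‖b‖, ‖d‖ continuity constants, and α_a, α_b, β_d the corresponding inf-sup constants; the estimate is of the form

```latex
\kappa\!\left( \mathsf{D}^{-1}\,\mathsf{B}\,\mathsf{D}^{-\top}\,\mathsf{A} \right)
\;\le\; \frac{\|a\|\,\|b\|\,\|d\|^{2}}{\alpha_{a}\,\alpha_{b}\,\beta_{d}^{2}} .
```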
|
Ferran, S., Beghelli, A., Huerta-Canepa, G., & Jensen, F. (2018). Correctness assessment of a crowdcoding project in a computer programming introductory course. Comput. Appl. Eng. Educ., 26(1), 162–170.
Abstract: Crowdcoding is a programming model that outsources a software project implementation to the crowd. As educators, we think that crowdcoding could be leveraged as part of the learning path of engineering students from a computer programming introductory course to solve local community problems. The benefits are twofold: on the one hand, the students practice the concepts learned in class and, on the other hand, they participate in real-life problems. Nevertheless, several challenges arise when developing a crowdcoding platform, the first being how to check the correctness of students' code without placing an extra burden on the professors of the course. To overcome this issue, we propose a novel system that neither resorts to expert review nor requires knowing the right answers beforehand. The proposed scheme automatically clusters the students' submissions based solely on the output they produce. Our initial results show that the largest cluster contains the same submissions selected as correct by both automated and human testing, as long as some conditions apply.
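The clustering-by-output idea can be sketched as follows (a sketch only; `run_submission` is a hypothetical helper standing in for the platform's sandboxed execution of student code):

```python
from collections import defaultdict

def cluster_by_output(submissions, test_inputs, run_submission):
    """Group student submissions by the tuple of outputs they produce on a
    shared set of test inputs; the largest cluster is taken as the likely
    correct behaviour."""
    clusters = defaultdict(list)
    for sub_id, code in submissions.items():
        outputs = tuple(run_submission(code, x) for x in test_inputs)
        clusters[outputs].append(sub_id)
    largest = max(clusters.values(), key=len)
    return largest, clusters
```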
|
Figueroa, A., & Atkinson, J. (2019). Dual-View Learning for Detecting Web Query Intents. Computer, 52(8), 34–42.
Abstract: Automatically categorizing user intent behind web queries is a key issue not only for improving information retrieval tasks but also for designing tailored displays based on the underlying intention. In this article, a multiview learning method is proposed to recognize the user intent behind web searches.
|
Fomin, F. V., Golovach, P. A., Kratochvil, J., Nisse, N., & Suchan, K. (2010). Pursuing a fast robber on a graph. Theor. Comput. Sci., 411(7-9), 1167–1181.
Abstract: The Cops and Robbers game, as originally defined independently by Quilliot and by Nowakowski and Winkler in the 1980s, has been much studied, but very few results pertain to its algorithmic and complexity aspects. In this paper we prove that computing the minimum number of cops that are guaranteed to catch a robber on a given graph is NP-hard and that the parameterized version of the problem is W[2]-hard; the proof extends to the case where the robber moves s times faster than the cops. We show that on split graphs, the problem is polynomially solvable if s = 1 but is NP-hard if s = 2. We further prove that on graphs of bounded cliquewidth the problem is polynomially solvable for s ≤ 2. Finally, we show that for planar graphs the minimum number of cops is unbounded if the robber is faster than the cops.
|
Fraigniaud, P., Montealegre-Barba, P., Oshman, R., Rapaport, I., & Todinca, I. (2019). On Distributed Merlin-Arthur Decision Protocols. In Lecture Notes in Computer Science (Vol. 11639).
|