Allende, H., Salas, R., & Moraga, C. (2003). A robust and effective learning algorithm for feedforward neural networks based on the influence function. Lect. Notes Comput. Sc., 2652, 28–36.
Abstract: The learning process of feedforward artificial neural networks relies on the data, so a robustness analysis of the model's parameter estimates must be carried out when outlying observations are present. In this paper we seek robustness in the parameter estimates, in the sense that the influence of aberrant observations (outliers) on the estimate is bounded, so that the neural network can model the bulk of the data. We also seek a trade-off between robustness and efficiency under a Gaussian model. An adaptive learning procedure that pursues both aspects is developed. Finally, we show some simulation results applied to the RESEX time series.
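The bounded-influence idea can be illustrated with a Huber-type M-estimator for a location parameter. This is a generic sketch of the technique, not the authors' exact algorithm; the tuning constant k = 1.345 is the standard choice for 95% Gaussian efficiency, and the simple fixed-step iteration is illustrative.

```python
def huber_psi(r, k=1.345):
    # Influence function of the Huber loss: identity near zero, clipped at +-k,
    # so any single residual's pull on the estimate is bounded by k.
    return max(-k, min(k, r))

def robust_mean(xs, k=1.345, lr=0.1, steps=200):
    """Location M-estimate: iterate mu <- mu + lr * mean(psi(x - mu)).

    Because psi is bounded, one aberrant observation can shift the
    estimate only by a bounded amount, unlike the ordinary mean.
    """
    mu = 0.0
    for _ in range(steps):
        mu += lr * sum(huber_psi(x - mu, k) for x in xs) / len(xs)
    return mu
```

For clean data `robust_mean([1.0, 1.1, 0.9])` converges to the ordinary mean, while adding a gross outlier such as 100.0 moves the estimate only slightly, which is the bounded-influence behavior the abstract refers to.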
|
Bugedo, G., Tobar, E., Alegria, L., Oviedo, V., Arellano, D., Basoalto, R., et al. (2023). Development of mechanical ventilators in Chile. Chronicle of the initiative "Un Respiro para Chile". Rev. Med. Chile, 150(7), 958–965.
Abstract: At the beginning of the COVID-19 pandemic in Chile, in March 2020, a projection indicated that a significant group of patients with pneumonia would require admission to an intensive care unit and connection to a mechanical ventilator; a shortage of these devices and other supplies was therefore predicted. The initiative "Un respiro para Chile" brought together many people and institutions, both public and private. Over the course of three months, it enabled the design and construction of several ventilatory assistance devices that could be used in critically ill patients.
|
Cisternas, J., Navarro, M., Duarte, S., & Concha, A. (2022). Equilibrium and symmetries of altitudinal magnetic rotors on a circle. Chaos, 32(12), 123120.
Abstract: Macroscopic magnets can easily be manipulated and positioned so that interactions between themselves and with external fields induce interesting dynamics and equilibrium configurations. In this work, we use rotating magnets positioned in a line or at the vertices of a regular polygon. The rotation planes of the magnets can be modified at will. The rich structure of stable and unstable configurations is dictated by symmetry and the number of sides of the polygon. We show that both symmetric solutions and their symmetry-breaking bifurcations can be explained with group theory. Our results suggest that the predicted magnetic textures should emerge at any length scale as long as the interaction is polar and the system is endowed with the same symmetries.
|
Fernandez, C., Valle, C., Saravia, F., & Allende, H. (2012). Behavior analysis of neural network ensemble algorithm on a virtual machine cluster. Neural Comput. Appl., 21(3), 535–542.
Abstract: Ensemble learning has gained considerable attention in different learning tasks, including regression, classification, and clustering problems. One of the drawbacks of ensembles is the high computational cost of the training stages. Resampling local negative correlation (RLNC) is a technique that combines two well-known methods for generating ensemble diversity, resampling and negative error correlation, with a fine-grained parallel approach, allowing a satisfactory balance between accuracy and efficiency. In this paper, we introduce a virtual machine structure aimed at testing diverse parameter selection strategies in neural ensemble designs such as RLNC. We assess the parallel performance of this approach on a virtual machine cluster based on the full virtualization paradigm, using speedup and efficiency as performance metrics, for different numbers of processors and training data sizes.
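The negative-correlation component can be sketched with the standard NC learning penalty, in which each ensemble member i minimizes (f_i - y)^2 + lam * p_i with p_i = -(f_i - fbar)^2, pushing members away from the ensemble mean and thus promoting diversity. This is the generic NC rule, not the paper's RLNC algorithm (which additionally combines it with resampling); the penalty weight lam is illustrative.

```python
def nc_member_gradients(preds, y, lam=0.5):
    """Per-member gradients for negative correlation learning (illustrative).

    preds: current predictions f_i of the ensemble members for one sample.
    Gradient of (f_i - y)^2 - lam * (f_i - fbar)^2 w.r.t. f_i, treating
    fbar as a constant, as is conventional in NC learning.
    """
    fbar = sum(preds) / len(preds)
    return [2.0 * (f - y) - 2.0 * lam * (f - fbar) for f in preds]
```

With lam = 0 this reduces to independent squared-error training; increasing lam trades individual accuracy for ensemble diversity.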
|
Goles, E., & Palacios, A. G. (2007). Dynamical complexity in cognitive neural networks. Biol. Res., 40(4), 479–485.
Abstract: In the last twenty years, an important effort in the brain sciences, especially in cognitive science, has been the development of mathematical tools that can deal with the complexity of extensive recordings of neuronal activity obtained from hundreds of neurons. We discuss here, along with some historical issues, the advantages and limitations of artificial neural networks (ANNs), which can help us understand how simple brain circuits work, and whether ANNs can be helpful for understanding the brain's neural complexity.
|
Mellado, P. (2020). Timescales in the thermal dynamics of magnetic dipolar clusters. Phys. Rev. B, 102(21), 214442.
Abstract: The collective behavior of thermally active structures offers clues on the emergent degrees of freedom and the physical mechanisms that determine the low-energy state of a variety of systems. Here, the thermally active dynamics of magnetic dipoles at square plaquettes is modeled in terms of Brownian oscillators in contact with a heat bath. Solution of the Langevin equation for a set of interacting x-y dipoles allows the identification of the timescales and correlation length that reveal how interactions, temperature, damping, and inertia may determine the frequency modes of edge and bulk magnetic mesospins in artificial dipolar systems.
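A Brownian oscillator in contact with a heat bath, as invoked above, can be sketched with Euler-Maruyama integration of the underdamped Langevin equation I dω = (-kθ - γω) dt + sqrt(2γ kB T) dW (with kB = 1). This is a minimal one-coordinate stand-in, not the paper's interacting x-y dipole model; all parameter values are illustrative.

```python
import math
import random

def langevin_trajectory(steps=10000, dt=1e-3, gamma=1.0, k=1.0,
                        temperature=0.5, inertia=1.0, seed=0):
    """Euler-Maruyama integration of a damped, thermally driven oscillator.

    The noise amplitude sqrt(2*gamma*T*dt) follows the fluctuation-
    dissipation relation, so the trajectory samples the thermal
    equilibrium of the harmonic well.
    """
    rng = random.Random(seed)
    theta, omega = 0.0, 0.0
    noise = math.sqrt(2.0 * gamma * temperature * dt)
    traj = []
    for _ in range(steps):
        # Deterministic restoring and damping forces plus the thermal kick.
        omega += (-(k * theta + gamma * omega) * dt
                  + noise * rng.gauss(0.0, 1.0)) / inertia
        theta += omega * dt
        traj.append(theta)
    return traj
```

At equilibrium the variance of θ approaches kB·T/k (equipartition), and the decay of the θ autocorrelation exposes the damping and inertial timescales the abstract refers to.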
|
Salas, R., Allende, H., Moreno, S., & Saavedra, C. (2005). Flexible Architecture of Self Organizing Maps for changing environments. Lect. Notes Comput. Sc., 3773, 642–653.
Abstract: Catastrophic interference is a well-known problem of artificial neural network (ANN) learning algorithms, in which the ANN forgets useful knowledge while learning from new data. Furthermore, the structure of most neural models must be chosen in advance. In this paper we introduce a hybrid algorithm called Flexible Architecture of Self Organizing Maps (FASOM) that overcomes catastrophic interference and preserves the topology of clustered data in changing environments. The model consists of K receptive fields of self-organizing maps. Each receptive field projects high-dimensional data from the input space onto a neuron position in a low-dimensional output grid by dynamically adapting its structure to a specific region of the input space. Furthermore, the FASOM model automatically finds the number of maps and prototypes needed to adapt successfully to the data. The model is capable of both growing its structure when novel clusters appear and gradually forgetting when the data volume in its receptive fields is reduced. Finally, we show the capabilities of our model with experimental results on synthetic sequential data sets and real-world data.
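The receptive-field adaptation builds on the standard self-organizing-map update rule. A minimal sketch of that underlying rule follows; the grid layout, learning rate, and Gaussian neighborhood width are generic SOM choices, not the FASOM algorithm itself.

```python
import math

def som_update(weights, grid, x, lr=0.5, sigma=1.0):
    """One SOM step: find the best-matching unit (BMU), then pull every
    prototype toward x with a Gaussian neighborhood on the output grid.

    weights: list of prototype vectors; grid: their output-grid positions.
    Mutates weights in place and returns the BMU index.
    """
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
    for i, w in enumerate(weights):
        # Neighborhood strength decays with grid distance to the BMU.
        d2 = sum((g - h) ** 2 for g, h in zip(grid[i], grid[bmu]))
        h_i = lr * math.exp(-d2 / (2.0 * sigma ** 2))
        weights[i] = [wk + h_i * (xk - wk) for wk, xk in zip(w, x)]
    return bmu
```

Because neighbors of the BMU are dragged along with it, repeated updates preserve the topology of the input data on the output grid, which is the property FASOM maintains per receptive field.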
|