|
Arbelaez, H., Hernandez, R., & Sierra, W. (2019). Normal harmonic mappings. Monatsh. Math., 190(3), 425–439.
Abstract: The main purpose of this paper is to study the concept of normal function in the context of harmonic mappings from the unit disk D to the complex plane. In particular, we obtain necessary conditions for a function f to be normal.
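For context, the classical notion that the paper extends to harmonic mappings is the Lehto–Virtanen definition of a normal meromorphic function on the unit disk D. As a background reminder (standard definition, not taken from this paper's abstract), f is normal when its spherical derivative is bounded relative to the hyperbolic metric:

```latex
% Classical (Lehto-Virtanen) definition of a normal meromorphic function on D;
% the cited paper studies the analogous notion for harmonic mappings.
\sup_{z \in \mathbb{D}} \bigl(1 - |z|^2\bigr)\, f^{\#}(z) < \infty,
\qquad f^{\#}(z) = \frac{|f'(z)|}{1 + |f(z)|^2}.
```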
|
|
|
Canessa, E., & Chaigneau, S. E. (2020). Mathematical regularities of data from the property listing task. J. Math. Psychol., 97, 19 pp.
Abstract: To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list “is a pet”, “has four legs”, etc.), which are then coded into property types (i.e., superficially dissimilar properties such as “has four legs” and “is a quadruped” may be coded as “four legs”). When the PLT is done for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, there is a lack of a formal model of the PLT, which would provide better analysis tools. Particularly, nobody has attempted analyzing the PLT's listing process. Thus, in the current work we develop a mathematical description of the PLT. Our analyses indicate that several regularities should be found in the observable data obtained from a PLT. Using data from three different CPNs (from 3 countries and 2 different languages), we show that these regularities do in fact exist and generalize well across different CPNs. Overall, our results suggest that the description of the regularities found in PLT data may be fruitfully used in the study of concepts.
|
|
|
Canessa, E., Chaigneau, S. E., Lagos, R., & Medina, F. A. (2021). How to carry out conceptual properties norming studies as parameter estimation studies: Lessons from ecology. Behav. Res. Methods, 53, 354–370.
Abstract: Conceptual properties norming studies (CPNs) ask participants to produce properties that describe concepts. From that data, different metrics may be computed (e.g., semantic richness, similarity measures), which are then used in studying concepts and as a source of carefully controlled stimuli for experimentation. Notwithstanding those metrics' demonstrated usefulness, researchers have customarily overlooked that they are only point estimates of the true unknown population values, and therefore, only rough approximations. Thus, though research based on CPN data may produce reliable results, those results are likely to be general and coarse-grained. In contrast, we suggest viewing CPNs as parameter estimation procedures, where researchers obtain only estimates of the unknown population parameters. Thus, more specific and fine-grained analyses must consider those parameters' variability. To this end, we introduce a probabilistic model from the field of ecology. Its related statistical expressions can be applied to compute estimates of CPNs' parameters and their corresponding variances. Furthermore, those expressions can be used to guide the sampling process. The traditional practice in CPN studies is to use the same number of participants across concepts, intuitively believing that practice will render the computed metrics comparable across concepts and CPNs. In contrast, the current work shows why an equal number of participants per concept is generally not desirable. Using CPN data, we show how to use the equations and discuss how they may allow more reasonable analyses and comparisons of parameter values among different concepts in a CPN, and across different CPNs.
|
|
|
Canessa, E., Chaigneau, S. E., Moreno, S., & Lagos, R. (2023). CPNCoverageAnalysis: An R package for parameter estimation in conceptual properties norming studies. Behav. Res. Methods, 55, 554–569.
Abstract: In conceptual properties norming studies (CPNs), participants list properties that describe a set of concepts. From CPNs, many different parameters are calculated, such as semantic richness. A generally overlooked issue is that those values are only point estimates of the true unknown population parameters. In the present work, we present an R package that allows us to treat those values as population parameter estimates. Relatedly, a general practice in CPNs is using an equal number of participants who list properties for each concept (i.e., standardizing sample size). As we illustrate through examples, this procedure has negative effects on data's statistical analyses. Here, we argue that a better method is to standardize coverage (i.e., the proportion of sampled properties to the total number of properties that describe a concept), such that a similar coverage is achieved across concepts. When standardizing coverage rather than sample size, it is more likely that the set of concepts in a CPN all exhibit a similar representativeness. Moreover, by computing coverage the researcher can decide whether the CPN reached a sufficiently high coverage, so that its results might be generalizable to other studies. The R package we make available in the current work allows one to compute coverage and to estimate the necessary number of participants to reach a target coverage. We show this sampling procedure by using the R package on real and simulated CPN data.
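The package itself is written in R; purely as a language-agnostic illustration of the coverage idea described above (not the package's API — the function name and the Chao1-style richness estimator are assumptions chosen for this sketch), coverage for a single concept can be approximated as the ratio of observed distinct properties to an estimate of the true number of properties:

```python
from collections import Counter

def estimate_coverage(property_lists):
    """Illustrative coverage estimate for one concept.

    property_lists: one list of properties per participant, e.g.
        [["is a pet", "has four legs"], ["barks", "is a pet"], ...]
    Returns observed richness, a Chao1-style estimate of total richness,
    and their ratio as the coverage estimate.
    """
    # Count in how many participant lists each distinct property appears.
    counts = Counter(p for plist in property_lists for p in set(plist))
    s_obs = len(counts)                              # distinct properties observed
    f1 = sum(1 for c in counts.values() if c == 1)   # properties listed exactly once
    f2 = sum(1 for c in counts.values() if c == 2)   # properties listed exactly twice
    # Bias-corrected Chao1-style lower bound on the true number of properties.
    s_est = s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))
    return s_obs, s_est, s_obs / s_est

lists = [["is a pet", "has four legs"], ["barks", "is a pet"], ["has fur"]]
print(estimate_coverage(lists))
```

The denominator here uses a nonparametric richness estimator borrowed from ecology, in the spirit of the sampling framework of the 2021 paper above; the actual estimators implemented in CPNCoverageAnalysis may differ.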
|
|
|
Canessa, E., Chaigneau, S. E., & Moreno, S. (2023). Describing and understanding the time course of the Property Listing Task. Cogn. Process., Early Access.
Abstract: To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list “is a pet”, “has four legs”, etc.). When PLT data is collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, only recently has a model that describes the listing course of a PLT been developed and validated. That original model describes the listing course using order of production of properties. Here we go a step beyond and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibits the same regularities observed in the previous model, but now we can also analyze the time course, i.e., the dynamics of the PLT. As such, the RT-validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task and the Verbal Fluency Task, and to examine related cognitive processes. To illustrate those kinds of analyses, we present a brief example of the difference in the PLT's dynamics between listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.
|
|
|
Chaigneau, S. E., Canessa, E., Barra, C., & Lagos, R. (2018). The role of variability in the property listing task. Behav. Res. Methods, 50(3), 972–988.
Abstract: It is generally believed that concepts can be characterized by their properties (or features). When investigating concepts encoded in language, researchers often ask subjects to produce lists of properties that describe them (i.e., the Property Listing Task, PLT). These lists are accumulated to produce Conceptual Property Norms (CPNs). CPNs contain frequency distributions of properties for individual concepts. It is widely believed that these distributions represent the underlying semantic structure of those concepts. Here, instead of focusing on the underlying semantic structure, we aim at characterizing the PLT. An often disregarded aspect of the PLT is that individuals show intersubject variability (i.e., they produce only partially overlapping lists). In our study we use a mathematical analysis of this intersubject variability to guide our inquiry. To this end, we resort to a set of publicly available norms that contain information about the specific properties that were informed at the individual subject level. Our results suggest that when an individual is performing the PLT, he or she generates a list of properties that is a mixture of general and distinctive properties, such that there is a non-linear tendency to produce more general than distinctive properties. Furthermore, the low generality properties are precisely those that tend not to be repeated across lists, accounting in this manner for part of the intersubject variability. In consequence, any manipulation that may affect the mixture of general and distinctive properties in lists is bound to change intersubject variability. We discuss why these results are important for researchers using the PLT.
|
|
|
Chuaqui, M., & Hernandez, R. (2023). Families of holomorphic mappings in the polydisk. Complex Var. Elliptic Equ., Early Access.
Abstract: We study classes of locally biholomorphic mappings defined in the polydisk P-n that have bounded Schwarzian operator in the Bergman metric. We establish important properties of specific solutions of the associated system of differential equations, and show a geometric connection between the order of the classes and a covering property. We show for modified and slightly larger classes that the order is Lipschitz continuous with respect to the bound on the Schwarzian, and use this to estimate the order of the original classes.
|
|
|
de la Cruz, R., Salinas, H. S., & Meza, C. (2022). Reliability Estimation for Stress-Strength Model Based on Unit-Half-Normal Distribution. Symmetry, 14(4), 837.
Abstract: Many lifetime distribution models have successfully served as population models for risk analysis and reliability mechanisms. We propose a novel estimation procedure for stress-strength reliability in the case of two independent unit-half-normal distributions, which can fit asymmetrical data with either positive or negative skew and with different shape parameters. We obtain the maximum likelihood estimator of the reliability, its asymptotic distribution, and exact and asymptotic confidence intervals. In addition, confidence intervals of model parameters are constructed by using bootstrap techniques. We study the performance of the estimators based on Monte Carlo simulations, the mean squared error, average bias and length, and coverage probabilities. Finally, we apply the proposed reliability model in a data analysis of burr measurements on iron sheets.
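For background on the quantity being estimated (a textbook identity rather than anything specific to the paper's unit-half-normal derivation), stress-strength reliability for independent stress X and strength Y supported on (0, 1) is

```latex
% Stress-strength reliability for independent stress X and strength Y on (0,1)
R = P(X < Y) = \int_{0}^{1} F_X(y)\, f_Y(y)\, \mathrm{d}y ,
```

and, by the invariance property of maximum likelihood, its ML estimator is obtained by evaluating this expression at the ML estimates of the two shape parameters.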
|
|
|
Faes, M. G. R., & Valdebenito, M. A. (2021). Fully decoupled reliability-based optimization of linear structures subject to Gaussian dynamic loading considering discrete design variables. Mech. Syst. Signal Process., 156, 107616.
Abstract: Reliability-based optimization (RBO) offers the possibility of finding an optimal design for a system according to a prescribed criterion while explicitly taking into account the effects of uncertainty. However, due to the necessity of solving simultaneously a reliability problem nested in an optimization procedure, the corresponding computational cost is usually high, impeding the applicability of the methods. This computational cost is even further enlarged when one or several design variables must belong to a discrete set, due to the requirement of resorting to integer programming optimization algorithms. To alleviate this issue, this contribution proposes a fully decoupled approach for a specific class of problems, namely minimization of the failure probability of a linear system subjected to an uncertain dynamic load of the Gaussian type, under the additional constraint that the design variables are integer-valued. Specifically, by using the operator norm framework, as developed by the authors in previous work, this paper shows that by reducing the RBO problem with discrete design variables to the solution of a single deterministic optimization problem followed by a single reliability analysis, a large gain in numerical efficiency can be obtained without compromising the accuracy of the resulting optimal design. The application and capabilities of the proposed approach are illustrated by means of three examples.
|
|
|
Fierro, R., Leiva, V., & Balakrishnan, N. (2015). Statistical Inference on a Stochastic Epidemic Model. Commun. Stat.-Simul. Comput., 44(9), 2297–2314.
Abstract: In this work, we develop statistical inference for the parameters of a discrete-time stochastic SIR epidemic model. We use a Markov chain for describing the dynamic behavior of the epidemic. Specifically, we propose estimators for the contact and removal rates based on the maximum likelihood and martingale methods, and establish their asymptotic distributions. The obtained results are applied in the statistical analysis of the basic reproduction number, a quantity that is useful in establishing vaccination policies. In order to evaluate the population size for which the results are useful, a numerical study is carried out. Finally, a comparison of the maximum likelihood and martingale estimators is conducted by means of Monte Carlo simulations.
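For reference, in the standard SIR formulation with contact (transmission) rate β and removal rate γ (general background, not a result of this paper), the basic reproduction number and the implied critical vaccination coverage are

```latex
% Basic reproduction number and herd-immunity threshold for the SIR model
R_0 = \frac{\beta}{\gamma}, \qquad p_c = 1 - \frac{1}{R_0},
```

so an outbreak can take off only when R_0 > 1, which is why estimating the contact and removal rates bears directly on vaccination policy.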
|
|
|
Gaitan-Espitia, J. D., Bacigalupe, L. D., Opitz, T., Lagos, N. A., Timmermann, T., & Lardies, M. A. (2014). Geographic variation in thermal physiological performance of the intertidal crab Petrolisthes violaceus along a latitudinal gradient. J. Exp. Biol., 217(24), 4379–4386.
Abstract: Environmental temperature has profound effects on the biological performance and biogeographical distribution of ectothermic species. Variation of this abiotic factor across geographic gradients is expected to produce physiological differentiation and local adaptation of natural populations depending on their thermal tolerances and physiological sensitivities. Here, we studied geographic variation in whole-organism thermal physiology of seven populations of the porcelain crab Petrolisthes violaceus across a latitudinal gradient of 3000 km, characterized by a cline of thermal conditions. Our study found that populations of P. violaceus show no differences in the limits of their thermal performance curves and demonstrate a negative correlation of their optimal temperatures with latitude. Additionally, our findings show that high-latitude populations of P. violaceus exhibit broader thermal tolerances, which is consistent with the climatic variability hypothesis. Interestingly, under a future scenario of warming oceans, the thermal safety margins of P. violaceus indicate that lower latitude populations can physiologically tolerate the ocean-warming scenarios projected by the IPCC for the end of the twenty-first century.
|
|
|
Garcia-Papani, F., Uribe-Opazo, M. A., Leiva, V., & Aykroyd, R. G. (2017). Birnbaum-Saunders spatial modelling and diagnostics applied to agricultural engineering data. Stoch. Environ. Res. Risk Assess., 31(1), 105–124.
Abstract: Applications of statistical models to describe spatial dependence in geo-referenced data are widespread across many disciplines including the environmental sciences. Most of these applications assume that the data follow a Gaussian distribution. However, in many of them the normality assumption, and even a more general assumption of symmetry, are not appropriate. In non-spatial applications, where the data are uni-modal and positively skewed, the Birnbaum-Saunders (BS) distribution has excelled. This paper proposes a spatial log-linear model based on the BS distribution. Model parameters are estimated using the maximum likelihood method. Local influence diagnostics are derived to assess the sensitivity of the estimators to perturbations in the response variable. As illustration, the proposed model and its diagnostics are used to analyse a real-world agricultural data set, where the spatial variability of phosphorus concentration in the soil is considered, which is extremely important for agricultural management.
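For reference, the (non-spatial) Birnbaum-Saunders distribution underlying the proposed spatial log-linear model has the standard cumulative distribution function below (the paper's spatial extension is not reproduced here):

```latex
% Birnbaum-Saunders CDF with shape alpha > 0 and scale beta > 0, for t > 0
F(t; \alpha, \beta) =
  \Phi\!\left[ \frac{1}{\alpha}\left( \sqrt{\frac{t}{\beta}} - \sqrt{\frac{\beta}{t}} \right) \right],
```

where Φ is the standard normal CDF; the distribution is unimodal and positively skewed, which is what makes it suitable for the phosphorus-concentration data described above.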
|
|
|
Ni, P. H., Jerez, D. J., Fragkoulis, V. C., Faes, M. G. R., Valdebenito, M. A., & Beer, M. (2022). Operator Norm-Based Statistical Linearization to Bound the First Excursion Probability of Nonlinear Structures Subjected to Imprecise Stochastic Loading. ASCE-ASME J. Risk Uncertain. Eng. Syst. A-Civ. Eng., 8(1), 04021086.
Abstract: This paper presents a highly efficient approach for bounding the responses and probability of failure of nonlinear models subjected to imprecisely defined stochastic Gaussian loads. Typically, such computations involve solving a nested double-loop problem, where the propagation of the aleatory uncertainty has to be performed for each realization of the epistemic parameters. Apart from near-trivial cases, such computation is generally intractable without resorting to surrogate modeling schemes, especially in the context of performing nonlinear dynamical simulations. The recently introduced operator norm framework allows for breaking this double loop by determining those values of the epistemic uncertain parameters that produce bounds on the probability of failure a priori. However, the method in its current form is only applicable to linear models due to the adopted assumptions in the derivation of the involved operator norms. In this paper, the operator norm framework is extended and generalized by resorting to the statistical linearization methodology, so that bounds on the first excursion probability of nonlinear models under imprecise stochastic loading can be obtained.
|
|
|
Quinteros, M. J., Villena, M. J., & Villena, M. G. (2022). An evolutionary game theoretic model of whistleblowing behaviour in organizations. IMA J. Manag. Math., 33(2), 289–314.
Abstract: We present a theoretical model of corruption in organizations. Our specific focus is the role of incentives that aim to encourage whistleblowing behaviour. Corruption is modelled as a social norm of behaviour using evolutionary game theory. In particular, the dynamics of whistleblowing behaviour is captured using the replicator dynamics equation with constant and quadratic monitoring costs. We formally explore the local asymptotic stability of the equilibria. Our findings indicate that the traditional recommendations of the Beckerian approach are usually too expensive and/or unstable. We argue that an efficient mechanism for controlling corruption can be achieved by maintaining efficient salaries and imposing high rewards for whistleblowers when they detect wrongdoing. In the long term, employees can only be honest, or corrupt, or corrupt and whistleblowers; honest and whistleblowing behaviour will not coexist in the long run, since one of these two strategies is always dominated by the other.
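For context, the replicator dynamics mentioned in the abstract take the standard form below (general background; the paper's specific payoff functions and monitoring-cost terms are not reproduced here):

```latex
% Replicator dynamics: the share x_i of strategy i grows when its expected
% payoff f_i(x) exceeds the population-average payoff \bar{f}(x)
\dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right),
\qquad \bar{f}(x) = \sum_j x_j f_j(x),
```

with the strategies here being honest, corrupt, and whistleblowing behaviour; local asymptotic stability of the equilibria is then assessed from this system.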
|
|
|
Rojas, F., Wanke, P., Coluccio, G., Vega-Vargas, J., & Huerta-Canepa, G. F. (2020). Managing slow-moving item: a zero-inflated truncated normal approach for modeling demand. PeerJ Comput. Sci., 6, 22 pp.
Abstract: This paper proposes a slow-moving management method for a system using intermittent demand per unit time and lead time demand of items in service enterprise inventory models. Our method uses the zero-inflated truncated normal statistical distribution, which makes it possible to model intermittent demand per unit time using a mixed statistical distribution. We conducted numerical experiments based on an algorithm used to forecast intermittent demand over fixed lead time to show that our proposed distributions improved the performance of the continuous review inventory model with shortages. We evaluated multi-criteria elements (total cost, fill-rate, shortage of quantity per cycle, and the adequacy of the statistical distribution of the lead time demand) for decision analysis using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). We confirmed that our method improved the performance of the inventory model in comparison to other commonly used approaches such as simple exponential smoothing and Croston's method. We found an interesting association between the intermittency of demand per unit of time, the square root of this same parameter, and reorder point decisions, which could be explained using a classical multiple linear regression model. We confirmed that the parameter of variability of the zero-inflated truncated normal statistical distribution used to model intermittent demand was positively related to the decision of reorder points. Our study examined a decision analysis using an illustrative example. Our suggested approach is original, valuable, and, in the case of slow-moving item management for service companies, allows for the verification of decision-making using multiple criteria.
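As a rough sketch of the distributional idea only (parameter names are assumptions, and the paper's fitting and forecasting procedures are not reproduced here), intermittent demand per period can be simulated as a mixture of a point mass at zero and a normal distribution truncated to nonnegative values:

```python
import numpy as np
from scipy.stats import truncnorm

def simulate_intermittent_demand(n_periods, p_zero, mu, sigma, seed=None):
    """Draw per-period demand from a zero-inflated truncated normal model.

    With probability p_zero a period has no demand; otherwise demand is
    drawn from a normal(mu, sigma) truncated to the nonnegative half-line.
    """
    rng = np.random.default_rng(seed)
    # truncnorm takes bounds standardized as (bound - loc) / scale.
    a, b = (0.0 - mu) / sigma, np.inf
    positive = truncnorm.rvs(a, b, loc=mu, scale=sigma,
                             size=n_periods, random_state=rng)
    is_zero = rng.random(n_periods) < p_zero
    return np.where(is_zero, 0.0, positive)

demand = simulate_intermittent_demand(12, p_zero=0.6, mu=5.0, sigma=2.0, seed=1)
print(demand)
```

Summing simulated per-period demands over a fixed lead time gives a lead-time demand sample, which is the quantity the continuous review inventory model in the paper works with.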
|
|