Canessa, E., & Chaigneau, S. E. (2020). Mathematical regularities of data from the property listing task. J. Math. Psychol., 97, 19 pp.
Abstract: To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list “is a pet”, “has four legs”, etc.), which are then coded into property types (i.e., superficially dissimilar properties such as “has four legs” and “is a quadruped” may be coded as “four legs”). When the PLT is done for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, there is no formal model of the PLT that could provide better analysis tools. In particular, nobody has attempted to analyze the PLT's listing process. Thus, in the current work we develop a mathematical description of the PLT. Our analyses indicate that several regularities should be found in the observable data obtained from a PLT. Using data from three different CPNs (from 3 countries and 2 different languages), we show that these regularities do in fact exist and generalize well across different CPNs. Overall, our results suggest that the description of the regularities found in PLT data may be fruitfully used in the study of concepts. (C) 2020 Elsevier Inc. All rights reserved.
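The paper's equations are not reproduced in the abstract, so the following is only a minimal sketch, in Python, of one observable quantity of the kind such an analysis works with: the accumulation of distinct property types as more participants' lists are pooled. The data structure and function name are assumptions made for illustration.

```python
# Illustrative only: growth of distinct property types as participants'
# PLT lists are pooled (toy data; not the paper's mathematical model).
from typing import List, Set

def accumulation_curve(lists: List[Set[str]]) -> List[int]:
    """Distinct property types after pooling the first k participants' lists."""
    seen: Set[str] = set()
    curve = []
    for properties in lists:
        seen |= properties
        curve.append(len(seen))
    return curve

# Hypothetical PLT data for one concept (e.g., DOG): one set per participant.
plt_lists = [
    {"is a pet", "has four legs", "barks"},
    {"barks", "is loyal", "has four legs"},
    {"is a pet", "has fur", "barks", "wags tail"},
]
print(accumulation_curve(plt_lists))  # [3, 4, 6]
```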
Canessa, E., Chaigneau, S. E., Lagos, R., & Medina, F. A. (2021). How to carry out conceptual properties norming studies as parameter estimation studies: Lessons from ecology. Behav. Res. Methods, 53, 354–370.
Abstract: Conceptual properties norming studies (CPNs) ask participants to produce properties that describe concepts. From that data, different metrics may be computed (e.g., semantic richness, similarity measures), which are then used in studying concepts and as a source of carefully controlled stimuli for experimentation. Notwithstanding those metrics' demonstrated usefulness, researchers have customarily overlooked that they are only point estimates of the true unknown population values, and therefore, only rough approximations. Thus, though research based on CPN data may produce reliable results, those results are likely to be general and coarse-grained. In contrast, we suggest viewing CPNs as parameter estimation procedures, where researchers obtain only estimates of the unknown population parameters. Thus, more specific and fine-grained analyses must consider those parameters' variability. To this end, we introduce a probabilistic model from the field of ecology. Its related statistical expressions can be applied to compute estimates of CPNs' parameters and their corresponding variances. Furthermore, those expressions can be used to guide the sampling process. The traditional practice in CPN studies is to use the same number of participants across concepts, intuitively believing that this practice will render the computed metrics comparable across concepts and CPNs. In contrast, the current work shows why an equal number of participants per concept is generally not desirable. Using CPN data, we show how to use the equations and discuss how they may allow more reasonable analyses and comparisons of parameter values among different concepts in a CPN, and across different CPNs.
Canessa, E., Chaigneau, S. E., & Marchant, N. (2023). Use of Agent-Based Modeling (ABM) in Psychological Research. In Trends and Challenges in Cognitive Modeling (Vol. Early Access, pp. 7–20).
Abstract: In this chapter, we introduce the general use of agent-based modeling (ABM) in social science studies and in particular in psychological research. Given that ABM is frequently used in many social science disciplines, either as the main research tool or in conjunction with other modeling approaches, its infrequent use in psychology is rather surprising. There are many reasons for that infrequent use, some of them justified, while others stem from not knowing the potential benefits of applying ABM to psychological research. Thus, we begin by giving a brief overview of ABM and the stages one has to go through to develop and analyze such a model. Then, we present and discuss the general drawbacks of ABM and those specific to psychology. Through that discussion, the reader should be able to better assess whether those disadvantages are strong enough to preclude the application of ABM to his/her research. Finally, we state the benefits of ABM and examine how those advantages may outweigh the potential drawbacks, thus making ABM a valuable tool to consider in psychological research.
Canessa, E., Chaigneau, S. E., & Moreno, S. (2021). Language processing differences between blind and sighted individuals and the abstract versus concrete concept difference. Cogn. Sci., 45(10), e13044.
Abstract: In the Property Listing Task (PLT), participants are asked to list properties for a concept (e.g., for the concept dog, “barks” and “is a pet” may be produced). In Conceptual Property Norming studies (CPNs), participants are asked to list properties for large sets of concepts. Here, we use a mathematical model of the property listing process to explore two longstanding issues: characterizing the difference between concrete and abstract concepts, and characterizing semantic knowledge in the blind versus sighted population. When we apply our mathematical model to a large CPN reporting properties listed by sighted and blind participants, the model uncovers significant differences between concrete and abstract concepts. Though we also find that blind individuals show many of the same processing differences between abstract and concrete concepts found in sighted individuals, our model shows that those differences are noticeably less pronounced than in sighted individuals. We discuss our results vis-à-vis theories attempting to characterize abstract concepts.
Keywords: Concrete concepts; Abstract concepts; Blind subjects; Sighted subjects
Canessa, E., Chaigneau, S. E., & Moreno, S. (2022). Using agreement probability to study differences in types of concepts and conceptualizers. Behav. Res. Methods, Early Access.
Abstract: Agreement probability p(a) is a homogeneity measure of lists of properties produced by participants in a Property Listing Task (PLT) for a concept. Agreement probability's mathematical properties allow a rich analysis of property-based descriptions. To illustrate, we use p(a) to delve into the differences between concrete and abstract concepts in sighted and blind populations. Results show that concrete concepts are more homogeneous within sighted and blind groups than abstract ones (i.e., exhibit a higher p(a)), and that concrete concepts in the blind group are less homogeneous than in the sighted sample. This supports the idea that listed properties for concrete concepts should be more similar across subjects due to the influence of visual/perceptual information on the learning process. In contrast, abstract concepts are learned mainly from social and linguistic information, which exhibits more variability among people, thus making the listed properties more dissimilar across subjects. For abstract concepts, the difference in p(a) between sighted and blind participants is not statistically significant. Though this is a null result and should be considered with care, it is expected, because abstract concepts should be learned by attending to the same social and linguistic input in both blind and sighted individuals, and thus there is no reason to expect the respective lists of properties to differ. Finally, we used p(a) to classify concrete and abstract concepts with a good level of certainty. All these analyses suggest that p(a) can be fruitfully used to study data obtained in a PLT.
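The abstract does not reproduce the definition of p(a), so the sketch below is not Conceptual Agreement Theory's measure; it is only a rough, hypothetical proxy for the same notion of list homogeneity: the mean pairwise Jaccard overlap between participants' property lists. All data and names are illustrative assumptions.

```python
# Rough proxy for list homogeneity (NOT the CAT agreement probability p(a)):
# mean pairwise Jaccard overlap between participants' property lists.
from itertools import combinations
from typing import List, Set

def mean_pairwise_jaccard(lists: List[Set[str]]) -> float:
    pairs = list(combinations(lists, 2))
    if not pairs:
        return float("nan")
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

concrete = [{"barks", "is a pet"}, {"barks", "has fur"}, {"barks", "is a pet"}]
abstract = [{"is fair"}, {"involves voting"}, {"is a value"}]
print(mean_pairwise_jaccard(concrete))  # higher: more homogeneous lists
print(mean_pairwise_jaccard(abstract))  # lower: more heterogeneous lists
```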
Canessa, E., Chaigneau, S. E., Moreno, S., & Lagos, R. (2020). Informational content of cosine and other similarities calculated from high-dimensional Conceptual Property Norm data. Cogn. Process., 21, 601–614.
Abstract: To study concepts that are coded in language, researchers often collect lists of conceptual properties produced by human subjects. From these data, different measures can be computed. In particular, inter-concept similarity is an important variable used in experimental studies. Among possible similarity measures, the cosine of conceptual property frequency vectors seems to be a de facto standard. However, there is a lack of comparative studies that test the merit of different similarity measures when computed from property frequency data. The current work compares four different similarity measures (cosine, correlation, Euclidean and Chebyshev) and five different types of data structures. To that end, we compared the informational content (i.e., entropy) delivered by each of those 4 x 5 = 20 combinations, and used a clustering procedure as a concrete example of how informational content affects statistical analyses. Our results lead us to conclude that similarity measures computed from lower-dimensional data fare better than those calculated from higher-dimensional data, and suggest that researchers should be more aware of data sparseness and dimensionality, and their consequences for statistical analyses.
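As a hedged illustration of the four measures compared in the paper, the snippet below computes cosine, correlation, Euclidean, and Chebyshev values from two toy property frequency vectors using SciPy; the vectors and their column layout are assumptions, and the paper's entropy comparison is not reproduced here.

```python
# Sketch of the four similarity/distance measures compared in the paper,
# computed from property frequency vectors (toy data; column layout assumed).
import numpy as np
from scipy.spatial.distance import cosine, correlation, euclidean, chebyshev

# Rows: concepts; columns: property types; cells: listing frequencies (toy).
dog = np.array([10.0, 8.0, 0.0, 2.0])   # e.g., "barks", "pet", "flies", "four legs"
cat = np.array([0.0, 9.0, 0.0, 3.0])

print("cosine similarity:     ", 1 - cosine(dog, cat))       # distance -> similarity
print("correlation similarity:", 1 - correlation(dog, cat))
print("Euclidean distance:    ", euclidean(dog, cat))
print("Chebyshev distance:    ", chebyshev(dog, cat))
```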
Canessa, E., Chaigneau, S. E., Moreno, S., & Lagos, R. (2023). CPNCoverageAnalysis: An R package for parameter estimation in conceptual properties norming studies. Behav. Res. Methods, 55, 554–569.
Abstract: In conceptual properties norming studies (CPNs), participants list properties that describe a set of concepts. From CPNs, many different parameters are calculated, such as semantic richness. A generally overlooked issue is that those values are only point estimates of the true unknown population parameters. In the present work, we present an R package that allows us to treat those values as population parameter estimates. Relatedly, a general practice in CPNs is using an equal number of participants who list properties for each concept (i.e., standardizing sample size). As we illustrate through examples, this procedure has negative effects on the data's statistical analyses. Here, we argue that a better method is to standardize coverage (i.e., the proportion of sampled properties to the total number of properties that describe a concept), such that a similar coverage is achieved across concepts. When standardizing coverage rather than sample size, it is more likely that the set of concepts in a CPN all exhibit a similar representativeness. Moreover, by computing coverage the researcher can decide whether the CPN reached a sufficiently high coverage, so that its results might be generalizable to other studies. The R package we make available in the current work allows one to compute coverage and to estimate the necessary number of participants to reach a target coverage. We illustrate this sampling procedure by applying the R package to real and simulated CPN data.
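The package's exact estimators are not given in the abstract, so the sketch below is only a simplified stand-in: a Good-Turing-style sample coverage estimate (coverage ≈ 1 - f1/n, where f1 is the number of properties listed exactly once and n is the total number of listed property tokens). It conveys the idea of coverage without claiming to match what CPNCoverageAnalysis computes.

```python
# Simplified Good-Turing-style sample coverage estimate (an assumption:
# a stand-in for, not necessarily identical to, the package's estimator).
from collections import Counter
from typing import List

def sample_coverage(property_tokens: List[str]) -> float:
    """Estimated proportion of a concept's property distribution already sampled.

    property_tokens: every property occurrence listed by any participant
    (repeated listings of the same property appear repeatedly).
    """
    n = len(property_tokens)
    counts = Counter(property_tokens)
    f1 = sum(1 for c in counts.values() if c == 1)  # singleton properties
    return 1.0 - f1 / n if n else 0.0

tokens = ["barks", "barks", "is a pet", "has fur", "barks", "is loyal"]
print(sample_coverage(tokens))  # 3 singletons out of 6 tokens -> 0.5
```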
Canessa, E. C., & Chaigneau, S. E. (2016). When are concepts comparable across minds? Qual. Quant., 50(3), 1367–1384.
Abstract: In communication, people cannot resort to direct reference (e.g., pointing) when using diffuse concepts like democracy. Given that concepts reside in individuals' minds, how can people share those concepts? We argue that concepts are comparable across a social group if they afford agreement for those who use them; and that agreement occurs whenever individuals receive evidence that others conceptualize a given situation similarly to them. Drawing on Conceptual Agreement Theory, we show how to compute an agreement probability based on the sets of properties belonging to concepts. If that probability is sufficiently high, this shows that concepts afford an adequate level of agreement, and one may say that concepts are comparable across individuals' minds. In contrast to other approaches, our method considers that inter-individual variability in naturally occurring conceptual content exists and is a fact that must be taken into account, whereas other theories treat variability as error that should be cancelled out. Given that conceptual variability will exist, our approach may establish whether concepts are comparable across individuals' minds more soundly than previous methods.
Chaigneau, S. E., Canessa, E., Barra, C., & Lagos, R. (2018). The role of variability in the property listing task. Behav. Res. Methods, 50(3), 972–988.
Abstract: It is generally believed that concepts can be characterized by their properties (or features). When investigating concepts encoded in language, researchers often ask subjects to produce lists of properties that describe them (i.e., the Property Listing Task, PLT). These lists are accumulated to produce Conceptual Property Norms (CPNs). CPNs contain frequency distributions of properties for individual concepts. It is widely believed that these distributions represent the underlying semantic structure of those concepts. Here, instead of focusing on the underlying semantic structure, we aim at characterizing the PLT. An often disregarded aspect of the PLT is that individuals show intersubject variability (i.e., they produce only partially overlapping lists). In our study we use a mathematical analysis of this intersubject variability to guide our inquiry. To this end, we resort to a set of publicly available norms that contain information about the specific properties that were reported at the individual subject level. Our results suggest that when an individual is performing the PLT, he or she generates a list of properties that is a mixture of general and distinctive properties, such that there is a non-linear tendency to produce more general than distinctive properties. Furthermore, the low generality properties are precisely those that tend not to be repeated across lists, accounting in this manner for part of the intersubject variability. In consequence, any manipulation that may affect the mixture of general and distinctive properties in lists is bound to change intersubject variability. We discuss why these results are important for researchers using the PLT.
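A small, hypothetical illustration of the generality notion used above: a property's generality can be approximated by the proportion of participants who list it, which makes concrete why low-generality (distinctive) properties are the ones rarely repeated across lists. Data, structure, and names are assumptions.

```python
# Illustration of property "generality": the proportion of participants who
# list a given property. Low-generality properties are, by construction, the
# ones rarely repeated across lists (toy data, assumed structure).
from collections import Counter
from typing import Dict, List, Set

def property_generality(lists: List[Set[str]]) -> Dict[str, float]:
    counts = Counter(p for properties in lists for p in properties)
    n = len(lists)
    return {prop: c / n for prop, c in counts.items()}

plt_lists = [
    {"is a pet", "barks", "chases cats"},
    {"is a pet", "barks", "likes frisbees"},
    {"is a pet", "has four legs"},
]
for prop, g in sorted(property_generality(plt_lists).items(), key=lambda x: -x[1]):
    print(f"{prop}: {g:.2f}")
```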
Chaigneau, S. E., Canessa, E., & Gaete, J. (2012). Conceptual agreement theory. New Ideas Psychol., 30(2), 179–189.
Abstract: For some time now, psychological inquiry on reference has assumed that reference is achieved through causal links between words and entities (i.e., direct reference). In this view, meaning is not relevant for reference or co-reference. We argue that this view may be germane to concrete objects, but not to diffuse objects (that lack clear spatio-temporal limits, thus preventing the use of direct reference in interactions). Here, we propose that meaning is the relevant dimension when referring to diffuse entities, and introduce Conceptual Agreement Theory (CAT). CAT is a mathematized theory of meaning that specifies the conditions under which two individuals (or one individual at two points in time) will infer they share a diffuse referent. We present the theory, and use stereotype stability and public opinion as case studies to illustrate the theory's use and scope. (C) 2011 Elsevier Ltd. All rights reserved.
Keywords: Reference; Shared reference; Meaning; Agreement; Joint action
Chaigneau, S. E., Canessa, E., Lenci, A., & Devereux, B. (2020). Eliciting semantic properties: methods and applications. Cogn. Process., 21(4), 583–586.
Abstract: Asking subjects to list semantic properties for concepts is essential for predicting performance in several linguistic and non-linguistic tasks and for creating carefully controlled stimuli for experiments. The property elicitation task and the ensuing norms are widely used across the field, to investigate the organization of semantic memory and design computational models thereof. The contributions of the current Special Topic discuss several core issues concerning how semantic property norms are constructed and how they may be used for research aiming at understanding cognitive processing.
Chaigneau, S. E., Marchant, N., Canessa, E., & Aldunate, N. (2024). A mathematical model of semantic access in lexical and semantic decisions. Lang. Cogn., Early Access.
Abstract: In this work, we use a mathematical model of the property listing task dynamics and test its ability to predict processing time in semantic and lexical decision tasks. The study aims at exploring the temporal dynamics of semantic access in these tasks and showing that the mathematical model captures essential aspects of semantic access, beyond the original task for which it was developed. In two studies using the semantic and lexical decision tasks, we used the mathematical model's coefficients to predict reaction times. Results showed that the model was able to predict processing time in both tasks, accounting for an independent portion of the total variance, relative to variance predicted by traditional psycholinguistic variables (i.e., frequency, familiarity, concreteness, imageability). Overall, this study provides evidence of the mathematical model's validity and generality, and offers insights regarding the characterization of concrete and abstract words.
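A hedged sketch of the kind of incremental-variance analysis described: compare the variance in reaction times explained by traditional psycholinguistic predictors alone against a model that also includes a model-derived coefficient. Everything below (data, variable names, effect sizes) is simulated for illustration and is not the authors' analysis.

```python
# Hedged sketch of an incremental-variance comparison: does adding a
# model-derived coefficient improve R^2 over psycholinguistic predictors?
# All data are simulated; variable names are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
freq, familiarity, concreteness = rng.normal(size=(3, n))
model_coef = rng.normal(size=n)                      # a model-derived predictor
rt = 600 - 20 * freq - 10 * concreteness - 15 * model_coef + rng.normal(0, 30, n)

baseline = np.column_stack([freq, familiarity, concreteness])
full = np.column_stack([baseline, model_coef])

r2_baseline = LinearRegression().fit(baseline, rt).score(baseline, rt)
r2_full = LinearRegression().fit(full, rt).score(full, rt)
print(f"R^2 baseline: {r2_baseline:.3f}, with model coefficient: {r2_full:.3f}")
```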
Chaigneau, S. E., Puebla, G., & Canessa, E. C. (2016). Why the designer's intended function is central for proper function assignment and artifact conceptualization: Essentialist and normative accounts. Dev. Rev., 41, 38–50.
Abstract: People tend to think that the function intended by an artifact's designer is its real or proper function. Relatedly, people tend to classify artifacts according to their designer's intended function (DIF), as opposed to an alternative opportunistic function. This centrality of DIF has been shown in children from 6 years of age to adults, and it is not restricted to Western societies. We review four different explanations for the centrality of DIF, integrating developmental and adult data. Two of these explanations are essentialist accounts (causal and intentional essentialism). Two of them are normative accounts (conventional function and idea ownership). Though essentialist accounts have been very influential, we review evidence that shows their limitations. Normative accounts have been less predominant. We review evidence to support them, and discuss how they account for the data. In particular, we review evidence suggesting that the centrality of DIF can be explained as a case of idea ownership. This theory makes sense of a great deal of the existing data on the subject, reconciles contradictory results, links this line of work to other literatures, and offers an account of the observed developmental trend. (C) 2016 Elsevier Inc. All rights reserved.
Keywords: Artifacts; Function; Design; Essentialism; Ownership
Lagos, R., Canessa, E., & Chaigneau, S. E. (2019). Modeling stereotypes and negative self-stereotypes as a function of interactions among groups with power asymmetries. J. Theory Soc. Behav., 49(3), 312–333.
Abstract: Stereotypes are among the most researched topics in social psychology. Within this context, negative self-stereotypes pose a particular challenge for theories. In the current work, we propose a model that suggests that negative self-stereotypes can theoretically be accounted for by the need to communicate in a social system made up of groups with unequal power. Because our theory is dynamic, probabilistic, and interactionist, we use a computational simulation technique to show that the proposed model is able to reproduce the phenomenon of interest, to provide novel accounts of related phenomena, and to suggest novel empirical predictions. We describe our computational model, our variables' dynamic behavior and interactions, and link our analyses to the literature on stereotypes and self-stereotypes, the stability of stereotypes (in particular, gender and racial stereotypes), the effects of power asymmetries, and the effects of intergroup contact.
Keywords: negative self-stereotypes; agent-based simulation; social power; stereotypes
Marchant, N., Canessa, E., & Chaigneau, S. E. (2022). An adaptive linear filter model of procedural category learning. Cogn. Process., 23(3), 393–405.
Abstract: We use a feature-based association model to fit grouped and individual level category learning and transfer data. The model assumes that people use corrective feedback to learn individual feature to categorization-criterion correlations and combine those correlations additively to produce classifications. The model is an Adaptive Linear Filter (ALF) with a logistic output function and a Least Mean Squares learning algorithm. Categorization probabilities are computed by a logistic function. Our data span over 31 published data sets. At both grouped and individual levels of analysis, the model performs remarkably well, accounting for large amounts of the available variance. When fitted to grouped data, it outperforms alternative models. When fitted to individual level data, it is able to capture learning and transfer performance with high explained variance. Notably, the model achieves its fits with a minimal number of free parameters. We discuss the ALF's advantages as a model of procedural categorization, in terms of its simplicity, its ability to capture empirical trends and its ability to solve challenges to other associative models. In particular, we discuss why the model is not equivalent to a prototype model, as previously thought.
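A minimal sketch of an adaptive linear filter of the type named in the abstract: a weighted sum of binary features passed through a logistic function, with weights adjusted by an LMS-style corrective (delta) rule. The feature coding, learning rate, and toy stimuli are assumptions, not the authors' fitted model.

```python
# Minimal Adaptive Linear Filter sketch: logistic output over a weighted
# feature sum, weights updated by an LMS-style corrective (delta) rule.
import numpy as np

def train_alf(stimuli, labels, lr=0.1, epochs=50):
    """stimuli: (n, d) binary feature matrix; labels: (n,) 0/1 category codes."""
    w = np.zeros(stimuli.shape[1])
    for _ in range(epochs):
        for x, y in zip(stimuli, labels):
            p = 1.0 / (1.0 + np.exp(-w @ x))   # logistic categorization probability
            w += lr * (y - p) * x              # corrective feedback update
    return w

# Toy category structure: feature 0 predicts category 1, feature 2 predicts 0.
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]], dtype=float)
y = np.array([1, 1, 0, 0])
w = train_alf(X, y)
print(w, 1.0 / (1.0 + np.exp(-X @ w)))  # learned weights and category probabilities
```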
Marchant, N., Canessa, E., & Chaigneau, S. E. (2023). Challenges from Probabilistic Learning for Models of Brain and Behavior. In Trends and Challenges in Cognitive Modeling (Vol. Early Access, pp. 73–84).
Abstract: Probabilistic learning is a research program that aims to understand how animals and humans learn and adapt their behavior in situations where the pairing between cues and outcomes is not always completely reliable. This chapter provides an overview of the challenges of probabilistic learning for models of the brain and behavior. We discuss the historical background of probabilistic learning, its theoretical foundations, and its applications in various fields such as psychology, neuroscience, and artificial intelligence. We also review some key findings from experimental studies on probabilistic learning, including the role of feedback, attention, memory, and decision-making processes. Finally, we highlight some of the current debates and future directions in this field.
Ramos, D., Moreno, S., Canessa, E., Chaigneau, S. E., & Marchant, N. (2023). AC-PLT: An algorithm for computer-assisted coding of semantic property listing data. Behav. Res. Methods, Early Access.
Abstract: In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research studies to gain insights into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to manually code the data, which can be time-consuming and prone to errors, leading to reliability concerns. Our algorithm aims to address these challenges by automatically assigning human-created codes to feature listing data, achieving quantitatively good agreement with human coders. Our preliminary results suggest that our algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. Additionally, this tool is an important step toward developing a fully automated coding algorithm, which we are currently devising.
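The abstract does not spell out the algorithm's components, so the sketch below is not AC-PLT itself; it shows a common baseline for the same job, assuming TF-IDF vectors and cosine similarity: each uncoded response receives the human-created code of its most similar already-coded response.

```python
# Not the AC-PLT algorithm itself (its components aren't given above); a common
# baseline for the same job: assign each uncoded response the code of the most
# similar already human-coded response, using TF-IDF + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

coded_responses = ["it has four legs", "it is a quadruped", "kept as a pet"]
codes = ["four legs", "four legs", "pet"]          # human-created codes
new_responses = ["walks on four legs", "people keep it as a pet"]

vectorizer = TfidfVectorizer().fit(coded_responses + new_responses)
coded_vecs = vectorizer.transform(coded_responses)
new_vecs = vectorizer.transform(new_responses)

sims = cosine_similarity(new_vecs, coded_vecs)     # (new, coded) similarity matrix
for response, row in zip(new_responses, sims):
    print(response, "->", codes[row.argmax()])
```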