Informational content of cosine and other similarities calculated from high-dimensional Conceptual Property Norm data
Canessa, E. (author)
Chaigneau, S. E. (author)
Moreno, S. (author)
Lagos, R. (author)
2020
English
To study concepts that are coded in language, researchers often collect lists of conceptual properties produced by human subjects. From these data, different measures can be computed. In particular, inter-concept similarity is an important variable used in experimental studies. Among possible similarity measures, the cosine of conceptual property frequency vectors seems to be a de facto standard. However, there is a lack of comparative studies testing the merits of different similarity measures when computed from property frequency data. The current work compares four similarity measures (cosine, correlation, Euclidean, and Chebyshev) across five types of data structures. To that end, we compared the informational content (i.e., entropy) delivered by each of those 4 × 5 = 20 combinations, and used a clustering procedure as a concrete example of how informational content affects statistical analyses. Our results lead us to conclude that similarity measures computed from lower-dimensional data fare better than those calculated from higher-dimensional data, and suggest that researchers should be more aware of data sparseness and dimensionality, and of their consequences for statistical analyses.
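The four measures named in the abstract are standard and easy to state concretely. Below is a minimal sketch, assuming hypothetical property-frequency vectors (counts of how often each property was listed for a concept); the example vectors and function names are illustrative, not taken from the paper's norms data.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine of the angle between two property-frequency vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def correlation_sim(a, b):
    # Pearson correlation between the two vectors.
    return float(np.corrcoef(a, b)[0, 1])

def euclidean_dist(a, b):
    # Straight-line distance between the vectors.
    return float(np.linalg.norm(a - b))

def chebyshev_dist(a, b):
    # Largest absolute coordinate-wise difference.
    return float(np.max(np.abs(a - b)))

# Hypothetical frequency vectors for two concepts over five properties.
dog = np.array([30.0, 12.0, 0.0, 5.0, 9.0])
cat = np.array([28.0, 10.0, 1.0, 4.0, 11.0])

print(cosine_sim(dog, cat))
print(correlation_sim(dog, cat))
print(euclidean_dist(dog, cat))
print(chebyshev_dist(dog, cat))
```

Note that cosine and correlation are similarities (higher means more alike), while Euclidean and Chebyshev are distances (lower means more alike); analyses that compare them, as the paper does, must account for that orientation difference.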
Cosine similarity
Euclidean distance
Chebyshev distance
Clustering
Conceptual properties
WOS:000546845700001
exported from refbase (http://ficpubs.uai.cl/show.php?record=1180), last updated on Sat, 01 Aug 2020 14:50:19 +0000
text
10.1007/s10339-020-00985-5
Canessa_etal2020
Cognitive Processing
Cogn. Process.
2020
Springer Heidelberg
continuing
periodical
academic journal
to appear
14 pp
1612-4782