ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2023, Vol. 31 ›› Issue (6): 887-904. doi: 10.3724/SP.J.1042.2023.00887

• Research Method •

Using word embeddings to investigate human psychology: Methods and applications

BAO Han-Wu-Shuang1,2,3, WANG Zi-Xi1,2, CHENG Xi1,2, SU Zhan1,2, YANG Ying1,2, ZHANG Guang-Yao1,2,4, WANG Bo5, CAI Hua-Jian1,2

  1. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
    2. Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
    3. Manchester China Institute, The University of Manchester, Manchester M13 9PL, United Kingdom
    4. State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, and IDG/McGovern Institute for Brain Research, Beijing 100875, China
    5. College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
  • Received: 2022-08-23  Online: 2023-06-15  Published: 2023-03-07
  • Contact: CAI Hua-Jian, E-mail: caihj@psych.ac.cn

Abstract:

As a fundamental technique in natural language processing (NLP), word embedding represents a word as a low-dimensional, dense, and continuous numeric vector (i.e., a word vector). This representation is learned with machine learning algorithms such as neural networks, which automatically extract the semantic features of words. Word embeddings come in two types: static and dynamic. Static word embeddings aggregate all contextual information about a word across an entire corpus into a single fixed vector. They can be obtained by predicting the surrounding words given a word or vice versa (Word2Vec and FastText), or by modeling the co-occurrence probabilities of words (GloVe) in large-scale text corpora. Dynamic or contextualized word embeddings, in contrast, derive a word vector for a specific context; they can be generated by pre-trained language models such as ELMo, GPT, and BERT. In theory, the dimensions of a word vector reflect how the word is predicted from its contexts; in practice, they also encode substantial semantic information about the word. Word embeddings can therefore be used to analyze the semantic content of text.
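To make these ideas concrete, the Python sketch below (NumPy only) computes cosine similarity between word vectors and the classic king - man + woman analogy. The toy 4-dimensional vectors and the helper name cosine_similarity are illustrative assumptions, not part of any particular embedding model; real word vectors are typically 100-1000 dimensional and would be loaded from a pre-trained model such as Word2Vec or GloVe.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors:
    1 = same direction, 0 = orthogonal (semantically unrelated)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 4-dimensional vectors, fabricated for illustration only.
vectors = {
    "king":  np.array([0.8, 0.6, 0.1, 0.2]),
    "queen": np.array([0.7, 0.6, 0.8, 0.2]),
    "man":   np.array([0.9, 0.1, 0.1, 0.3]),
    "woman": np.array([0.8, 0.1, 0.8, 0.3]),
}

# Semantic similarity measured as vector similarity.
print(cosine_similarity(vectors["king"], vectors["queen"]))

# Vector arithmetic: king - man + woman should land near queen.
analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print(cosine_similarity(analogy, vectors["queen"]))
```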
In recent years, word embeddings have been increasingly applied to the study of human psychology. In these applications, word embeddings have been used in various ways, including raw word vectors, vector sums or differences, and absolute or relative semantic similarity and distance. Among these methods, the Word Embedding Association Test (WEAT) has received the most attention. Based on word embeddings, psychologists have explored a wide range of topics, including human semantic processing, cognitive judgment, divergent thinking, social biases and stereotypes, and sociocultural changes at the societal or population level. In particular, the WEAT has been widely used to investigate attitudes, stereotypes, and social biases, the relationship between culture and psychology, and their origins, development, and cross-temporal changes.
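The WEAT quantifies the differential association between two sets of target words (e.g., flowers vs. insects) and two sets of attribute words (e.g., pleasant vs. unpleasant) in embedding space. Below is a minimal NumPy sketch of the WEAT effect size as defined by Caliskan et al. (2017); the input vecs, a dictionary mapping words to pre-trained vectors, is an assumed interface for illustration and not an API from the article.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B, vecs):
    """WEAT effect size following Caliskan et al. (2017).
    X, Y: lists of target words; A, B: lists of attribute words;
    vecs: dict mapping each word to its embedding vector."""
    def s(w):
        # Differential association of word w with attributes A vs. B.
        return (np.mean([cosine(vecs[w], vecs[a]) for a in A])
                - np.mean([cosine(vecs[w], vecs[b]) for b in B]))
    s_X = [s(x) for x in X]
    s_Y = [s(y) for y in Y]
    # Standardize by the pooled sample SD over all target words.
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```

The result is interpreted like Cohen's d: a positive value indicates that the X targets are more strongly associated with the A attributes (and the Y targets with B).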
As a novel methodology, word embeddings offer several unique advantages over traditional approaches in psychology, including lower research costs, higher sample representativeness, greater objectivity of analysis, and more replicable results. Nonetheless, word embeddings also have limitations, such as an inability to capture deeper psychological processes, limited generalizability of conclusions, and uncertain reliability and validity. Future research using word embeddings should address these limitations by (1) distinguishing between implicit and explicit components of social cognition, (2) training word vectors that are fine-grained in time and region to facilitate cross-temporal and cross-cultural research, and (3) applying contextualized word embeddings and large pre-trained language models such as GPT and BERT (see the sketch below). To facilitate the application of word embeddings in psychological research, we have developed the R package “PsychWordVec”, an integrated word embedding toolkit for studying human psychology in natural language.
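As an illustration of direction (3), the following sketch uses the Hugging Face transformers library to extract contextualized vectors for the same word in two different sentences. The choice of bert-base-uncased and the single-subtoken lookup are simplifying assumptions for demonstration, not a method prescribed by the article; unlike a static embedding, the two vectors differ because BERT conditions on the whole sentence.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; any BERT-style encoder would work.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence, word):
    """Return the contextualized vector of `word` in `sentence`
    (assumes `word` survives tokenization as a single subtoken)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

# The same word "bank" gets different vectors in different contexts.
v1 = word_vector("she deposited cash at the bank", "bank")
v2 = word_vector("they fished from the river bank", "bank")
sim = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity across contexts: {sim.item():.2f}")
```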

Key words: natural language processing, word embedding, word vector, semantic representation, semantic relatedness, Word Embedding Association Test (WEAT)
