
Paper #1864

Title:
Uncovering the semantics of concepts using GPT-4 and other recent large language models
Authors:
Gaël Le Mens, Balázs Kovács, Michael T. Hannan and Guillem Pros
Date:
June 2023
Abstract:
Recently, the world's attention has been captivated by Large Language Models (LLMs) thanks to OpenAI's ChatGPT, which rapidly proliferated as an app powered by GPT-3 and now its successor, GPT-4. If these LLMs produce human-like text, the semantic spaces they construct likely align with those used by humans for interpreting and generating language. This suggests that social scientists could use these LLMs to construct measures of semantic similarity that match human judgment. In this article, we provide an empirical test of this intuition. We use GPT-4 to construct a new measure of typicality: the similarity of a text document to a concept or category. We evaluate its performance against other model-based typicality measures in terms of their correspondence with human typicality ratings. We conduct this comparative analysis in two domains: the typicality of books in literary genres (using an existing dataset of book descriptions) and the typicality of tweets authored by US Congress members in the Democratic and Republican parties (using a novel dataset). The GPT-4 typicality measure not only meets or exceeds the current state-of-the-art but accomplishes this without any model training. This is a breakthrough because the previous state-of-the-art measure required fine-tuning a model (a BERT text classifier) on hundreds of thousands of text documents to achieve its performance. Our comparative analysis emphasizes the need for systematic empirical validation of measures based on LLMs: several measures based on other recent LLMs achieve at best a moderate correspondence with human judgments.
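
For readers who want to experiment with an LLM-based typicality score along these lines, the following is a minimal, purely illustrative sketch in Python. It is not the authors' implementation from the paper: the prompt wording, the 0-100 rating scale, the model name "gpt-4", and the use of the OpenAI chat completions client are all assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the implementation used in the paper.
# Assumes the OpenAI Python SDK (>= 1.0) and an API key in OPENAI_API_KEY.
# The prompt wording and the 0-100 scale are arbitrary choices for this example.
from openai import OpenAI

client = OpenAI()

def gpt_typicality(document: str, category: str, model: str = "gpt-4") -> float:
    """Ask the model how typical `document` is of `category`, on a 0-100 scale."""
    prompt = (
        f"On a scale from 0 (not typical at all) to 100 (extremely typical), "
        f"how typical is the following text of the category '{category}'?\n\n"
        f"Text: {document}\n\n"
        f"Answer with a single number only."
    )
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep ratings as reproducible as possible
        messages=[{"role": "user", "content": prompt}],
    )
    return float(response.choices[0].message.content.strip())

# Example: typicality of a short book description in the 'science fiction' genre.
if __name__ == "__main__":
    blurb = "A crew of explorers travels through a wormhole to find a new home for humanity."
    print(gpt_typicality(blurb, "science fiction"))
```

A score of this kind would then be compared against human typicality ratings to assess its validity, which is the type of comparison the paper carries out at scale across book genres and congressional tweets.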
Keywords:
categories, concepts, deep learning, typicality, GPT, ChatGPT, BERT, similarity
JEL codes:
C18, C52
Research Area:
Political Economy
Published in:
Proceedings of the National Academy of Sciences (PNAS), 120(49), e2309350120, pp. 1-7. https://doi.org/10.1073/pnas.2309350120

Download the paper in PDF format (892 KB)
