
Comparing large language models and human annotators in latent content analysis of sentiment, political leaning, emotional intensity and sarcasm

journal contribution
posted on 2025-05-02, 14:00 authored by Ljubiša Bojić, Olga Zagovora, Asta Zelenkauskaite, Vuk Vuković, Milan Čabarkapa, Selma Veseljević Jerković, Ana Jovančević

In the era of rapid digital communication, vast amounts of textual data are generated daily, demanding efficient methods for latent content analysis to extract meaningful insights. Large Language Models (LLMs) offer potential for automating this process, yet comprehensive assessments comparing their performance to human annotators across multiple dimensions are lacking. This study evaluates the inter-rater reliability, consistency, and quality of seven state-of-the-art LLMs, including variants of OpenAI's GPT-4, Gemini, Llama-3.1-70B, and Mixtral 8×7B. Their performance is compared to that of human annotators in analyzing sentiment, political leaning, emotional intensity, and sarcasm detection. The study involved 33 human annotators and eight LLM variants assessing 100 curated textual items, yielding 3,300 human and 19,200 LLM annotations. LLM performance was also evaluated across three time points to measure temporal consistency. The results reveal that both humans and most LLMs exhibit high inter-rater reliability in sentiment analysis and political leaning assessments, with LLMs demonstrating higher reliability than humans. In emotional intensity, LLMs displayed higher reliability than humans, although humans rated emotional intensity significantly higher. Both groups struggled with sarcasm detection, as evidenced by low reliability. Most LLMs showed excellent temporal consistency across all dimensions, indicating stable performance over time. This research concludes that LLMs, especially GPT-4, can effectively replicate human analysis in sentiment and political leaning, although human expertise remains essential for interpreting emotional intensity. The findings demonstrate the potential of LLMs to deliver consistent, high-quality performance in certain areas of latent content analysis.
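As an illustrative aside not part of the original record: inter-rater reliability over an annotation study like this is typically computed from a raters × items rating matrix. The minimal Python sketch below assumes synthetic 1-5 ratings and uses Krippendorff's alpha from the open-source krippendorff package as one common choice of statistic; the paper's actual data, scale, and reliability measure are not specified here.

# Illustrative sketch (not the authors' code): inter-rater reliability for a
# raters x items annotation matrix using Krippendorff's alpha.
# The ratings below are synthetic; shapes mirror the study's setup
# (33 annotators, 100 items) purely for illustration.
import numpy as np
import krippendorff

rng = np.random.default_rng(0)

# Synthetic ratings: 33 raters x 100 items on a 1-5 scale,
# with ~5% missing judgments encoded as NaN (handled by the package).
ratings = rng.integers(1, 6, size=(33, 100)).astype(float)
ratings[rng.random(ratings.shape) < 0.05] = np.nan

# Interval-level alpha treats the 1-5 scale as equally spaced;
# 'ordinal' or 'nominal' would suit ranked or categorical labels instead.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha: {alpha:.3f}")

Running the same computation separately on the human matrix and on each LLM's repeated annotations would give the within-group reliability and temporal-consistency style comparisons the abstract describes.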


Publication

Scientific Reports, 2025, 15, 11477

Publisher

Nature Research

Department or School

  • Psychology
