Abstract
Emotion is a structural resource in human cognition that guides attention, memory, and social coordination. During incidental vocabulary acquisition (IVA), readers often internalize the affective tone of the surrounding discourse and transfer it to novel words ("contextual emotion transfer" or "semantic prosody"). Recent LLMs appear to display analogous behavior despite lacking embodiment, raising the question of whether they can acquire contextual emotion in a human-like manner and whether the same contextual factors shape both human and model learning. Building on usage-based and distributional accounts, we expected two robust regularities to hold across agents: a positivity advantage (higher contextual valence predicts better learning) and a variability advantage (varied contexts outperform repeated ones). We further hypothesized that, in the more demanding recall task (definition generation), contextual valence would interact with variability, such that positive emotion would amplify the benefits of varied contexts.
We conducted zero-shot, parallel evaluations with four representative LLMs (Ernie Bot 3.5, ChatGPT/GPT-4, Gemini 1.5 Pro, LLaMA 3.1-8B) and three human cohorts matched to prior IVA paradigms (English L1, Chinese L1, English L2; 306 participants). Each agent learned nine pseudowords embedded in 45 two-sentence texts spanning positive, neutral, and negative contexts; context variability was manipulated between repeated and varied exposures. After reading, LLMs completed (a) valence rating and sentence production (emotion transfer) and (b) orthographic choice, definition matching, and definition generation (form/meaning). LLMs were evaluated in strictly isolated zero-shot sessions with no task-specific supervision or fine-tuning. Cumulative link mixed models (CLMMs) analyzed the ordinal ratings; linear and logistic mixed-effects models analyzed production and accuracy, with random effects for participant/LLM session, item, and denotation class.
Contextual emotion transferred reliably to the targets: across humans and LLMs, ratings followed the order positive > neutral > negative, and generated sentences aligned in polarity with the learning context. For vocabulary learning, both groups exhibited a positivity advantage (higher contextual valence significantly predicted better meaning performance) and a variability advantage (varied contexts significantly outperformed repeated contexts in definition matching and definition generation). In recall, valence interacted with variability: positive emotion amplified gains under varied exposure for both humans and LLMs, yielding the largest improvements in definition generation. LLMs frequently matched or exceeded human accuracy in form recognition and often reached higher overall accuracy on meaning tasks while preserving the same qualitative patterns. These effects held in mixed-effects analyses controlling for participant/session, item, and denotation, and were observed without providing the LLMs with examples, feedback, or fine-tuning.
The study showed that LLMs acquired contextual emotion and reproduced the core human regularities (the positivity and variability advantages, and the valence-by-variability interaction in recall). We interpret this convergence through a Dual-Mechanism perspective: human emotion learning is embodied and socially situated, whereas LLM "emotion" arises from distributional co-occurrence and vector-space optimization; distinct mechanisms can yield functionally similar behavior. The findings advance theories of emotion-language interaction and support context variability as a general driver of vocabulary learning. Practically, emotion-sensitive LLM behavior can enhance educational and communicative applications, while necessitating safeguards against unintended amplification of corpus-borne affective biases.

Key words
large language models / zero-shot learning / emotion learning

Classification
Social Sciences