This paper presents a practical approach to detecting emotional manipulation in social media advertisements using NLP-based sentiment analysis and a novel Emotional Manipulation Index (EMI). Our system analyzes ad text, quantifies sentiment with VADER, and integrates sentiment intensity with supplementary features to compute the EMI. We evaluated the method on 10,062 advertisement samples from our experimental dataset. Sentiment and manipulation-level distributions are provided, along with a discussion of the ethical implications for advertising. Key quantitative results are detailed in the Results section.
Introduction
This paper examines the ethical concerns raised by social media advertising that manipulates user emotions. We propose an analytics pipeline that combines NLP-based sentiment analysis with a novel Emotional Manipulation Index (EMI), which quantifies emotional intensity and the likelihood of manipulative intent. The EMI is computed as a weighted combination of sentiment polarity, arousal, and dominance, and is used to classify ads as Non-Manipulative, Moderately Manipulative, or Highly Manipulative.
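A minimal sketch of this computation (in Python) is given below. It scores ad text with the VADER SentimentIntensityAnalyzer and combines absolute sentiment polarity with simple arousal and dominance proxies into a weighted EMI. The weights, the classification thresholds, and the toy arousal/dominance estimators are illustrative assumptions, not the calibrated values used in the study.

# Illustrative EMI sketch. Weights, thresholds, and the arousal/dominance
# proxies below are assumptions for demonstration only.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical weights for the weighted EMI formula.
W_POLARITY, W_AROUSAL, W_DOMINANCE = 0.5, 0.3, 0.2

def vad_proxies(text):
    # Placeholder arousal/dominance estimates; a real pipeline would use a
    # VAD lexicon (e.g. NRC-VAD) or a regressor trained on EmoBank.
    urgency_cues = {"now", "hurry", "urgent", "limited", "last", "only"}
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    arousal = min(1.0, sum(t in urgency_cues for t in tokens) / 3)
    dominance = min(1.0, text.count("!") / 3)  # crude proxy for forceful tone
    return arousal, dominance

def emotional_manipulation_index(text):
    # Absolute compound score = strength of sentiment, ignoring its sign.
    polarity = abs(analyzer.polarity_scores(text)["compound"])
    arousal, dominance = vad_proxies(text)
    return W_POLARITY * polarity + W_AROUSAL * arousal + W_DOMINANCE * dominance

def manipulation_level(emi):
    # Illustrative cut-offs for the three classes named above.
    if emi < 0.33:
        return "Non-Manipulative"
    if emi < 0.66:
        return "Moderately Manipulative"
    return "Highly Manipulative"

ad = "Last chance! Act NOW before this once-in-a-lifetime deal disappears!"
emi = emotional_manipulation_index(ad)
print(f"EMI = {emi:.3f} -> {manipulation_level(emi)}")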
In an analysis of 10,062 ads, the sentiment distribution was 43.3% neutral, 37.4% positive, and 19.4% negative. EMI classification showed that 48.2% of ads were non-manipulative, 40.5% moderately manipulative, and 11.3% highly manipulative. Ads with strong emotional polarity and intensity markers correlated with higher EMI scores. The study demonstrates the potential for automated tools to detect emotionally manipulative content, while noting limitations such as reliance on text-only cues and possible dataset labeling bias.
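Distribution figures of this kind can be tabulated from per-ad scores as in the short sketch below, which reuses the illustrative emotional_manipulation_index and manipulation_level helpers from the previous sketch; the three-item ads list is a placeholder for the 10,062-ad corpus.

# Tabulating a manipulation-level distribution from per-ad EMI scores,
# reusing the illustrative helpers defined in the previous sketch.
from collections import Counter

ads = [
    "Act now! Only 2 left in stock!",
    "Our new notebook comes in three colours.",
    "Don't miss out, everyone you know already has one!",
]  # placeholder corpus; the study processed 10,062 ad texts

levels = Counter(manipulation_level(emotional_manipulation_index(t)) for t in ads)
total = sum(levels.values())
for level, count in levels.most_common():
    print(f"{level}: {count} ({100 * count / total:.1f}%)")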
Conclusion
We introduced an empirical methodology for identifying emotionally manipulative social media ads using NLP-based sentiment analysis and the Emotional Manipulation Index. Processing 10,062 samples yielded a mean EMI of 0.359 and the manipulation-level distribution reported above. Future work includes integrating multimodal features (images and video), optimizing the EMI weights against human-labeled ground truth, and deploying the system for real-time monitoring.
References
[1] Buechel, S., & Hahn, U. (2017). EmoBank: Studying the Impact of Annotation Perspective and Representation Format on Dimensional Emotion Analysis.
[2] Hutto, C., & Gilbert, E. (2014). VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text.
[3] Devlin, J., et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
[4] IEEE Transactions on Affective Computing, Vol. 12(3), 2021.
[5] Data collected from public Kaggle datasets and the Meta Ad Library (ad transparency archive).