Cyberbullying is a growing concern in online communities, often leading to severe emotional distress and social isolation. This project presents a cyberbullying detection system that employs GloVe embeddings and DistilBERT to analyze user-generated text and calculate a bullying percentage for each sentence. The system continuously monitors user behavior, dynamically reducing a reputation score based on detected bullying content. When a user's reputation score falls below a predefined threshold, the user is automatically blocked from further interaction on various platforms. By combining deep learning with a reputation-based penalty mechanism, the system aims to mitigate cyberbullying incidents while maintaining a fair and proactive moderation process. The solution provides a scalable and effective approach to promoting healthier online environments.
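The reputation-based penalty mechanism can be sketched in a few lines. The class name, starting score, penalty weight, and blocking threshold below are illustrative assumptions, not values specified in this work:

```python
# Minimal sketch of the reputation-based penalty mechanism.
# All names and numbers (start score, penalty weight, threshold)
# are assumed for illustration only.

class ReputationTracker:
    def __init__(self, start_score=100.0, block_threshold=40.0):
        self.scores = {}                  # user id -> current reputation
        self.start_score = start_score
        self.block_threshold = block_threshold

    def record_message(self, user_id, bullying_pct):
        """Reduce reputation in proportion to the detected bullying percentage."""
        score = self.scores.get(user_id, self.start_score)
        score -= bullying_pct * 0.5       # assumed penalty weight
        self.scores[user_id] = max(score, 0.0)
        return self.scores[user_id]

    def is_blocked(self, user_id):
        return self.scores.get(user_id, self.start_score) < self.block_threshold


tracker = ReputationTracker()
tracker.record_message("alice", 10.0)   # mild content: 100 -> 95
tracker.record_message("bob", 90.0)     # severe content: 100 -> 55
tracker.record_message("bob", 90.0)     # repeat offence: 55 -> 10
print(tracker.is_blocked("alice"))      # False
print(tracker.is_blocked("bob"))        # True
```

Because the penalty scales with the per-sentence bullying percentage, occasional borderline messages erode a score slowly, while repeated severe content crosses the blocking threshold quickly.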
Introduction
Cyberbullying involves harassing or intimidating individuals through digital platforms like social media and messaging, often causing severe emotional harm or even suicidal behavior. Detecting cyberbullying early is challenging but crucial for timely intervention. Adolescents, heavy users of social media, are particularly vulnerable. Addressing cyberbullying requires combining psychological, social, and technological strategies, with automatic detection playing a key role.
Traditional machine learning (ML) methods have been used to identify cyberbullying, but more advanced models such as deep neural networks (DNNs) and transformer-based architectures (e.g., BERT, DistilBERT) show improved accuracy. This study proposes a novel framework combining DistilBERT with GloVe embeddings, compares various ML, deep learning, and transformer models, and validates the results with cross-validation. The goal is to create flexible detection models adaptable across multiple social media platforms.
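The cross-validation used to compare models can be sketched with a plain k-fold split. The fold count and the dummy scoring callback are assumptions for illustration; in practice the callback would train one of the compared models on the training indices and report its score on the held-out fold:

```python
# Sketch of k-fold cross-validation for comparing detection models.
# The fold count, seed, and scoring callback are illustrative assumptions.

import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n_samples, train_and_score, k=5):
    """Average a model's held-out score over k train/test splits."""
    folds = k_fold_indices(n_samples, k)
    scores = []
    for i, test_idx in enumerate(folds):
        # Training set = every fold except the held-out one.
        train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
        scores.append(train_and_score(train_idx, test_idx))
    return sum(scores) / len(scores)

# Usage: the callback would fit a model on train_idx and score it on test_idx.
mean_score = cross_validate(100, lambda train_idx, test_idx: 0.9, k=5)
```

Averaging over held-out folds gives each model a comparable estimate of generalization, which is what makes the ML/DNN/transformer comparison meaningful on a single labeled dataset.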
Several factors contribute to cyberbullying, such as online anonymity, boredom, and personal insecurities. The literature review highlights various approaches, including ensemble learning, federated learning (which preserves user privacy), hybrid deep learning models, and models incorporating emotional and sentiment analysis. Recent advances also include methods for detecting bullying severity and using weak supervision to identify bullying language.
Methodologically, datasets from social media are collected and labeled, then preprocessed (cleaning, tokenization, embedding). Transformer-based models such as BERT are preferred for their deep contextual understanding, which enables detection of subtle abusive content. Some systems also integrate user reputation scoring and blocking mechanisms to discourage repeat offenders.
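The cleaning and tokenization step can be sketched as follows. The specific rules shown (lowercasing, stripping URLs, @mentions, and punctuation) are common choices assumed here for illustration; in the full pipeline the resulting tokens would be mapped to GloVe embeddings or passed to DistilBERT's own subword tokenizer:

```python
# Sketch of the text preprocessing step (cleaning and tokenization).
# The cleaning rules are common assumptions, not the paper's exact recipe.

import re

def clean_text(text):
    """Normalize a raw social-media post for downstream embedding."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"@\w+", " ", text)           # strip @mentions
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # strip punctuation/emoji
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text):
    """Whitespace-tokenize the cleaned text."""
    return clean_text(text).split()

print(tokenize("@troll You're SO dumb!! http://x.co/abc"))
# -> ['you', 're', 'so', 'dumb']
```

Stripping platform artifacts like URLs and mentions before embedding keeps the model focused on the language itself, which matters when the same detector must generalize across platforms.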
Conclusion
DistilBERT is a highly effective model for cyberbullying detection, offering a strong balance between performance and efficiency. By leveraging its pre-trained language understanding and transformer-based architecture, it accurately identifies the complex and nuanced language often found in bullying content.