Phishing attacks remain a persistent cybersecurity threat, exploiting human judgment rather than technical systems. Although automated filters intercept a large share of malicious email, a portion still reaches end users, and the quality of the interface then becomes the decisive factor. Current warning designs are largely ineffective: generic, visually subtle, and repeated so often that users habituate to them quickly. This paper proposes a human-centered design framework grounded in HCI theory, persuasive technology, and visual communication research. A functional Chrome extension prototype was built to implement the framework, and its effectiveness was evaluated through a preliminary survey, an expert heuristic review, a behavioral detection study, and a one-week longitudinal follow-up. Detection accuracy improved substantially, phishing link click-through rates dropped by roughly 40%, and usability remained strong across experience levels. Performance held stable over the follow-up period, with no evidence of habituation. The findings suggest that interface design is an underutilized lever in anti-phishing defense, and that thoughtfully designed warnings can improve security outcomes without degrading user experience.
Introduction
Phishing is one of the most common forms of cybercrime and is responsible for a large portion of data breaches worldwide. Reports show that phishing accounted for about one-third of data breaches in 2023, causing significant financial losses for organizations. Unlike other cyberattacks that exploit technical vulnerabilities, phishing targets human decision-making by using tactics such as urgency, impersonation, and misleading links. Although detection technologies have improved, many phishing messages still reach users, and the email or browser interface becomes the final line of defense. However, current warning systems are often ineffective because users misunderstand security indicators, ignore repetitive alerts, or fail to notice them.
Most anti-phishing research focuses on detection algorithms and machine learning, while interface design, the component that actually communicates threats to users, has received far less attention. Existing interfaces often lack clear explanations, produce repetitive warnings that users learn to ignore, place indicators outside the user's area of attention, and fail to account for differences in user expertise.
This paper proposes a new interface design framework to help users identify phishing attacks more effectively. It introduces three main design principles:
Dynamic and Contextual Visual Communication – Instead of generic warnings, the system provides clear, visual annotations that explain specific suspicious elements in an email or website, such as domain mismatches or urgency language. Color coding indicates threat severity.
Persuasive Technology and Behavioral Nudges – The system encourages safer behavior through interactive tooltips, safe preview options, and strategic delays before users can access high-risk links. These features give users time to review warnings and make better decisions.
Adaptive and Personalized Interfaces – The interface adjusts warnings based on the user’s expertise and context. Beginners receive detailed explanations and guidance, while experts see compact summaries with advanced diagnostic tools.
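The adaptive principle above can be sketched as a simple selection function that renders the same findings at different levels of detail. The `novice`/`expert` levels, the message templates, and the `(indicator, explanation)` pair format are illustrative assumptions for exposition, not the prototype's actual implementation:

```python
def render_warning(findings, expertise):
    """Render a warning at a detail level matched to user expertise.

    findings  -- list of (indicator_code, explanation) pairs, e.g.
                 [("domain_mismatch", "Link target differs from the displayed domain")]
    expertise -- "novice" or "expert" (assumed profile levels)
    """
    if expertise == "novice":
        # Beginners get each suspicious element spelled out, plus guidance.
        lines = ["This message looks suspicious because:"]
        lines += [f"  - {explanation}" for _, explanation in findings]
        lines.append("Avoid clicking any links until you verify the sender.")
        return "\n".join(lines)
    # Experts get a compact summary of indicator codes for quick triage.
    codes = ", ".join(code for code, _ in findings)
    return f"{len(findings)} indicator(s): {codes}"
```

In this sketch the same underlying findings drive both presentations, so the detection logic stays independent of the presentation layer, which is what allows the interface to adapt without re-analyzing the message.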
To test these ideas, we developed a Chrome extension prototype that analyzes emails using authentication checks (DMARC, SPF, DKIM), URL analysis, and content analysis for suspicious language. The system generates adaptive warnings; all analysis runs on the user's device, and user profiles are stored locally to preserve privacy.
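The way such signals might be combined into a color-coded severity can be sketched as a small heuristic scorer. The weights, thresholds, and urgency-keyword list below are assumptions chosen for illustration, not the prototype's actual values:

```python
import re

# Assumed urgency phrases; a real deployment would use a larger curated list.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def score_email(auth_passed, urls, body):
    """Combine authentication, URL, and content signals into a severity band.

    auth_passed -- dict like {"spf": True, "dkim": True, "dmarc": False}
    urls        -- list of (display_text, actual_href) pairs from the message
    body        -- plain-text message body
    """
    score = 0
    # Authentication checks: each failed mechanism adds risk (weight assumed).
    score += sum(2 for ok in auth_passed.values() if not ok)
    # URL analysis: flag mismatches between displayed and actual domains.
    for text, href in urls:
        shown = re.sub(r"^https?://", "", text).split("/")[0]
        target = re.sub(r"^https?://", "", href).split("/")[0]
        if shown and shown != target:
            score += 3
    # Content analysis: urgency language is a common phishing tactic.
    lowered = body.lower()
    score += sum(1 for word in URGENCY_WORDS if word in lowered)
    # Map the score to color-coded severity bands (illustrative thresholds).
    if score >= 5:
        return "red"      # high risk: multiple strong indicators
    if score >= 2:
        return "yellow"   # caution: some suspicious elements
    return "green"        # likely benign
```

Because every check operates only on data already present in the message, a scorer of this shape can run entirely on-device, consistent with the local-analysis design described above.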
Overall, the proposed system demonstrates that better interface design can significantly improve users’ ability to detect phishing, reduce risky clicks, and strengthen cybersecurity by making warnings clearer, more informative, and tailored to individual users.
Conclusion
Phishing succeeds because it exploits human decision-making, and the interface is where human decision-making and security technology meet. Current interface designs fail at this juncture, but the failure is not inevitable. A framework built on three principles from HCI, persuasive technology, and behavioral science produces substantial improvements in both detection accuracy and click-through behavior, while maintaining usability at a level users find acceptable.
The evaluation results support four specific claims. Detection accuracy can be raised substantially through explanatory, context-specific warnings. Phishing link clicks can be reduced by roughly 40% through strategic friction and behavioral nudges. These effects persist over at least one week without habituation. The improvements are achievable within a deployable browser extension without compromising user privacy.
The broader implication is that interface design deserves a more prominent role in the anti-phishing research agenda. Marginal improvements in back-end detection rates are valuable, but they do not address the portion of the problem that reaches users. A well-designed interface can convert that portion from a persistent vulnerability into a genuine defensive capability.
References
[1] “2024 Data Breach Investigations Report,” Verizon. Accessed: Dec. 29, 2025. [Online]. Available: https://www.verizon.com/business/resources/reports/dbir.html
[2] “Phishing Activity Trends Report, 4th Quarter 2023,” Anti-Phishing Working Group (APWG). Accessed: Mar. 05, 2026. [Online]. Available: https://docs.apwg.org/reports/apwg_trends_report_q4_2023.pdf
[3] A. P. Felt et al., “Improving SSL Warnings: Comprehension and Adherence,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea: ACM, Apr. 2015, pp. 2893–2902. doi: 10.1145/2702123.2702442.
[4] S. Egelman, L. F. Cranor, and J. Hong, “You’ve been warned: an empirical study of the effectiveness of web browser phishing warnings,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, in CHI ’08. New York, NY, USA: Association for Computing Machinery, Apr. 2008, pp. 1065–1074. doi: 10.1145/1357054.1357219.
[5] C. Bravo-Lillo, L. F. Cranor, J. Downs, and S. Komanduri, “Bridging the Gap in Computer Security Warnings: A Mental Model Approach,” IEEE Security & Privacy, vol. 9, no. 2, 2011. Accessed: Dec. 29, 2025. [Online]. Available: https://www.computer.org/csdl/magazine/sp/2011/02/msp2011020018/13rRUxbCbrM
[6] R. Dhamija, J. D. Tygar, and M. Hearst, “Why phishing works,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, in CHI ’06. New York, NY, USA: Association for Computing Machinery, Apr. 2006, pp. 581–590. doi: 10.1145/1124772.1124861.
[7] “Internet Crime Report 2022,” FBI Internet Crime Complaint Center (IC3). Accessed: Mar. 05, 2026. [Online]. Available: https://www.ic3.gov/AnnualReport/Reports/2022_ic3report.pdf
[8] A. Vishwanath, T. Herath, R. Chen, J. Wang, and H. R. Rao, “Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model,” Decis. Support Syst., vol. 51, no. 3, pp. 576–586, Jun. 2011, doi: 10.1016/j.dss.2011.03.002.
[9] R. H. Thaler and C. R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, CT, USA: Yale University Press, 2008.
[10] R. W. Rogers, “A Protection Motivation Theory of Fear Appeals and Attitude Change,” J. Psychol., vol. 91, no. 1, pp. 93–114, Sep. 1975, doi: 10.1080/00223980.1975.9915803.
[11] P. Kumaraguru, S. Sheng, A. Acquisti, L. F. Cranor, and J. Hong, “Teaching Johnny not to fall for phish,” ACM Trans. Internet Technol., vol. 10, no. 2, pp. 7:1–7:31, Jun. 2010, doi: 10.1145/1754393.1754396.