The rapid evolution of the job market and the growing emphasis on employability have necessitated advanced tools for assessing students’ placement readiness and identifying skill gaps. This research proposes “Placify,” an AI-based system leveraging machine learning techniques for placement prediction and skill gap detection among college students. Motivated by the limitations of traditional, expert-driven approaches and the scalability offered by AI, Placify employs models such as Random Forest, XGBoost, and Artificial Neural Networks to analyze academic, demographic, and skill-related features. The system achieves high predictive performance, with Random Forest models attaining approximately 91% accuracy in placement prediction tasks. Beyond prediction, Placify integrates skill gap analytics, drawing from contemporary developments in large language models for prerequisite skill inference to offer actionable insights for learners and institutions. The study underscores the potential of AI-driven employability analytics in supporting personalized learning and institutional strategy, and discusses future integration with real-time job market data and automated resume parsing. This work contributes to the growing body of employability analytics and demonstrates how AI can drive scalable, data-driven educational and career guidance.
Introduction
The transition from academia to employment is a pivotal stage for college students, requiring accurate assessment of placement readiness and skill development. Traditional manual evaluations are limited by scale and subjectivity. To address this, Placify is proposed as an AI-based platform that uses machine learning (ML) and natural language processing (NLP) to (1) predict student placement outcomes and (2) identify individual skill gaps. By providing data-driven insights, Placify aims to enhance student employability and institutional decision-making.
The literature review highlights the growing role of AI in educational analytics. Earlier models relied on regression and decision trees, but recent studies show that advanced ML algorithms and large language models (LLMs) can infer skill relationships and predict outcomes with high accuracy. Work by Le & Abel (2025) and Weng (2019) demonstrates how LLMs and predictive modeling can automate complex, expert-driven assessments, while Hazan (2019) covers the optimization and model validation techniques essential for robust predictive systems.
The methodology involves collecting student data (academic, demographic, extracurricular, and skill-based), preprocessing it, and applying feature engineering to construct composite employability indicators. Three ML models—Random Forest, XGBoost, and Artificial Neural Networks (ANNs)—are trained using cross-validation and evaluated on accuracy, precision, recall, and AUROC.
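As an illustrative sketch of this training and evaluation protocol, the pipeline below uses scikit-learn to engineer a composite employability indicator, train a Random Forest under stratified cross-validation, and report the four metrics listed above. The file name, feature columns, and hyperparameters are assumptions for illustration, not the exact Placify configuration.

```python
# Sketch of the Placify-style training step; data schema and settings are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical student dataset with academic, demographic, and skill-based
# columns plus a binary 'placed' label.
df = pd.read_csv("students.csv")
X = df[["cgpa", "backlogs", "internships", "projects",
        "aptitude_score", "coding_score"]]
y = df["placed"]

# Example of a composite employability indicator built via feature engineering.
X = X.assign(practical_exposure=df["internships"] + df["projects"])

model = Pipeline([
    ("scale", StandardScaler()),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=42)),
])

# Stratified 5-fold cross-validation with accuracy, precision, recall, and AUROC.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_validate(model, X, y, cv=cv,
                        scoring=["accuracy", "precision", "recall", "roc_auc"])
for metric, values in scores.items():
    if metric.startswith("test_"):
        print(f"{metric[5:]}: {values.mean():.3f}")
```

The same protocol applies to the XGBoost and ANN models by swapping the final estimator in the pipeline.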
Results show that Random Forest performs best (≈91% accuracy), followed by XGBoost (≈89%) and ANN (≈87%). The integrated LLM-based skill gap module effectively detects missing prerequisite skills by comparing student profiles with job requirements using semantic analysis.
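To make the skill gap comparison concrete, the sketch below uses a sentence-embedding model as a stand-in for the LLM-based semantic analysis: each required job skill is matched against the student's skills by cosine similarity, and unmatched requirements are flagged as gaps. The embedding model, skill lists, and threshold are assumptions, not the system's actual configuration.

```python
# Illustrative skill gap detection via semantic similarity (assumed components).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

student_skills = ["python programming", "sql", "data visualization"]
job_skills = ["python", "machine learning", "cloud deployment", "sql databases"]

stu_emb = model.encode(student_skills, convert_to_tensor=True)
job_emb = model.encode(job_skills, convert_to_tensor=True)

# For each required skill, take its best match among the student's skills;
# anything below the similarity threshold is reported as a missing prerequisite.
similarity = util.cos_sim(job_emb, stu_emb)  # (num_job_skills, num_student_skills)
THRESHOLD = 0.6
gaps = [skill for skill, row in zip(job_skills, similarity)
        if row.max().item() < THRESHOLD]
print("Missing prerequisite skills:", gaps)
```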
Conclusion
Placify exemplifies the convergence of AI, machine learning, and educational analytics in addressing critical challenges of placement prediction and skill gap detection. By integrating ensemble and neural models with advanced natural language understanding, Placify delivers high-accuracy placement forecasts and interpretable, personalized skill recommendations. The Random Forest model, in particular, achieves an accuracy of 91%, affirming the efficacy of ensemble approaches in educational prediction tasks. The skill gap analysis module, inspired by recent advances in LLM-driven prerequisite skill inference [1], enables scalable, adaptive assessment of student readiness, reducing reliance on manual, expert-driven frameworks. Future work includes deeper integration with real-time labor market data, automated resume parsing, and the extension of LLM-based analytics for even finer-grained skill mapping and personalized learning pathways. Placify thus represents a significant step toward data-driven, equitable, and scalable employability solutions in higher education.
References
[1] N. L. Le and M.-H. Abel, “How Well Do LLMs Predict Prerequisite Skills? Zero-Shot Comparison to Expert-Defined Concepts,” arXiv preprint arXiv:2507.18479v1, 2025. [Online]. Available: http://arxiv.org/pdf/2507.18479v1
[2] W.-H. Weng, “Machine Learning for Clinical Predictive Analytics,” arXiv preprint arXiv:1909.09246v1, 2019. [Online]. Available: http://arxiv.org/pdf/1909.09246v1
[3] E. Hazan, “Lecture Notes: Optimization for Machine Learning,” arXiv preprint arXiv:1909.03550v1, 2019. [Online]. Available: http://arxiv.org/pdf/1909.03550v1