As AI tools such as ChatGPT, Grammarly, and QuillBot are increasingly adopted in higher education, their impact on academic work is significant but complex. This study examines how college students use AI tools and, more importantly, how they perceive them in relation to their learning, productivity, and academic honesty. Drawing on survey data and the existing literature in a mixed-methods design, we find that while students use AI tools for convenience, speed, and assistance with writing and comprehension, they remain wary of over-reliance, the ethical grey areas AI raises, and a diminished sense of original thinking. The results indicate that students are not blindly reliant; rather, they describe themselves as cautious, grounding their use of AI in practicality, reasonable limits, and what could be considered a moral stance. Higher education institutions cannot remain passive observers; they need to engage students through proactive policies, ethical training, and academic support. As AI continues to change the nature of academic work, a more intentional, honest, and reflective position is essential for balancing innovation and integrity.
Introduction
Artificial intelligence has become an integral part of academic life for college students in Mumbai, where fast-paced and competitive environments encourage the use of tools such as ChatGPT, Grammarly, and QuillBot for writing, research, coding, grammar correction, and idea generation. Students increasingly view AI as a routine academic aid—similar in importance to the internet—valued for saving time, improving clarity, and boosting productivity. However, this growing reliance has also raised concerns about over-dependence, reduced independent thinking, originality, fairness, privacy, and academic integrity.
The literature reviewed shows a consistent pattern across Indian and international studies: students generally perceive AI positively for efficiency, conceptual understanding, and language support, but remain cautious about accuracy, “hallucinated” outputs, loss of authentic voice, and ethical boundaries. Faculty and researchers highlight risks such as the “crutch effect,” shallow learning, and misuse due to unclear institutional policies. Many studies emphasize that guided and structured use of AI, supported by clear institutional frameworks and faculty mentorship, significantly improves learning outcomes while mitigating ethical risks.
This research focuses on understanding how undergraduate students in Mumbai use AI tools, their awareness of AI applications, perceived benefits and challenges, ethical concerns, and the influence of demographic factors. Data were collected from 238 undergraduate students (aged 17–22) using questionnaires and interviews, employing purposive and stratified sampling methods.
Key findings indicate that:
Students’ academic practices are moderately to highly integrated with AI tools.
AI is widely perceived as a supportive assistant, not a replacement for human learning or faculty.
Higher AI awareness is associated with more frequent use and greater confidence.
Frequent AI users report higher perceived academic usefulness and confidence.
Significant differences exist in AI perceptions across usage frequency and year of study, with senior students generally finding AI more useful.
Ethical uncertainty and lack of clear institutional guidance remain major concerns.
ANOVA confirms statistically significant differences in perceived usefulness and confidence across AI usage frequency and year of study. Overall, the study concludes that while AI offers substantial academic benefits, its responsible and effective integration depends heavily on institutional policies, ethical guidelines, and faculty-led guidance that balance efficiency with deep, independent learning.
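To make the reported test concrete, the sketch below shows how an ANOVA of this kind could be run on survey data; it is a minimal illustration, not the study's actual analysis. The file name and column names (ai_survey_responses.csv, usage_frequency, perceived_usefulness) are hypothetical placeholders.

```python
# Minimal sketch of a one-way ANOVA on hypothetical survey data.
# The file path and column names are illustrative placeholders, not the
# study's actual instrument or dataset.
import pandas as pd
from scipy import stats

# Hypothetical survey export: one row per respondent, with a categorical
# usage-frequency group and a Likert-style perceived-usefulness score.
df = pd.read_csv("ai_survey_responses.csv")

# Split perceived-usefulness scores by usage-frequency group.
groups = [
    sub["perceived_usefulness"].to_numpy()
    for _, sub in df.groupby("usage_frequency")
]

# One-way ANOVA: do mean usefulness ratings differ across usage groups?
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value here would mirror the paper's finding that perceived usefulness varies with usage frequency; the same test could be repeated with year of study as the grouping variable.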
Conclusion
The evidence presented in this study illustrates the growing integration of Artificial Intelligence into the academic landscape of students in Mumbai. A considerable proportion of the student population reports using AI tools such as ChatGPT, Grammarly, and other AI-based platforms to support productivity, improve clarity, and ensure quality in their academic work, and a positive correlation exists between AI usage and academic performance, underscoring AI's potential impact. The findings also show that, despite differences in familiarity with and acceptance of AI tools across academic streams and disciplines, shaped in part by exposure and overall digital literacy, students in general report high levels of engagement with these tools. Science and Engineering students were somewhat more innovative and positive in their acceptance of AI, while students in Arts and Commerce disciplines reported needing more support to take full advantage of what AI offers. Students are enthusiastic about the potential value of AI in their education while also recognizing ethical issues with the technology, such as over-reliance by learners and educators, and the need for responsible use.
The research confirms the importance of AI's role in the current educational landscape and highlights significant gaps in awareness, access, and institutional support. Students require support not only to use the tools already available but also to understand the implications of AI use for originality, data protection, and academic honesty. The ramifications for educators and institutions are significant. Faculty must embrace AI in their pedagogy while maintaining the principles of academic integrity and critical thinking. Institutions can better support educators and students by formalizing AI-related training, constructing a comprehensive usage policy, and reducing unequal access to digital tools across departments and disciplines. In the end, the transition to AI in academia is not merely a technological change; it is a cultural and pedagogical change that must be managed responsibly, equitably, and ethically in order to realize its full educational potential.