Machine learning (ML) workflows are increasingly complex, often requiring programming expertise and offering limited model interpretability, which poses challenges for beginners, educators, and non-technical users. AIMEX (Automated Intelligent Modeling and Explainability System) is a web-based framework that integrates Automated Machine Learning (AutoML) with Explainable AI (XAI) to provide an accessible, user-friendly platform. Using the TPOT framework, AIMEX enables users to upload structured datasets, select target variables, and generate optimized models for classification or regression tasks, with automated preprocessing and performance evaluation. It incorporates SHAP (SHapley Additive exPlanations) for intuitive visualizations of feature contributions, enhancing model transparency. Additionally, an Educational Mode powered by LLaMA via the Ollama API delivers simplified natural-language explanations of datasets, model outputs, and feature importances, catering to learners and non-experts. AIMEX bridges automation, interpretability, and education, serving as a practical and pedagogical tool for academic and experimental ML applications.
Introduction
AIMEX is an integrated platform designed to simplify machine learning (ML) for non-experts while ensuring model transparency and reproducibility. It addresses key challenges in ML, such as complex preprocessing, algorithm selection, hyperparameter tuning, and model interpretability, by combining automation, explainable AI, and educational support.
Key Components:
TPOT: Automates ML pipeline creation via genetic programming, handling feature engineering, model selection, and hyperparameter optimization. Supports classification and regression tasks.
SHAP: Provides explainable AI capabilities by quantifying feature contributions and visualizing them through summary and force plots.
Streamlit: Offers an interactive, user-friendly web interface for data upload, model training, evaluation, and visualization (a minimal interface sketch follows this list).
LLaMA via Ollama API: Powers an Educational Mode that delivers natural-language explanations of datasets, ML processes, model outputs, and feature importance.
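As a rough illustration of how these components surface to the user, the sketch below shows a minimal Streamlit front end for dataset upload and target selection; the widget labels and variable names are assumptions for illustration, not AIMEX's actual code.

```python
# Minimal sketch of an AIMEX-style interface layer (illustrative only;
# widget labels and variable names are assumptions, not AIMEX's actual code).
import pandas as pd
import streamlit as st

st.title("AIMEX - Automated Modeling and Explainability")

# Upload a structured CSV dataset
uploaded = st.file_uploader("Upload a CSV dataset", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.dataframe(df.head())

    # Let the user pick the target variable and task type
    target = st.selectbox("Select the target column", df.columns)
    task = st.radio("Task type", ["Classification", "Regression"])

    if st.button("Train model"):
        # Hand off to preprocessing and the AutoML step (see the sketches below)
        st.write(f"Training a {task.lower()} pipeline to predict '{target}'...")
```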
Methodology:
Data Collection & Preprocessing: Users upload structured datasets (CSV). AIMEX handles missing values, categorical encoding, high-cardinality feature removal, and temporal alignment. Data is split into training and testing sets.
Feature Engineering & Automated Modeling: TPOT explores preprocessing, feature selection, dimensionality reduction, and algorithms to optimize pipelines. Users can control generations, population size, and parallel jobs.
Model Evaluation: Uses metrics such as Accuracy, Precision, Recall, and F1-Score (classification) and R², MSE, and MAE (regression). Visualizations are provided via Streamlit. Evaluation on standard benchmark datasets is used to verify that the generated pipelines perform reliably.
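A sketch of how these metrics could be computed with scikit-learn on the held-out test set; variable names continue from the sketches above.

```python
# Sketch of the evaluation step using scikit-learn metrics.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, r2_score, mean_squared_error,
                             mean_absolute_error)

y_pred = tpot.predict(X_test)

# Classification metrics
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred, average="weighted"))
print("Recall   :", recall_score(y_test, y_pred, average="weighted"))
print("F1-score :", f1_score(y_test, y_pred, average="weighted"))

# Regression metrics (used when a TPOTRegressor pipeline is trained)
# print("R2 :", r2_score(y_test, y_pred))
# print("MSE:", mean_squared_error(y_test, y_pred))
# print("MAE:", mean_absolute_error(y_test, y_pred))
```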
Explainability with SHAP: Provides global and local explanations of feature impacts on predictions, addressing “black-box” concerns.
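A sketch of the SHAP step, assuming a model-agnostic KernelExplainer over the optimized TPOT pipeline's prediction function; the background-sample size and plot selection are illustrative choices, not AIMEX's fixed configuration.

```python
# Sketch of global and local explanations with SHAP over a TPOT pipeline.
import shap

# Small background sample keeps the kernel explainer tractable;
# for classifiers, predict_proba can be explained instead of predict.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(tpot.fitted_pipeline_.predict, background)
shap_values = explainer.shap_values(X_test.iloc[:50])

# Global view: which features drive predictions overall (summary plot)
shap.summary_plot(shap_values, X_test.iloc[:50])

# Local view: feature contributions to a single prediction (force plot)
shap.force_plot(explainer.expected_value, shap_values[0],
                X_test.iloc[0], matplotlib=True)
```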
Educational Mode: LLaMA explains ML concepts and results in simple language, allowing users to query datasets, preprocessing steps, model outputs, and SHAP interpretations.
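A sketch of how the Educational Mode might query a local LLaMA model through Ollama's REST API; the model name, prompt wording, and example context are assumptions for illustration.

```python
# Sketch of an Educational Mode call to a local Ollama server
# (model name, prompt wording, and port are assumptions).
import requests

def explain_for_beginners(question: str, context: str) -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": f"Explain in simple terms for an ML beginner.\n"
                      f"Context:\n{context}\n\nQuestion: {question}",
            "stream": False,
        },
        timeout=120,
    )
    return response.json()["response"]

print(explain_for_beginners(
    "What does this SHAP summary plot tell me?",
    "Top features by mean |SHAP value|: age, income, tenure",
))
```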
Code Export & Reproducibility: Optimized TPOT pipelines can be exported as standalone Python scripts with preprocessing, model, and documentation, enabling reproducible research and further customization.
Impact:
By integrating automation, interpretability, and educational support, AIMEX empowers researchers, students, and domain experts to develop, understand, and share ML models without extensive programming skills, fostering responsible and reproducible machine learning practices.
Conclusion
AIMEX integrates automation, interpretability, and accessibility within a unified web-based framework. By combining TPOT for AutoML, SHAP for model explainability, Streamlit for an intuitive interface, and LLaMA for educational guidance, AIMEX bridges the gap between complex ML processes and user-friendly learning.
Through its modular and low-code design, AIMEX simplifies model creation, interpretation, and export while maintaining transparency and reproducibility. The use of SHAP visualizations enhances understanding of feature importance, and the Streamlit interface ensures ease of use across academic and research settings. AIMEX's architecture supports adaptability, enabling future integration of new algorithms and educational tools.
In conclusion, AIMEX represents a step forward in democratizing machine learning by combining automation and explainability with an educational perspective. It promotes responsible, interpretable, and accessible AI, empowering students, educators, and researchers to explore and apply ML effectively in both academic and experimental environments.