Abstract

Test automation significantly enhances the efficiency, speed, and repeatability of complex and time-consuming manual software testing tasks. However, due to the high cost associated with developing and maintaining automated tests, it is essential to prioritize which test cases to automate first. This paper presents a straightforward yet effective approach for prioritizing test cases based on the effort required for both manual execution and automation. The proposed method is highly adaptable, supporting various assessment techniques and allowing for the dynamic addition or removal of test candidates. The theoretical concepts outlined have been successfully implemented in real-world scenarios across multiple software companies. Applications include testing real estate platforms, cryptographic and authentication systems, and OSGi-based middleware frameworks used in smart homes, connected vehicles, industrial automation, medical devices, and other embedded systems.
Introduction
Test automation significantly reduces testing time and increases coverage by providing fast, consistent, and repeatable results. However, due to limited resources and the high cost of automation development and maintenance (3 to 15 times more expensive than manual testing), automating all tests is impractical and inefficient without a clear prioritization strategy.
Three main strategies for optimizing manual regression testing are test suite minimization, test case selection, and test case prioritization; the latter is the focus of this paper. Some tests inherently require automation (e.g., load/performance and API tests), while others that rely on human judgment remain manual. Various factors influence the decision to automate, including resource availability, environment complexity, interdependencies, timelines, and regulatory requirements.
A practical prioritization model is proposed using a visual Cartesian coordinate system plotting manual effort against automation effort. Tests with high manual effort but low automation effort yield the best ROI and should be automated first. This model is adaptable to real-world complexities like test dependencies and evolving applications, making it suitable for Agile and safety-critical environments.
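To illustrate this quadrant view, the following Python sketch places each test case on the manual-effort versus automation-effort plane and flags the high-manual/low-automation quadrant as the first candidates for automation. The field names, thresholds, and quadrant labels are illustrative assumptions, not values prescribed by the paper.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    manual_effort: float      # estimated effort to execute the test manually (e.g., person-hours)
    automation_effort: float  # estimated effort to automate and maintain the test

def quadrant(tc: TestCase, manual_threshold: float, automation_threshold: float) -> str:
    """Map a test case onto the manual-vs-automation effort plane.

    The thresholds split the plane into four quadrants; their values are
    project-specific assumptions rather than figures from the paper.
    """
    high_manual = tc.manual_effort >= manual_threshold
    low_automation = tc.automation_effort <= automation_threshold
    if high_manual and low_automation:
        return "automate first"     # best ROI: expensive manually, cheap to automate
    if high_manual:
        return "automate later"     # valuable but costly to automate
    if low_automation:
        return "automate if spare capacity"  # cheap to automate, limited savings
    return "keep manual"            # poor ROI for automation

tests = [
    TestCase("login regression", manual_effort=8.0, automation_effort=2.0),
    TestCase("exploratory UI review", manual_effort=3.0, automation_effort=12.0),
]
for tc in tests:
    print(tc.name, "->", quadrant(tc, manual_threshold=4.0, automation_threshold=4.0))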
The paper introduces an Automation Efficiency Quotient (manual effort divided by automation effort) to quantify the automation priority of each test case. Estimating effort is challenging but manageable through expert-based techniques such as Planning Poker, analogy-based models, and Execution Points, which quantify the complexity of manual tests.
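A minimal sketch of how this quotient could be computed and used to rank candidates is shown below; the function name and the example effort figures are hypothetical, chosen only to demonstrate the ranking.

def automation_efficiency_quotient(manual_effort: float, automation_effort: float) -> float:
    # Quotient = manual effort / automation effort; a higher value means more
    # manual cost is saved per unit of automation investment.
    if automation_effort <= 0:
        raise ValueError("automation effort must be positive")
    return manual_effort / automation_effort

# Rank candidates so that the highest quotient (best ROI) is automated first.
candidates = {"checkout flow": (6.0, 2.0), "report export": (1.5, 5.0)}
ranked = sorted(candidates.items(),
                key=lambda item: automation_efficiency_quotient(*item[1]),
                reverse=True)
for name, (manual, auto) in ranked:
    print(f"{name}: quotient = {manual / auto:.2f}")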
The factors that drive manual and automation effort are broken down and weighted by percentage; they include execution time, repetition, environment complexity, test data, maintenance, and the stability of requirements and features. Accurate and regular re-estimation ensures that priorities remain aligned with ROI.
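One way to turn such weighted factors into a single effort score is sketched below; the weights and the 1-to-5 ratings are hypothetical placeholders that each project would calibrate for itself, since the paper stresses that the factors are system-specific.

# Hypothetical weights (summing to 1.0) for the factors driving manual effort.
MANUAL_EFFORT_WEIGHTS = {
    "execution_time": 0.25,
    "repetition": 0.20,
    "environment_complexity": 0.15,
    "test_data": 0.15,
    "maintenance": 0.15,
    "requirement_stability": 0.10,
}

def weighted_effort(ratings: dict, weights: dict) -> float:
    """Combine per-factor ratings (e.g., 1 = low, 5 = high) into one effort score."""
    missing = set(weights) - set(ratings)
    if missing:
        raise KeyError(f"missing ratings for: {sorted(missing)}")
    return sum(weights[factor] * ratings[factor] for factor in weights)

ratings = {
    "execution_time": 4,
    "repetition": 5,
    "environment_complexity": 2,
    "test_data": 3,
    "maintenance": 3,
    "requirement_stability": 4,
}
print(f"weighted manual effort score: {weighted_effort(ratings, MANUAL_EFFORT_WEIGHTS):.2f}")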
The approach helps teams optimize automation efforts, improve software quality, shorten feedback cycles, and reduce costs by focusing on high-impact, cost-effective test cases first.
Conclusion
This paper introduces a straightforward and adaptable method for prioritizing manual software test cases for automation. Unlike traditional approaches, it emphasizes an effort-based assessment model that is both intuitive and customizable. The factors influencing this assessment are system-specific and can be weighted differently depending on project needs. The proposed method stands out for its flexibility—it supports various evaluation techniques and allows for the dynamic inclusion or removal of test candidates. This adaptability makes it suitable for evolving software environments. While the specific factors, weights, and quotient values used in the prioritization process can be refined over time with broader adoption and data collection, the core principle of the approach remains robust and effective.