Early and precise identification of colon cancer plays a crucial role in enabling timely treatment and enhancing patient survival rates. Manual analysis of colonoscopy images is often time-consuming and subject to variability among clinicians.
This study proposes an automated deep learning–based framework for colon cancer segmentation and image generation, built on an Attention U-Net and the Pix2Pix Generative Adversarial Network (GAN). The Attention U-Net delineates cancer-affected regions accurately, while the Pix2Pix GAN produces realistic synthetic images that increase dataset diversity. The Sine Cosine Algorithm (SCA) is applied to optimize hyperparameters and improve model performance. Experimental results show improved segmentation accuracy and generalization, demonstrating the potential of the proposed system to support reliable and efficient colon cancer diagnosis.
Introduction
Colon cancer has high mortality but significantly better outcomes with early detection. Colonoscopy is the primary diagnostic tool, yet manual image analysis is time-consuming, subjective, and error-prone, especially for small or unclear lesions. Deep learning, particularly CNN-based models like U-Net, has improved medical image segmentation, but performance is often limited by small and less diverse medical datasets.
To address these challenges, the proposed work presents a unified framework that integrates Attention U-Net, Pix2Pix GAN, and the Sine Cosine Algorithm (SCA). Attention U-Net enhances segmentation by focusing on clinically relevant regions, improving accuracy and boundary detection. Pix2Pix GAN generates realistic synthetic colonoscopy images to augment data, reduce overfitting, and improve model generalization. SCA is used to optimize hyperparameters such as learning rate and batch size, ensuring efficient training and better convergence.
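The SCA search described above can be sketched as follows. This is a minimal illustration of the standard sine–cosine position-update rule applied to a two-dimensional hyperparameter space (learning rate, batch size); the objective `val_loss` is a hypothetical smooth surrogate standing in for an actual train-and-validate run, and the bounds and agent counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def val_loss(x):
    # Hypothetical surrogate objective: minimum near lr = 1e-3, batch = 32.
    lr, bs = x
    return (np.log10(lr) + 3.0) ** 2 + ((bs - 32.0) / 32.0) ** 2

lo = np.array([1e-5, 8.0])    # assumed bounds: lr in [1e-5, 1e-1], batch in [8, 128]
hi = np.array([1e-1, 128.0])

n_agents, n_iter, a = 20, 100, 2.0
X = lo + rng.random((n_agents, 2)) * (hi - lo)   # random initial candidates
best = min(X, key=val_loss).copy()

for t in range(n_iter):
    r1 = a - t * a / n_iter                      # shrinks exploration over iterations
    for i in range(n_agents):
        r2 = 2 * np.pi * rng.random(2)
        r3 = 2 * rng.random(2)
        move = np.abs(r3 * best - X[i])          # distance toward the best solution
        if rng.random() < 0.5:
            X[i] = X[i] + r1 * np.sin(r2) * move # sine branch of the update rule
        else:
            X[i] = X[i] + r1 * np.cos(r2) * move # cosine branch of the update rule
        X[i] = np.clip(X[i], lo, hi)
        if val_loss(X[i]) < val_loss(best):
            best = X[i].copy()

print(best)  # best (learning_rate, batch_size) found by the search
```

In practice, each call to the objective would train the segmentation model for a few epochs and return the validation loss, which makes the population size and iteration budget the dominant cost factors.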
The methodology includes data collection, preprocessing, augmentation, segmentation, image synthesis, and optimization. The system is evaluated using Dice Coefficient and Intersection over Union (IoU), along with visual analysis of segmentation and synthesized outputs. Results demonstrate improved robustness, accuracy, and generalization, supporting clinicians with reliable AI-assisted tools for early colon cancer detection and improved clinical decision-making.
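The two evaluation metrics named above have simple set-overlap definitions. A minimal sketch for binary masks, with a small `eps` term (an implementation convention, not specified in the paper) to guard against empty masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A, B.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B| for binary masks A, B.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 2x3 masks: 2 overlapping pixels, 3 predicted, 3 ground-truth.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 3), round(iou(pred, gt), 3))  # → 0.667 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both but they rank models identically.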
Conclusion
This study introduces an effective deep learning–based framework for automated colon cancer segmentation that integrates an Attention U-Net architecture with Pix2Pix GAN–based image synthesis and Sine Cosine Algorithm–based optimization. The proposed model demonstrates stable training behavior and reliable segmentation performance when trained for 50 epochs on colonoscopy images resized to 256 × 256 resolution.
Experimental results show a training accuracy of 93% and a validation accuracy of 82%, with training and validation losses of 0.09 and 0.12, respectively, indicating good generalization and limited overfitting.
The incorporation of Pix2Pix GAN–generated synthetic images mitigates data scarcity and improves robustness, while the attention mechanism directs the network toward diagnostically important regions, suppresses irrelevant background features, and yields precise localization of cancerous areas with clear boundary delineation. The selected image resolution and optimized training strategy balance segmentation accuracy against computational cost, keeping the framework practical for routine medical image analysis, and deployment through a web-based interface further demonstrates its applicability in real-world clinical decision-support systems. Overall, this study highlights the potential of combining segmentation, image synthesis, and optimization techniques to support reliable and efficient colon cancer diagnosis.
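The attention mechanism referred to above can be illustrated by the additive attention gate used in Attention U-Net: a decoder gating signal g reweights the encoder skip feature x so that background activations are suppressed. The sketch below uses random weights and per-pixel 1×1 convolutions expressed as matrix products; all shapes and parameter names are illustrative assumptions, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    # x: (C, H, W) encoder skip features; g: (C, H, W) decoder gating
    # features (assumed already upsampled to the same spatial size).
    # 1x1 convolutions reduce to per-pixel matrix multiplications.
    q = relu(np.einsum('ic,chw->ihw', W_x, x) + np.einsum('ic,chw->ihw', W_g, g))
    alpha = sigmoid(np.einsum('c,chw->hw', psi, q))  # attention map in (0, 1)
    return x * alpha, alpha                          # reweighted skip features

C, H, W, C_int = 4, 8, 8, 2                          # toy channel/spatial sizes
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((C, H, W))
W_x = rng.standard_normal((C_int, C))
W_g = rng.standard_normal((C_int, C))
psi = rng.standard_normal(C_int)

gated, alpha = attention_gate(x, g, W_x, W_g, psi)
print(gated.shape, alpha.shape)
```

Because alpha lies strictly in (0, 1), the gate performs soft selection: skip features at pixels deemed irrelevant by the gating signal are scaled toward zero rather than discarded outright.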