To increase the quality of the composite image in the image style transfer process, this study presents an image style transfer method based on an improved style loss function: the improved Gram matrix computes the inner product of the feature map and its spatial transformation map, yielding a new style loss function. The weighted algebraic sum of this style loss and the content loss is used as the neural network's total loss function, and the gradient descent algorithm iteratively optimizes it to generate the style-transferred image.
Image style transfer has attracted considerable attention as a new research topic in image processing. Image style transfer converts the style of one image into that of another, based on the assumption that the semantic content of the original image remains the same. Image style, however, is a nebulous idea: everyone interprets the style of different photographs, or even the same image, differently. The challenge in this research direction is figuring out how to use a computer to characterize the style of an image more precisely. Texture transfer can be thought of as transferring the style from one image to another: its purpose is to synthesize a texture from a source image while constraining the synthesis so that the semantic content of the target image is preserved. The algorithm enables users to create new high-quality images that combine the content of any photograph with the appearance of a variety of well-known artworks. Our findings shed light on how convolutional neural networks build deep image representations and show how they can be used for high-level image synthesis and manipulation.
The aim is to produce more transparent, higher-quality images in the field of neural style transfer. Neural style transfer can be applied in commercial production and has feasible development prospects.
Neural style transfer deals with two input images: a content image and a style image.
This technique recreates the content image in the style of the reference image, using neural networks to apply the artistic style of one image to another.
Neural style transfer opens up endless possibilities in design, content generation, and the development of creative tools.
III. RELATED WORK OR LITERATURE SURVEY
A. Image Style Transfer Algorithm Based on Semantic Segmentation. Authors: Chuan Xie, Zhizhong Wang, Haibo Chen, Xiaolong Ma, Wei Xing, Lei Zhao, Wei Song, and Zhijie Lin
We propose an image style transfer algorithm based on semantic segmentation to resolve semantic mismatching in image style transfer. Our algorithm builds a semantic segmentation network based on Mask R-CNN, introduces semantic information, and then performs style transfer at the patch level, realizing style transfer between similar objects.
B. Deep Learning Cross-Phase Style Transfer for Motion Artifact Correction in Coronary Computed Tomography Angiography
Authors: Sunghee Jung (Member, IEEE), Soochahn Lee (Member, IEEE), Byunghwan Jeon, Yeonggul Jang, and Hyuk-Jae Chang. We apply a style transfer method to 2D image patches cropped from full-phase 4D computed tomography (CT) to synthesize these images. We then train a convolutional neural network (CNN) for motion artifact correction using this synthetic ground truth (Syn GT). During testing, the motion-corrected 2D image patches output by the trained network are reinserted into the 3D CT volume with volumetric interpolation.
The proposed method is evaluated using both phantom and clinical data. In this paper, we show the general steps of image style transfer based on convolutional neural networks through a specific example and discuss possible future applications.
A huge database can lead to more time being consumed to retrieve the information.
Search for the required information in the available datasets.
The user gets results quickly, according to their needs.
D. Space Complexity
The space complexity depends on the presentation and visualization of discovered patterns: the more data is stored, the greater the space complexity.
E. Time Complexity
Let n be the number of patterns available in the datasets.
If n > 1, retrieving information can be time consuming, so the time complexity of this algorithm is O(n²).
The above mathematical model is NP-complete.
V. EXISTING SYSTEM AND DISADVANTAGES
Stroke-based rendering (SBR) is a method of creating non-photorealistic images by placing discrete objects called strokes, such as paint strokes or stipples, on a computer screen. The painting style is achieved by the SBR algorithm, which starts with a photo and places a sequence of strokes wherever the photo is matched, so that the result appears as if it were created with oil paints. However, these methods have revealed a number of issues: the painting models, the weighting parameters, and the selection of the input images must all be regulated.
Outcomes are far too reliant on individual judgement. Any image can obtain high-quality results with the texture-creation approach only by carefully setting its parameters. Iterative optimization is inherently unstable, and the synthesis process becomes even more so as the output image size grows.
A. Proposed System and Advantages
The proposed system uses the VGG-19 network, pre-trained on the ImageNet database, following the work that first introduced neural style transfer (NST). The algorithm's main idea is to compute the style loss using the Gram matrix (i.e., the Gram matrix represents the style features of an image). After inputting a content image and a style image, a random white-noise image is generated. The Gram matrix G^l is then computed, where G^l is the inner product of two sets of vectorized feature maps and G^l ∈ R^(C_l × C_l), with C_l the number of filter channels of layer l.
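The Gram-matrix style loss described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the system's actual code; the function names and the normalization constant (the one used by Gatys et al.) are assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix G^l of a feature map of shape (C, H, W):
    inner products between every pair of vectorized channels."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # vectorize each of the C channels
    return f @ f.T                    # G[i, j] = <F_i, F_j>, shape (C, C)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices,
    normalized by 1 / (4 C^2 (H W)^2) as in Gatys et al."""
    c, h, w = gen_features.shape
    diff = gram_matrix(gen_features) - gram_matrix(style_features)
    return np.sum(diff ** 2) / (4 * c ** 2 * (h * w) ** 2)
```

During optimization, this loss (summed over several VGG layers and weighted against the content loss) is what gradient descent drives toward zero.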
A secure and efficient system.
An improved image style transfer technique.
The layering of DCNNs is what makes them so powerful. A DCNN processes the red, green, and blue components of an image simultaneously using a three-dimensional neural network. Compared with standard feed-forward neural networks, this significantly reduces the number of artificial neurons necessary to process an image. Images are fed into deep convolutional neural networks, which are then used to train a classifier. Instead of matrix multiplication, the network uses a particular mathematical operation known as "convolution." A convolutional network's architecture typically consists of four kinds of layers: convolution, pooling, activation, and fully connected.
A. Convolutional Layer
A Convolution: Takes a set of weights and multiplies them with inputs from the neural network.
Kernels or Filters: During the multiplication process, a kernel (a 2D array of weights) or a filter (a 3D structure) passes over the image multiple times. To cover the entire image, the filter is applied from left to right and from top to bottom.
Dot or Scalar Product: A mathematical operation performed during the convolution. At each position, the filter's weights are multiplied with the corresponding input values; the products are summed, providing a single value for each filter position.
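The sliding-window dot product described above can be sketched as a naive NumPy convolution (valid padding, stride 1). The function name is illustrative, and real frameworks use far faster implementations:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` left-to-right, top-to-bottom,
    taking the dot product of weights and inputs at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw]
            out[r, c] = np.sum(patch * kernel)  # dot product for this position
    return out
```

Each output pixel is one filter position's unique value, so a 3x3 image convolved with a 2x2 kernel yields a 2x2 feature map.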
B. Activation Layer for ReLU
The convolution maps are then routed through a nonlinear activation layer such as the Rectified Linear Unit (ReLU), which replaces negative values in the filtered images with zeros.
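The ReLU operation is a one-liner in NumPy, shown here only to make the "replace negatives with zero" rule concrete:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: keep positive values, zero out negatives."""
    return np.maximum(0.0, x)
```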
C. Pooling Layer
The pooling layers shrink the image over time, preserving only the most crucial details. For each set of four pixels, for example, either the pixel with the highest value is kept (max pooling) or only the average is kept (average pooling). By lowering the number of calculations and parameters in the network, pooling layers help manage overfitting. After numerous alternations of convolution and pooling layers (this may happen thousands of times in some deep convolutional neural network topologies), a standard multilayer perceptron, or "fully connected" network, sits at the end of the network.
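The 2x2 pooling rule above can be sketched directly in NumPy; the function name and the non-overlapping stride-2 window are illustrative assumptions:

```python
import numpy as np

def pool2x2(feature_map, mode="max"):
    """Downsample by keeping, for each 2x2 block of pixels, either the
    largest value (max pooling) or the mean (average pooling)."""
    h, w = feature_map.shape
    # Trim odd edges, then view the map as (h//2, 2, w//2, 2) blocks.
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))
```

A 4x4 map becomes 2x2, quartering the number of values later layers must process.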
D. Fully Connected Layer
Many CNN topologies contain several fully connected layers, with activation and pooling layers in between. The convolution and pooling layers have filtered, rectified, and reduced the image; its flattened pixels are sent into the fully connected layers as an input vector. The softmax function is applied to the outputs of the final fully connected layer, yielding the probability of the picture belonging to each class (for example, a car, a boat, or an aeroplane).
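The final dense-plus-softmax step can be sketched as follows; the function names are illustrative, and the max-subtraction is a standard numerical-stability trick rather than anything specific to this paper:

```python
import numpy as np

def fully_connected(x, weights, bias):
    """Dense layer: every input feature connects to every output unit."""
    return x @ weights + bias

def softmax(logits):
    """Turn the final scores into class probabilities that sum to 1
    (e.g. P(car), P(boat), P(aeroplane))."""
    z = logits - np.max(logits)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

The class with the highest softmax probability is the network's prediction.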
The image style transfer technique based on the improved style loss function addresses this problem by modifying how the style loss function operates. This method increases image quality while transferring image style. When neural networks are used for style transfer, the Gram matrix recovers the global statistics of the image, but it does not properly extract the relationship between adjacent pixels of the same image. The modified Gram matrix computes the similarity between local features and neighbouring features, solving the problems of poor image quality and unclear style details.
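The paper does not spell out the exact form of the modified Gram matrix, so the following is only a plausible sketch of the idea: take the inner product between the feature map and a spatially shifted copy of itself, so that correlations between adjacent pixels enter the statistic. The one-pixel horizontal shift, the function names, and the loss weighting are all assumptions for illustration.

```python
import numpy as np

def shifted_gram(features, shift=1):
    """Gram-like matrix between a (C, H, W) feature map and a copy
    shifted by `shift` pixels, capturing neighbouring-pixel relations
    that the plain Gram matrix discards."""
    c, h, w = features.shape
    f = features[:, :, :-shift].reshape(c, -1)  # original, cropped
    g = features[:, :, shift:].reshape(c, -1)   # shifted copy
    return f @ g.T

def total_loss(gen, style, content_loss, alpha=1.0, beta=1.0):
    """Weighted algebraic sum of the content loss and the new style loss."""
    c, h, w = gen.shape
    diff = shifted_gram(gen) - shifted_gram(style)
    s_loss = np.sum(diff ** 2) / (4 * c ** 2 * (h * w) ** 2)
    return alpha * content_loss + beta * s_loss
```

As in the standard method, gradient descent on this total loss produces the style-transferred image.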
 Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, Mingli Song, "Neural Style Transfer: A Review," arXiv (cs.CV; cs.NE; eess.IV; stat.ML), 30 Oct 2018 (v7).
 Marco Sanguineti, "Image Generation with Neural Style Transfer and TensorFlow: Creation of Unique Images Using Machine Learning Algorithms," 2 Jul 2021.
 Akhil Singh, Vaibhav Jaiswal, Gaurav Joshi, Adith Sanjeeve, Shilpa Gite, Ketan Kotecha, "Neural Style Transfer: A Critical Review," IEEE Access, 15 September 2021.
 Anany Sharma, "Introduction and Implementation to Neural Style Transfer – Deep Learning," October 22, 2020.
 Pragati Baheti (Microsoft), "Neural Style Transfer: Everything You Need to Know," 29 November 2021.
 Hanmin Ye, Wenjie Liu, Yingzhi Liu, "Image Style Transfer Method Based on Improved Style Loss Function," 18 May 2021.