Style Fader Generative Adversarial Networks for Style Degree Controllable Artistic Style Transfer

Zhiwen Zuo, Lei Zhao*, Shuobin Lian, Haibo Chen, Zhizhong Wang, Ailin Li, Wei Xing* and Dongming Lu
College of Computer Science and Technology, Zhejiang University
{zzwcs, cszhl, lshuobin, cshbchen, endywon, liailin, wxing, ldm}@zju.edu.cn

Abstract

Artistic style transfer is the task of synthesizing content images with learned artistic styles. Recent studies have shown the potential of Generative Adversarial Networks (GANs) for producing artistically rich stylizations. Despite the promising results, these methods usually cannot control the style degree of the generated images, which is inflexible and limits their practical applicability. To address this issue, in this paper we propose a novel method that, for the first time, allows adjusting the style degree of existing GAN-based artistic style transfer frameworks in real time after training. Our method introduces two novel modules into existing GAN-based artistic style transfer frameworks: a Style Scaling Injection (SSI) module and a Style Degree Interpretation (SDI) module. The SSI module accepts the value of a Style Degree Factor (SDF) as input and outputs parameters that scale the feature activations in existing models, offering control signals to alter the style degrees of the stylizations. The SDI module interprets the output probabilities of a multi-scale content-style binary classifier as style degrees, providing a mechanism to parameterize the style degree of the stylizations. Moreover, we show that after training our method can enable existing GAN-based frameworks to produce over-stylizations. The proposed method can be applied to many existing GAN-based artistic style transfer frameworks with only marginal extra training overheads and modifications.
Extensive qualitative evaluations on two typical GAN-based style transfer models demonstrate the effectiveness of the proposed method in gaining style degree control for them.

* Corresponding authors

1 Introduction

Since the seminal work of Gatys et al. [2016], artistic style transfer has seen a booming development in recent years due to its scientific and artistic values. However, most existing artistic style transfer methods heavily depend on the VGG network [Simonyan and Zisserman, 2014], pre-trained on ImageNet [Deng et al., 2009], which requires extensive labeled images and can introduce an extra bias [Geirhos et al., 2018], since the pre-trained network has no access to artistic images during training. In contrast, methods based on Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] directly leverage collections of artistic images to learn the style representation, such as the works of [Li and Wand, 2016b; Elgammal et al., 2017; Zhu et al., 2017; Sanakoyeu et al., 2018; Huang et al., 2018; Lee et al., 2018; Kotovenko et al., 2019a; Svoboda et al., 2020; Chen et al., 2020; Kotovenko et al., 2019b]. In essence, these methods follow the paradigm of image-to-image translation (I2I) [Isola et al., 2017], where a translated image not only preserves its original content but is also stylized. Thanks to the rapid development of GANs, GAN-based artistic style transfer methods have shown great success in producing visually appealing stylizations.

Despite the promising results, existing GAN-based artistic style transfer methods cannot flexibly control the style degree of the generated images. Artistic style transfer, however, is a highly subjective task: a thousand people may have a thousand preferences for the stylizations. Therefore, a practical method allowing users to control the style degree of the stylizations for existing GAN-based artistic style transfer frameworks in real time after training is of great value.
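The abstract describes the SSI module only at a high level: a scalar Style Degree Factor is mapped to parameters that scale the feature activations of an existing stylization network. The following is a minimal sketch of that idea; the class name, the affine mapping from the SDF to per-channel gains, and all shapes are my own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class StyleScalingInjection:
    """Hypothetical SSI-style module: maps a scalar Style Degree Factor (SDF)
    to one multiplicative gain per feature channel (an illustrative assumption)."""

    def __init__(self, channels, rng=None):
        rng = rng or np.random.default_rng(0)
        # Stand-in for learned parameters of the SDF-to-scale mapping.
        self.w = rng.normal(scale=0.1, size=(channels,))
        self.b = np.ones(channels)  # identity scaling when the SDF term is zero

    def __call__(self, features, sdf):
        # features: (C, H, W) activations from an existing stylization network.
        # sdf: scalar, e.g. in [0, 1]; values > 1 would push toward over-stylization.
        gains = self.b + sdf * self.w            # per-channel scales, shape (C,)
        return features * gains[:, None, None]   # broadcast over spatial dims

feat = np.random.default_rng(1).normal(size=(4, 8, 8))
ssi = StyleScalingInjection(channels=4)
out_low = ssi(feat, 0.0)   # identity: activations pass through unscaled
out_high = ssi(feat, 1.0)  # scaled activations for a stronger style degree
```

With sdf = 0 the gains reduce to 1 and the features pass through unchanged, so the factor acts as a continuous knob rather than a discrete switch.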
Nonetheless, real-time style degree control for existing GAN-based artistic style transfer models is a non-trivial problem. The challenges mainly lie in two aspects. On the one hand, unlike methods based on well-defined style losses (e.g., the Gram loss [Gatys et al., 2016] or the first- and second-order moment matching losses [Huang and Belongie, 2017]), which can explicitly control the strengths of the stylizations by adjusting the weights of the style losses [Gatys et al., 2016; Babaeizadeh and Ghiasi, 2018] or by mixing content and style features in the latent space [Huang and Belongie, 2017; Park and Lee, 2019], GAN-based artistic style transfer methods lack a mechanism to parameterize the style degree of the stylizations. Since the stylization is the result of adversarial training in GAN-based models, a possible solution is to adjust the weight of the adversarial loss in existing GAN-based frameworks, hoping to produce stylizations with correspondingly different style degrees. But such a solution has two fatal defects: 1) as the adversarial training encourages the generated

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22) Special Track on AI, the Arts and Creativity
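The explicit control available to moment-matching methods, mentioned above as a contrast, can be sketched concretely: AdaIN [Huang and Belongie, 2017] aligns the channel-wise mean and standard deviation of content features to those of style features, and a mixing coefficient interpolates between content and stylized features in the latent space. Function names and array shapes below are my own illustrative choices, not from this paper.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Align channel-wise mean/std of content features to the style features
    (the first- and second-order moment matching of Huang & Belongie, 2017).
    Both inputs have shape (C, H, W)."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mu) / c_std + s_mu

def blend(content, style, alpha):
    """Mix content and stylized features; alpha in [0, 1] is the
    explicit style-strength knob that GAN-based methods lack."""
    return alpha * adain(content, style) + (1.0 - alpha) * content

rng = np.random.default_rng(0)
c = rng.normal(size=(3, 8, 8))               # stand-in content features
s = rng.normal(loc=2.0, size=(3, 8, 8))      # stand-in style features
weak, strong = blend(c, s, 0.2), blend(c, s, 1.0)
```

Because the style degree here is a closed-form function of feature statistics, sweeping alpha at test time costs nothing; an adversarially trained generator offers no such parameter, which is the gap the SSI/SDI modules target.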