ComboGAN: Unrestrained Scalability for Image Domain Translation. Asha Anoosheh, Eirikur Augustsson, Radu Timofte, Luc van Gool. In arXiv, 2017.
This year alone has seen unprecedented leaps in the area of learning-based image translation, namely CycleGAN by Zhu et al. But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require a quadratic number of models to be trained. And with two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by the time and resources required to process them. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains. We demonstrate its capabilities on a dataset of paintings by 14 different artists and on images of the four different seasons in the Alps. Note that 14 data groups would need (14 choose 2) = 91 different CycleGAN models: a total of 182 generator/discriminator pairs; whereas our model requires only 14 generator/discriminator pairs.
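The scaling claim above reduces to simple counting: pairwise two-domain models grow as n choose 2 (each containing two generator/discriminator pairs), while the proposed scheme uses one pair per domain. A minimal sketch of the arithmetic, with hypothetical helper names chosen for illustration:

```python
from math import comb

def cyclegan_gd_pairs(n_domains):
    """Pairwise approach: one CycleGAN per unordered domain pair,
    each holding 2 generator/discriminator pairs."""
    n_models = comb(n_domains, 2)
    return n_models, 2 * n_models

def combogan_gd_pairs(n_domains):
    """ComboGAN-style approach: one generator/discriminator pair per domain."""
    return n_domains

models, pairs = cyclegan_gd_pairs(14)
print(models, pairs)            # 91 182
print(combogan_gd_pairs(14))    # 14
```

This reproduces the figures quoted in the abstract: 91 CycleGAN models (182 generator/discriminator pairs) versus 14 pairs for 14 domains.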
UNIT: UNsupervised Image-to-image Translation Networks : https://github.com/mingyuliutw/UNIT