Mobile Style Transfer With Image-to-Image Translation

Abdulkader Helwan
Jan 11, 2024

In this article, we discuss the concepts of conditional generative adversarial networks (CGANs) and give a brief overview of image-to-image translation and generative adversarial learning.

This is a series of articles discussing image-to-image translation using CycleGAN. Find the next article here.

Introduction

In this series of articles, we’ll present a Mobile Image-to-Image Translation system based on a Cycle-Consistent Adversarial Network (CycleGAN). We’ll build a CycleGAN that can perform unpaired image-to-image translation, and show you some entertaining yet academically deep examples.

In this project, we’ll use Jupyter Notebook and TensorFlow.

We assume that you are familiar with the concepts of deep learning, as well as with Jupyter Notebooks and TensorFlow. You are welcome to download the project code.

Image-to-Image Translation

Style transfer is built on image-to-image translation, a technique that maps images from a source domain A to a target domain B. What does that mean, exactly? Put succinctly, image-to-image translation lets us take properties from one image and apply them to another image.
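To make the A-to-B mapping concrete, below is a minimal sketch of a generator network in TensorFlow that takes an image from one domain and produces an image in another. It is illustrative only: the build_toy_generator name, the layer sizes, and the random input are assumptions made for this sketch, not the actual CycleGAN generator we build later in the series (that one is deeper, typically with ResNet blocks and instance normalization).

import tensorflow as tf

# Illustrative generator G: domain A -> domain B.
# NOTE: this sketch is not the CycleGAN generator used later in the
# series; the layer counts and sizes here are arbitrary assumptions.
def build_toy_generator(img_size=256, channels=3):
    inputs = tf.keras.Input(shape=(img_size, img_size, channels))
    # Encode: compress the source-domain image into feature maps
    x = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    # Decode: expand the features back into a target-domain image
    x = tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2DTranspose(channels, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(inputs, outputs, name="G_A_to_B")

# Translate one (random) domain-A image, scaled to [-1, 1], into domain B
G = build_toy_generator()
fake_B = G(tf.random.uniform((1, 256, 256, 3), minval=-1.0, maxval=1.0))
print(fake_B.shape)  # (1, 256, 256, 3)

Training such a generator adversarially, alongside a second generator that maps B back to A, is exactly what CycleGAN does; the rest of this series walks through how.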

Image-to-image translation has a few interesting (and fun!) applications, such as style transfer that can start with a photo taken in summer and make it look like it was taken in winter, or vice versa, or make horses look like zebras.

Image-to-image translation also powers deepfakes, which let you digitally transplant one person’s face onto another’s. So if you’ve ever wondered what it would look like if Luke Skywalker were played by Nicolas Cage, image-to-image translation can help you find out.
