DCGAN-tensorflow implementation

 

A few months ago I first started experimenting with GANs. How amazing, I thought. Andreas Refsgaard, who I am doing my internship with, showed me this grand world of GANs. We wanted to start playing around with this crazy thing, so on a Paperspace machine I started running the TensorFlow implementation of the DCGAN from this github repo.
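If you have never looked inside a DCGAN before: the generator is basically a stack of transposed convolutions that turns a random noise vector into an image. The repo's actual code is more involved than this, but a minimal tf.keras sketch of the idea (assuming 64×64 RGB output, which is just my example size) looks something like this:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator():
    """Minimal DCGAN-style generator sketch: noise vector in, 64x64 RGB image out."""
    return tf.keras.Sequential([
        layers.Dense(4 * 4 * 512, use_bias=False),
        layers.Reshape((4, 4, 512)),
        layers.BatchNormalization(),
        layers.ReLU(),
        # Each transposed convolution doubles the resolution: 4 -> 8 -> 16 -> 32 -> 64
        layers.Conv2DTranspose(256, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.ReLU(),
        # tanh keeps pixel values in [-1, 1], matching the usual DCGAN preprocessing
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

generator = build_generator()
fake = generator(tf.random.normal([16, 100]))  # -> (16, 64, 64, 3) batch of fake images
```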

This was also a fun exercise in finding and scraping the web for nice images to use as my training data. I found several ways of scraping images, but by far the easiest one was Fatkun Batch Downloader, which works as a Chrome extension. With it I was able to download loads of images from Google and also pick and choose which ones to drop or keep.

However, this solution doesn't really work for training your GAN, since it needs standardised images of the same size and a consistent style to actually output something coherent.
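In practice, that standardisation just means cropping and resizing everything to the same square resolution before training. A rough Pillow sketch of that step (the folder names and the 64×64 target size are placeholders, not necessarily what I used) could look like this:

```python
from pathlib import Path
from PIL import Image

SRC = Path("raw_covers")   # placeholder folder with the scraped images
DST = Path("covers_64")    # placeholder output folder
SIZE = 64                  # assumed target resolution; pick whatever your GAN expects

DST.mkdir(exist_ok=True)
for path in SRC.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    # Center-crop to a square so every image has the same aspect ratio...
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # ...then resize so every image has the same resolution.
    img = img.resize((SIZE, SIZE), Image.LANCZOS)
    img.save(DST / path.name)
```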

I ended up going with book covers, since they were easy to come by. I used this site, which allowed me to download huge amounts of book covers in different sizes. One simple wget command in the terminal landed me 10,000 images.
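If you'd rather do the bulk download from Python instead of wget, a sketch could look like the following. The URL pattern here is purely a placeholder, not the actual site I used:

```python
import urllib.request
from pathlib import Path

# Placeholder URL pattern -- swap in the real cover-image URLs from whatever
# source you are using; this is NOT the site I downloaded from.
URL_TEMPLATE = "https://example.com/covers/{book_id}.jpg"

out_dir = Path("raw_covers")
out_dir.mkdir(exist_ok=True)

for book_id in range(10000):
    url = URL_TEMPLATE.format(book_id=book_id)
    try:
        urllib.request.urlretrieve(url, str(out_dir / f"{book_id}.jpg"))
    except Exception:
        # Some ids won't resolve to an image -- skip them instead of crashing the run.
        continue
```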

So after training a DCGAN on these images for 150 epochs, I was left with the output images below, which I think look pretty freaking kool.

Some overfitting shows up in the output; it already starts to happen after around 90 epochs.
However… stay tuned, cuz next time we will see how Mikkel uses Progressive Growing of GANs to train on the book covers and achieve much better results.

Peace out 🙂 

Click the images for a larger version <3
