Booksby.ai

Through my internship last semester, Andreas Refsgaard and I created this fun project called Booksby.ai together. Our initial thought was to make a whole website filled with AI-generated items: items that would confuse the hell out of the visitor because they would look real, but aren't.

After some brainstorming and playing around with different models and datasets, we finally came up with the idea of an online bookshop where ALL content would be generated by Artificial Intelligence.

This means that the author names, book titles, book descriptions, reviews, reviewer names, book covers and the content itself would all be made by AI.

We ended up having a lot of fun and learning quite a lot about the different artsy ML algorithms out there.

Booksby.ai ended up becoming an online bookstore which sells science fiction novels. From the description on the website:

Through training, the artificial intelligence has been exposed to a large number of science fiction books and has learned to generate new ones that mimic the language, style and visual appearance of the books it has read.

None of the stories, titles, descriptions, book covers or reviews related to any of the books on Booksby.ai have been written or designed by humans.

All books on Booksby.ai are for sale on Amazon.com and can be ordered as printed paperbacks.

The stories, titles, descriptions and reviews of the books were generated using char-rnn-tensorflow and training data from Amazon.com and Project Gutenberg.

The covers for the books were generated using Progressive Growing of GANs and training images from OpenLibrary.

Images of people reviewing the books were created using a transparent latent GAN.
A model that calculated prices for the generated books was made using ml5js.org regression with a feature extractor and training data (book covers + prices) from Amazon.com.
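Just to illustrate the idea behind that last one: the real price model ran in the browser with ml5.js, but the same feature-extractor-plus-regression trick can be sketched in Keras roughly like below. This is not the project's actual code; the image size, layer sizes and data handling are made up for the example.

    # Rough sketch of the price model idea: a frozen feature extractor
    # with a small regression head on top. The real model used ml5.js in
    # the browser; everything here (shapes, layer sizes) is illustrative.
    from keras.applications.mobilenet import MobileNet, preprocess_input
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model

    base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False                    # keep the pretrained features fixed

    x = GlobalAveragePooling2D()(base.output)
    x = Dense(64, activation="relu")(x)
    price = Dense(1, activation="linear")(x)       # predicted price

    model = Model(inputs=base.input, outputs=price)
    model.compile(optimizer="adam", loss="mse")

    # covers: array of cover images, prices: matching price labels
    # model.fit(preprocess_input(covers), prices, epochs=20, batch_size=16)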

The project is made by Andreas Refsgaard and Mikkel Thybo Loose.

Snapshots of the website showing some of the AI-generated books

Installing tensorflow-gpu, CUDA 10.0 and cuDNN for GTX 1080

So following my own successful installation on my Razer Blade 15 with NVIDIA GTX 1070 (Max Q), the word spread.

My friend invited me to help install Ubuntu, dual booted with Windows 10, on his MSI GT63 Titan 8RG with a sweet-ass GTX 1080 GPU. Being confident as hell given my own success, I thought it would be easy peasy. But no. Apparently the GTX 1080 requires different drivers.

We had some issues trying to install tensorflow 1.8.0 GPU with CUDA 9.2 and cuDNN 7.1.4: tensorflow-gpu itself actually worked fine, BUT the NVIDIA drivers installed along the way did not work with the GPU, which left the screen stuck at a constant 800×600 resolution. #lofi

So instead we went for CUDA 10.0, which worked out fine with the NVIDIA GTX 1080.

www.python36.com was once again the saviour with this amazing tutorial.
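If you want to double-check that tensorflow-gpu actually sees the card after an install like this, a tiny sanity check along these lines (TF 1.x-style, which is what we were running) should print True and list a /device:GPU:0 entry:

    # Quick sanity check that tensorflow-gpu can see the GPU.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.test.is_gpu_available())                          # expect True
    print([d.name for d in device_lib.list_local_devices()])   # expect a /device:GPU:0 entry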

Thanks fam <3 

Sorting image data for GAN training

GANs need a whole lot of image data to output proper results. Therefore one should consider oneself lucky if one comes upon a highly uniform and standardised image dataset for GAN training.

However, if one is not that lucky, a nice way to sort shitty image data from not-so-shitty image data is to train a simple classifier on a subset of the images in the dataset.

So to sort good from bad, I trained a MobileNet model in Keras to distinguish between what I considered good images and bad images. After only 10 epochs the model got 85 percent accuracy on the validation images. That could probably get better, so I retrained a VGG16 to see if it could.
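The full walkthrough is in the notebook linked below, but stripped down, the retraining step looks roughly like this (a sketch with placeholder folder names, shown here with the VGG16 backbone):

    # A stripped-down sketch of the retraining step, assuming the labelled
    # subset is sorted into train/good, train/bad, valid/good and valid/bad.
    from keras.applications.vgg16 import VGG16, preprocess_input
    from keras.layers import Dense, Flatten
    from keras.models import Model
    from keras.preprocessing.image import ImageDataGenerator

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False                    # only train the new head

    x = Flatten()(base.output)
    out = Dense(2, activation="softmax")(x)        # good vs. bad
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

    gen = ImageDataGenerator(preprocessing_function=preprocess_input)
    train = gen.flow_from_directory("train", target_size=(224, 224), batch_size=16)
    valid = gen.flow_from_directory("valid", target_size=(224, 224), batch_size=16)

    model.fit_generator(train, validation_data=valid, epochs=10)
    model.save("good_bad_classifier.h5")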

To see how to retrain your own Keras models and save them for later use, check out my quick and easy notebook here. Also check out DeepLizard's Keras playlist on YouTube, an amazing tutorial!

I also increased the size of the dataset by flipping every single image vertically using PIL, so I now had a dataset double the original size (omg). Here is the script for that. After retraining the VGG16 I got 99% accuracy on the validation set, and I finally reached satisfaction.
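The flipping script linked above boils down to something like this (the folder name is a placeholder, and you could just as well use FLIP_LEFT_RIGHT to mirror the images instead):

    # Flip every image once and save the copy next to the original,
    # doubling the dataset.
    import os
    from PIL import Image

    src = "dataset"                                       # folder with the original images
    for name in os.listdir(src):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        img = Image.open(os.path.join(src, name))
        flipped = img.transpose(Image.FLIP_TOP_BOTTOM)    # vertical flip
        flipped.save(os.path.join(src, "flipped_" + name))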

And so I made a script that ran through the whole dataset, predicted the good images from the bad, and sorted them into new folders: one with the good ones and one with the bad ones. From there, the model's rough mistakes could quickly be moved into the right folder, and the GAN now has some pretty sweet uniform data to munch on.
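That sorting pass is roughly the sketch below. It assumes the classifier was saved as good_bad_classifier.h5 like in the retraining sketch above, and that class index 1 means "good"; in practice, check the class_indices of your training generator.

    # Run every image through the classifier and move it into a good/
    # or bad/ folder based on the prediction.
    import os
    import shutil
    import numpy as np
    from keras.models import load_model
    from keras.preprocessing import image
    from keras.applications.vgg16 import preprocess_input   # match the backbone you trained

    model = load_model("good_bad_classifier.h5")
    src, good_dir, bad_dir = "dataset", "sorted/good", "sorted/bad"
    os.makedirs(good_dir, exist_ok=True)
    os.makedirs(bad_dir, exist_ok=True)

    for name in os.listdir(src):
        path = os.path.join(src, name)
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        pred = model.predict(x)[0]
        dest = good_dir if np.argmax(pred) == 1 else bad_dir   # check class_indices!
        shutil.move(path, os.path.join(dest, name))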

Hare krishna!

When training Keras models crashes your GPU

So for the last couple of weeks I have been trying to retrain different models, both to improve my Keras know-how and to be able to use my own models in other applications (such as live object recognition on the webcam), omg yes!

Of course along the way I had different struggles with overfitting and whatnot, but mostly, when using larger models, my session would crash. Over and over. I thought it was exhausting the CPU and not using the GPU, but after further investigation I came upon others on the world wide web who faced a similar issue.
My most pressing error would occur when loading a model with Keras' load_model(), and it seems to be due to a memory leak. So simply running keras.backend.clear_session() cleared up the memory and made the notebook run smoothly again. Check out the solution in this GitHub thread.
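In practice that just means clearing the session before (or between) model loads, along these lines (the model file name is just a placeholder):

    # Clearing the Keras/TensorFlow session frees the graph and GPU memory
    # that otherwise leaks when models are loaded over and over.
    from keras import backend as K
    from keras.models import load_model

    K.clear_session()
    model = load_model("good_bad_classifier.h5")   # placeholder model file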

Thanks again internet! 

DCGAN-tensorflow implementation

 

A few months ago I first started experimenting with GANs. How amazing, I thought. Andreas Refsgaard, who I am doing my internship in collaboration with, showed me this grand world of GANs. We wanted to start playing around with this crazy thing, so through Paperspace I started running the tensorflow implementation of DCGAN from this GitHub repo.

This was also a fun exercise in finding and scraping the web for nice images to use as my training data. I found several ways of scraping images, but by far the easiest one was Fatkun Batch Downloader, which works as a Chrome extension. With it I was able to download loads of images from Google and pick and choose which ones to drop or keep.

However, this solution alone doesn't work for training your GAN, since the GAN needs standardised, uniform data to actually output something consistent.

I ended up trying out book covers, since they were easy to come by. I used this site, which allowed me to download huge numbers of book covers in different sizes. One simple wget command in the terminal landed me 10,000 images.

So after training a DCGAN on these images for 150 epochs, I was left with the output images below, which I think look pretty freaking kool.

Some overfitting is visible in the output; it starts to show up already after around 90 epochs.
However… stay tuned, cuz next time we will see how Mikkel uses Progressive Growing of GANs to train on the book covers and achieve much better results.

Peace out 🙂 


Installing tensorflow-gpu for Ubuntu 18.04 with CUDA 9.2 and cuDNN 7.1.4

So the process of actually installing the NVIDIA driver that fits my GTX 1070 GPU was a waaay easier step than actually setting it up with tensorflow-gpu, for running machine learning models on the GPU instead of the CPU. After all, this was the main point of getting the Razer Blade 15.

So after trying several online tutorials and getting error after freaking error, I uninstalled everything I had already tried to install and started over with this tutorial, which saved me.

Also, this dude, following the exact same tutorial, gave some pretty good hints on how to avoid certain errors along the way.

I ran into more errors though. When trying to install tensorflow, at around step 13, a fun little error about some “NCCL-SLA.txt” introduced itself. This error occurred apparently because that file didn’t exist. I fixed it by duplicating the “LICENCE.txt” file inside /usr/local/cuda-9.2/targets/x86_64-linux and renaming the duplicate to “NCCL-SLA.txt”.

Afterwards, when actually trying to run pip install tensorflow*.whl, a “Permission Denied” error popped up. I later found out (after an hour on Google) that I simply had to run the command with the --user flag, like so: pip install tensorflow*.whl --user
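With the wheel finally installed, a classic TF 1.x test like the one below confirms that operations actually land on the GPU; look for /device:GPU:0 in the device placement log.

    # Run a small matmul with device logging turned on and check that it
    # gets placed on /device:GPU:0.
    import tensorflow as tf

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))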

What a great day.

See you around <3

Dual boot on Razer Blade 15 – Windows 10 / Ubuntu

So I just got the new Razer Blade 15, and to set it up for both creative stuff (e.g. Ableton) and machine learning I decided to make it dual boot between Ubuntu and Windows 10.
Googling around, it looked like installing Ubuntu on the new Razer Blade could be trouble, but following this tutorial on YouTube it was actually pretty straightforward. A few glitches appeared, but I managed to install it without errors.

After installing Ubuntu on a 100 GB partition on my C drive, I needed to set up the NVIDIA graphics drivers. One quick search gave me this tutorial, which easily helped me set up the system with the NVIDIA drivers. One thing to add though: you need to specify “sudo apt install nvidia-driver-390” instead of “nvidia-390”.

Now off to install CUDA and tensorflow-gpu to make sure future tensorflow models will run on my new kickass GPU.

Praise be.