
Neural Networking for Dummies 17: Improving my David Rose bot

So I FINALLY had success training my first neural network, although the results made very little sense. So now it's time to improve it!

Funny, but mostly nonsense
Alright, this is probably a legit line he said

First I needed to look up how to save the output, since it was just printing to my terminal. It turns out that adding “&>> output.txt” after the python command in the terminal appends everything it prints (including errors) to a text file.

'/home/catjitsu/.local/lib/python3.6/site-packages/Neural Network/'  &>> output.txt
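In bash, “&>>” appends both standard output and standard error to the file, so warnings from the training run get captured too. A minimal sketch of the idea (here a tiny inline Python one-liner stands in for the real script):

```shell
# &>> appends BOTH stdout and stderr to the file.
# The inline python below is a stand-in for the actual training/output script.
python3 -c "import sys; print('generated line'); print('a warning', file=sys.stderr)" &>> output.txt

# Both streams ended up in the file:
cat output.txt
```

Plain “>> output.txt” would only capture standard output and let errors scroll past in the terminal.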

According to the original neural network article, there are a few ways I can make the output better.

First, I can increase the number of epochs I use to train the network. The greater the number of epochs, the more passes the neural network makes over the input file. I started with the suggested 5, and since training takes barely any time, I increased it to 50 just to see what happens.

Well that is certainly an improvement
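For context, one epoch is one complete pass over the training file, so 50 epochs means the network sees every input line 50 times. A bare-bones sketch of that loop (pure Python, names hypothetical; a real library would be updating model weights inside it):

```python
def train(lines, num_epochs):
    """Count how many example updates the model would see during training."""
    updates = 0
    for epoch in range(num_epochs):      # one epoch = one full pass over the data
        for line in lines:
            # a real library would update the model weights on `line` here
            updates += 1
    return updates

corpus = ["Okay, that is correct.", "I am obsessed with this."]
print(train(corpus, num_epochs=5))   # 5 epochs x 2 lines = 10 updates
print(train(corpus, num_epochs=50))  # 50 epochs x 2 lines = 100 updates
```

Going from 5 to 50 epochs is literally 10x more study time on the same material, which is why the output improved so much.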

The second thing I can tinker with is the temperature setting in the output script. Essentially, it is the variable that controls how creative the network is when generating output. According to the article, a temperature of 0.1 makes straightforward (and possibly boring) output, while 1.0 is so creative it may make up words. Increasing it past 1.0 might produce some truly outlandish statements. Right now the output script is set to 0.5, and the output seems like it might just be plagiarizing the lines I fed in.

Still hilarious though
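The temperature knob comes down to a little math: the network's next-character probabilities get their logs divided by the temperature and are then renormalized, so low temperatures exaggerate the favorite choice and high temperatures flatten everything out. A small sketch with a made-up three-character distribution:

```python
import math

def apply_temperature(probs, temperature):
    # Divide log-probabilities by the temperature, then renormalize (softmax).
    scaled = [math.log(p) / temperature for p in probs]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = [0.6, 0.3, 0.1]  # hypothetical next-character distribution
print(apply_temperature(probs, 0.1))  # near-certain: almost always picks the favorite
print(apply_temperature(probs, 1.0))  # unchanged: samples exactly as the network learned
print(apply_temperature(probs, 1.5))  # flatter: unlikely characters show up more often
```

That flattening is why 1.5 happily wanders off the English alphabet: characters the network considers nearly impossible suddenly get a real chance of being picked.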

I decided to make scripts that would run at 0.1, 1.0, and 1.5, and I was not disappointed in the results. 0.1 was truly boring, but at the same time oddly poetic. My favorite part was the “(Screaming repeatedly)” that appeared in the middle of a sea of “I don't know”s.

My AI David Rose’s first poem, I’m going to hang this on the refrigerator

The 1.0 was a bit more interesting, and had a few gems that I turned into memes.

And the 1.5 did not disappoint. It was so creative that it didn't even bother using the English alphabet at times. Some of it came out downright creepy.

Ok, I don’t think I’ll be using above 1.0 again

The last thing I can do to improve my neural network is the most obvious: simply increase the number and quality of the lines in the input file I used to train it. The more data the network has to study about David Rose, the better it can learn to talk like him. And in my haste to train a network, I fed in data I didn't thoroughly review to remove non-David lines, so I definitely have room for improvement here.

Fairly certain Moira Rose was the one talking about her diamonds
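Cleaning the input can be scripted too. Assuming the transcript lines are formatted like “SPEAKER: dialogue” (an assumption; the real file may differ), a quick filter keeps only David's lines and strips the speaker tag:

```python
def david_lines(lines):
    """Keep only dialogue spoken by DAVID, with the speaker tag removed."""
    kept = []
    for line in lines:
        speaker, sep, text = line.partition(":")
        if sep and speaker.strip().upper() == "DAVID":
            kept.append(text.strip())
    return kept

raw = [
    "DAVID: Okay, that is correct.",
    "MOIRA: Where are my diamonds?",   # not David -- dropped
    "DAVID: I am obsessed with this.",
]
print(david_lines(raw))  # only the two David lines remain
```

Running something like this over the transcripts before training would keep Moira's diamonds out of David's vocabulary.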

My next step is to automate the process of generating a random David Rose meme. In order to do that, I will need to make a script that grabs an image of David Rose online and then overlays a randomly generated line on it.

If I can get this to work with David Rose, I feel I could get this to work with anything, so I might try doing this to other characters. Or maybe I’ll just generate a bunch of fake tweets from different public accounts for fun.

Eventually I want to use this to create randomly generated content for my Instagram using the cat pictures on my hard drive. That way I can focus on taking cool pictures and dropping my favorites in a folder, and scripts can handle the captions and publishing.

But for now, looks like I’ll have to keep sharing cat pictures the old way. So enjoy!
