
2015 Winter AI Project – Wall Avoider

The 2015 school year was when I began dabbling in AI. The idea had always amazed me. Everything from movie depictions of AI to the real, practical uses it has in modern-day life was interesting, and I wanted to know more about it.

During the holiday break, I decided to dedicate my programming efforts toward learning Neural Networks. I knew a little bit about them beforehand, like the architecture and the theory of how they worked, but had no idea how to implement them in my own projects. The wall avoider project was dedicated to making a very, very simple Neural Network that progressively got better at avoiding walls using a genetic algorithm.

But first, I will briefly explain what Neural Networks are. In short, they are systems loosely modeled after the human brain: a set of 'neurons' and 'synapses' manipulates a given set of input data so that a desired output is reached. The issue, however, is that we do not know the weights and values necessary for the network to give us our desired output. There are several ways of measuring error and finding the best set of values. For this project, I used a genetic algorithm to pick out the best networks and 'breed' them with other good networks, producing an equally good if not better Neural Network. This works by choosing two Neural Networks that produce a long run (i.e., don't hit the wall) and randomly choosing weights from both networks to form a new Neural Network.
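As a rough illustration of that 'breeding' step, here is a minimal Java sketch (not the project's actual code; the class and method names are made up) that builds a child weight set by picking each weight at random from one of two parents:

```java
import java.util.Random;

// Minimal crossover sketch: a network is represented only by its weights.
public class Crossover {
    static final Random rng = new Random();

    // Build a child by inheriting each weight from either parent with 50/50 odds.
    static double[] breed(double[] parentA, double[] parentB) {
        double[] child = new double[parentA.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = rng.nextBoolean() ? parentA[i] : parentB[i];
        }
        return child;
    }
}
```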

[Image: Anatomy of a Neural Network.]

My project's Neural Network has 3 input nodes and 1 output node. The value of each input node is either a 1 or a 0, depending on whether or not its respective 'eye' (a line extending from the center of the runner) hits a wall. These values are each multiplied by their respective weights (random at first, then altered through the genetic algorithm), summed together, and then put through an activation function. In my example I used a sigmoid function, a(x) = 1 / (1 + e^(-x)), as the activation function. Sigmoid functions are nice because their range is (0, 1), resulting in either an 'on' or an 'off' setting. If a(x) > 0.5, the runner turned right. If a(x) < 0.5, it turned left. If a(x) = 0.5, it didn't turn at all.
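To make that forward pass concrete, here is a small Java sketch of the computation described above (the class and method names are hypothetical): three binary eye inputs, one weight per eye, a sigmoid activation, and a turn decision based on the output.

```java
// Sketch of the wall avoider's forward pass: weighted sum of eye inputs,
// squashed through a sigmoid, then mapped to a steering decision.
public class WallAvoiderBrain {
    double[] weights = new double[3]; // random at first, tuned by the genetic algorithm

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // eyes[i] is 1 if eye i sees a wall, 0 otherwise
    double activate(int[] eyes) {
        double sum = 0;
        for (int i = 0; i < eyes.length; i++) {
            sum += eyes[i] * weights[i];
        }
        return sigmoid(sum);
    }

    // > 0.5 turn right, < 0.5 turn left, exactly 0.5 go straight
    int steer(int[] eyes) {
        double a = activate(eyes);
        if (a > 0.5) return 1;
        if (a < 0.5) return -1;
        return 0;
    }
}
```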

[Image: Graphic of my Neural Network. i = input (eyes), W = weights, a = activation.]

After constructing this system, I began running tests. As the objective wasn't a hard one to learn, the networks mastered the challenge quickly. In the GIFs, on the left you can see the trial number and the fitness value (how well the network is doing) for each member of the population: p# = parent, c = child, and ^ marks the current best network.

[GIF: Trial 1, struggling.]

[GIF: Trial 7, a pretty decent network has been bred, but there's still room for improvement.]

[GIF: Trial 25, all networks are doing very well.]

This was a great intro to Neural Networks. I definitely want to take this idea further and create deeper, more complex networks for harder tasks.

If you want the source code, you can get it here (Java, 26kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqSVRvbXpzcm1VRVE/view?usp=sharing

2015 Winter AI Project – Maze Runner

The 2015 school year was when I began dabbling in AI. The idea had always amazed me. Everything from movie depictions of AI to the real, practical uses it has in modern-day life was interesting, and I wanted to know more about it.

After my Chess project, I wanted to become a little more serious in my research into artificial intelligence. I looked up everything from examples of how it is used in day-to-day life to videos of lectures by college professors. Through these lectures, and even some college thesis projects, I learned a lot about the concept of reinforcement learning, which made me want to program something to test out the idea.

First, however, a very brief overview of my current understanding of reinforcement learning. Reinforcement learning is a system in which the AI learns about the environment it's placed in through a set of actions and states. The AI is rewarded or punished for good actions versus bad actions as it moves through the various states (depending on whether you want to maximize utility or minimize cost; either way is fine, but for my example I will be using a reward system), but it doesn't know which actions are good or bad until it experiences them. Thus, over time, the AI should gravitate toward the actions that earn it the largest total reward.
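As a loose illustration of that idea (this is not the maze project's code, and the exploration rate is an assumption I've added), an agent might keep a reward estimate per action, usually pick the action with the best estimate, and occasionally try something random so it can discover better options:

```java
import java.util.Random;

// Toy reinforcement-learning agent: exploit the best-looking action most of
// the time, explore a random one occasionally.
public class GreedyAgent {
    static final Random rng = new Random();
    double epsilon = 0.1;      // chance of exploring (assumed value)
    double[] estimatedReward;  // one running estimate per action

    GreedyAgent(int numActions) {
        estimatedReward = new double[numActions];
    }

    int chooseAction() {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(estimatedReward.length); // explore
        }
        int best = 0;
        for (int a = 1; a < estimatedReward.length; a++) {
            if (estimatedReward[a] > estimatedReward[best]) best = a;
        }
        return best; // exploit what has been learned so far
    }
}
```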

Alright, now to my project. I decided to go with something simple and intuitive to explore reinforcement learning: an artificial intelligence that finds the quickest path to the exit of a predefined maze. As mentioned before, the AI starts out knowing absolutely nothing about the maze it is placed in and must learn everything from scratch.

[Image: Original board displaying the reward value for each tile (currently 0 because the AI doesn't yet know which tiles are good and bad). The blue tile is the current position of the runner (controlled by the AI) and the green tile is the exit.]

The reward system essentially sums the rewards of all the tiles the AI visited during a trial, divides that sum by the number of tiles visited, and uses the resulting value to update the reward of every tile on the path. Each tile averages that value into the rewards it received from previous trials. The AI gets a heavy reward boost for completing the maze faster than before, and a heavy punishment if it was slower (the overall reward is increased or decreased by a factor of 2). Thus, over time, the AI slowly becomes better and better at solving the maze.
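Below is a minimal Java sketch of that update rule as I've described it. The names are hypothetical, and it assumes the exit tile's reward is seeded with a positive value elsewhere so that nonzero values can spread back through the maze:

```java
import java.util.List;

// Sketch of the tile-reward update performed after each trial.
public class MazeRewards {
    double[] tileReward;  // running average reward per tile
    int[] tileVisits;     // how many trials have updated each tile
    int bestLength = Integer.MAX_VALUE;

    MazeRewards(int numTiles) {
        tileReward = new double[numTiles];
        tileVisits = new int[numTiles];
        // assumption: the exit tile's reward is seeded positive elsewhere
    }

    // path = indices of the tiles visited during one trial, in order
    void updateAfterTrial(List<Integer> path) {
        double sum = 0;
        for (int tile : path) sum += tileReward[tile];
        double trialValue = sum / path.size();

        // heavy boost for beating the previous best, heavy penalty for missing it
        if (path.size() < bestLength) trialValue *= 2;
        else if (path.size() > bestLength) trialValue /= 2;
        bestLength = Math.min(bestLength, path.size());

        // fold the trial value into each visited tile's running average
        for (int tile : path) {
            tileVisits[tile]++;
            tileReward[tile] += (trialValue - tileReward[tile]) / tileVisits[tile];
        }
    }
}
```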

[GIF: The 10th trial. The general path the AI takes (the tiles with the highest reward) is rather sporadic.]

[GIF: The 50th trial. The AI has begun to iron out the kinks in its path, but it isn't the most efficient yet.]

[GIF: The 100th trial. The AI has almost found the most efficient path.]

I really enjoyed this project; watching something learn, especially a program, is fascinating. I'm excited to see where I can take this in the future.

If you want to play around with this, you can get the source code here (Java, 24kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqNUVpTFFQdUNaaUk/view?usp=sharing

2015 Winter Game/AI Project – Chess

The 2015 school year was when I began dabbling in AI. The idea had always amazed me. Everything from movie depictions of AI to the real, practical uses it has in modern-day life was interesting, and I wanted to know more about it.

I had been wanting to make chess for a while, as I've long been a fan of the game. What better way to make it than to explore simple self-learning artificial intelligence at the same time? Before getting to the AI, however, I first had to make the game itself. Generating a list of possible moves was simple enough, but the real challenge was implementing more complex rules like castling, checks, and stalemates.

[Image: Final game showing all possible moves.]

After spending a while messing around with the game itself, it was time to begin working on the artificial intelligence. Having known nothing about the subject (I started learning about concepts like neural networks and genetic algorithms shortly after), I decided on a simple concept: the AI would save all the games it had previously played, look back through them to determine the statistical probability that it would win if it made a given move (based on the outcomes of past games containing the same move), and decide whether or not to proceed with it. If it decided the move was bad, it would instead make a random move and add that to the database.
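Here is a rough Java sketch of that idea (not the actual program; the names, the game-record shape, and the 0.5 "good move" cutoff are all assumptions): estimate a move's win rate from the saved games, and fall back to a random move when nothing looks promising.

```java
import java.util.List;
import java.util.Random;

// Toy move chooser: pick the legal move with the best historical win rate,
// or a random move if no move clears the threshold.
public class MoveChooser {
    static final Random rng = new Random();
    static final double THRESHOLD = 0.5; // assumed cutoff for a "good" move

    record GameRecord(List<String> moves, boolean won) {}

    // Fraction of saved games containing this move that were won.
    static double winRate(String move, List<GameRecord> history) {
        int played = 0, won = 0;
        for (GameRecord g : history) {
            if (g.moves().contains(move)) {
                played++;
                if (g.won()) won++;
            }
        }
        return played == 0 ? 0.0 : (double) won / played;
    }

    static String choose(List<String> legalMoves, List<GameRecord> history) {
        String best = null;
        double bestRate = 0;
        for (String m : legalMoves) {
            double r = winRate(m, history);
            if (r > bestRate) { bestRate = r; best = m; }
        }
        // nothing promising in the database: explore with a random move instead
        if (best == null || bestRate < THRESHOLD) {
            return legalMoves.get(rng.nextInt(legalMoves.size()));
        }
        return best;
    }
}
```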

There were a few flaws in my method, however, that presented themselves as I neared the end of writing the program. Mainly, it would take ages for the AI to become even decent at the game: no matter how fast it played, it would almost always play a brand-new game every single iteration (Hardy's estimate of the number of possible chess games is roughly 10^(10^(50)), more on the subject here), so the file containing all of its previous games would grow unbelievably large long before the AI got good.

[Image: A fraction of a single game stored in a .txt file.]

Despite its shortcomings, this project was still a simple and fun introduction to the world of self-learning artificial intelligence.
