Tagfall

2018 Fall Game Project – Cards

Fall semester of my Junior year at Virginia Tech was a more laid-back one compared to my earlier semesters, mainly because I was a year ahead of schedule and most, if not all, of my hard classes were behind me. I used the solid amount of free time I had to start researching new and interesting computer science topics while building on the skills I had learned previously, either at my internship with Cox or through the work I had done on my own. After coming up with a couple of different ideas, I settled on a basic card game, twisted in such a way as to challenge what I knew and force me to learn some new and interesting concepts.

Essentially, my goal was to create a multiplayer standard 52-card game able to support any rules the players decided to enforce. The idea was to keep the backend simple so that I had the liberty of focusing on the more challenging and new concepts, like communicating with a central game server. I also wanted to test my abilities by writing the backend in C, as it is well known that lower-level languages like C and C++ are great for games due to their relatively fast execution times. However, building every other aspect of a game in C (i.e. rendering) can be rather frustrating and time consuming, so I decided to code the frontend in another language and settled on Python/Pygame. Finally, as I wanted to make the game multiplayer, I needed a server framework able to communicate with n clients at a time while still interfacing with the backend C code. After doing some research, I decided on ASP.NET: I am generally unfamiliar with C# and the .NET framework, and wanted to use the opportunity to learn something new instead of falling back on a framework I'm more comfortable with. Using C# also allowed me to interface with the backend incredibly easily (detailed later). I decided to deploy the server on AWS EC2, as I'm always looking for an excuse to learn more about the Cloud/AWS and practice using it.

The final architecture of the card game project.

Coding the C backend was the easiest part of the project, mainly due to my comfort level with the language (thanks to the many difficult low-level programming courses taught at Virginia Tech), but also due to the nature of the project. The game is fairly simple; there exist only a few noteworthy objects/data structures: the main deck, the cards on the table, and the cards in each player's hand. All of these can be represented as a Deck, each containing Cards. I created the Deck data structure as a hybrid between a stack and an array list. It supports popping and pushing, as drawing usually entails popping a card from one deck and pushing it onto another, but it also supports placing and picking cards at specific indices, as a player picks and chooses specific cards to move between the table and their hand. As of right now, the backend supports 4 basic functions: drawing from the main deck to the table, drawing from the main deck to a player's hand, picking a card up from the table and placing it into a player's hand, and placing a card from a player's hand onto the table. Keeping it relatively simple, at least for the time being, allowed me to write this portion of the project quickly and move on to the newer and more challenging parts. I definitely plan on revisiting this section and fleshing it out with more functionality later down the line.
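To illustrate the idea, here's a minimal sketch of the Deck's hybrid stack/array-list behavior. The real implementation is in C; I've written it in Python for readability, and the names here are illustrative rather than the exact ones from my code:

```python
# Illustrative sketch of the Deck's hybrid stack/array-list semantics.
# The actual backend is written in C; these names are my own for clarity.

class Deck:
    def __init__(self, cards=None):
        self.cards = list(cards) if cards else []

    def push(self, card):
        """Place a card on top of the deck (stack behavior)."""
        self.cards.append(card)

    def pop(self):
        """Draw the top card (stack behavior)."""
        return self.cards.pop()

    def pick(self, index):
        """Remove a card from a specific position (array-list behavior)."""
        return self.cards.pop(index)

    def place(self, index, card):
        """Insert a card at a specific position (array-list behavior)."""
        self.cards.insert(index, card)

# The 4 supported operations all reduce to pops/picks and pushes:
def draw_to_table(main_deck, table):
    table.push(main_deck.pop())

def draw_to_hand(main_deck, hand):
    hand.push(main_deck.pop())

def pick_up(table, index, hand):
    hand.push(table.pick(index))

def play_card(hand, index, table):
    table.push(hand.pick(index))
```

Representing every pile of cards as the same Deck type is what keeps the function count so low: every rule a group of players might enforce is just some sequence of these moves.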

Next came creating the server and the wrapper for the backend. As I stated before, this was relatively easy: choosing C# made interfacing with the C backend simple (through DllImport), and the .NET framework allowed me to stand up a basic RESTful web API quickly (through MVC). Essentially, the client makes requests to various endpoints depending on the method it wants to invoke, so all I really had to do was create endpoints for each of the 4 functions described above. Beyond that, I created register/unregister endpoints for when a player joins or leaves the session (mainly to ensure that the number of players didn't exceed the maximum of 4, and to assign each player a specific player id), and basic start/terminate endpoints. After creating the C# server, all that was left was to publish it on an AWS EC2 instance, which admittedly took a while since I was unfamiliar with the process, but was achievable in the end after some googling.

Finally came creating the frontend in Python. Thankfully, interfacing with the server was relatively easy thanks to the urllib library. As a result, I really just needed to focus on getting the correct information from the server and translating it into something that could be easily understood and rendered using pygame. I achieved this by adding an endpoint to the server that returns a JSON object of the entire table (including the main deck, the cards on the table, and each of the hands). The Python code regularly polls this endpoint and compares the result against a stored version; if they differ, the local versions of the decks are updated to reflect the differences. All I really needed to do at that point was render the cards and come up with an intuitive way to trigger the backend functions. I took inspiration from the popular online card game Hearthstone and implemented a dragging feature, where players can click and drag cards from their hand to the table or vice versa.
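The polling logic boils down to a few lines. Here's a rough sketch of it; the server address, endpoint path, and JSON shape are illustrative stand-ins, not the actual values from my deployment:

```python
# Rough sketch of the client's polling loop. The endpoint URL and JSON
# structure here are hypothetical placeholders for illustration.
import json
import urllib.request

SERVER = "http://example-ec2-host:5000"  # hypothetical server address

cached_state = None

def poll_table():
    """Fetch the full table state and report whether it changed."""
    global cached_state
    with urllib.request.urlopen(SERVER + "/api/table") as response:
        state = json.loads(response.read().decode("utf-8"))
    if state != cached_state:
        cached_state = state
        return True   # caller should re-render the decks and hands
    return False
```

The pygame render loop just calls this check each frame and redraws only when it reports a change, which keeps the client simple and the server authoritative.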

The current status of the Python client, with 4 players connected to the central server. You can see the main deck represented in the middle of the screen as the stack of face-down cards, and the on-table cards represented as the stack of face-up cards. Each player's hand is also present, with only the current player's hand visible.

I still have much I want to achieve with this project. After an initial playtest with friends, we discussed a couple of features that would be good to include later down the line: for instance, the ability to show cards from one's hand without having to place them on the table, the ability to discreetly trade cards between players, and the inclusion of a discard pile. I'm eager to continue optimizing and working on this to see where I'm able to take the final product. If you'd like to see the code for this project, you can view it on GitHub here.

Thanks for reading! If you have any questions or comments, feel free to write them below or email me at [email protected].

 

2017 Fall Cluster Project – Raspberry Pi 3 Cluster

After using VT's cluster for so long, I was interested in learning how the computing behind it worked, and decided to make my own cluster of Raspberry Pis. I contemplated the benefits of embarking on such a project and came up with a multitude of experiments I was interested in exploring that would benefit from a cluster-like structure, including password decryption, simulations for genetic algorithms, and more. What really tipped me over, however, was the already-written and free-to-use libraries MPI and MPI4PY, which allowed me to use Python to write easy-to-understand, easy-to-implement code that utilized cluster computing. Thankfully I had a lot of the components on hand already, but I estimate the overall cost to be around $270. This includes the cost of 4 Raspberry Pis, 4 8GB microSD cards, a stand for the Pis, an Ethernet switch, 5 Ethernet cords, a USB hub, and 4 micro-USB-to-USB cords. There's a great video series by Tinkernut on YouTube that I used for basic reference when configuring my Pis, and I'd certainly recommend following along with it if you have any difficulties putting one of these together. If you have any questions about any of the components I used to build my cluster, feel free to email me at [email protected].

For my cluster, I used 4 Raspberry Pi 3 B boards, each running a distribution of CentOS 7 Linux. I decided to use CentOS 7 over a friendlier version of Linux (e.g. Raspbian, a distribution written specifically for the Pi) simply because VT's cluster also uses CentOS. Each Pi is given power and connected to an Ethernet switch, which is in turn connected to my router so I can SSH into each of the nodes.

Picture of my cluster

I have the Pis set up in a master-slave system, in which the head node takes one big problem and divides it into smaller problems for the slave nodes to compute. This allows me to, say, run genetic algorithm/ANN simulations on the slave nodes while performing breeding and fitness evaluation on the master node. I definitely plan on experimenting with AI-related speedups in the future, so be on the lookout.

After building my cluster, I found it fitting to at least write a short piece of test code to see how much the cluster structure sped things up. I decided to time Leibniz’s Pi approximation to the 100,000th term on a single worker vs. all three workers. You can see the results of that experiment below.
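A minimal sketch of what such a test can look like with mpi4py is below. My actual script may differ slightly; the chunking here is illustrative, and it also shows exactly where the integer-division error mentioned further down creeps in:

```python
# Sketch of the Leibniz pi test using mpi4py, launched with something like
#   mpiexec -n 4 -hostfile nodelist python leibniz.py
# The exact hostfile and chunking scheme are illustrative assumptions.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 100000           # total number of series terms
chunk = N // size    # integer division: any remainder terms are silently
start = rank * chunk # dropped, which would explain the slightly different
end = start + chunk  # estimates between the 1-worker and 3-worker runs

# Each process sums its slice of the Leibniz series: pi/4 = sum (-1)^k/(2k+1)
partial = sum((-1) ** k / (2 * k + 1) for k in range(start, end))

# Collect the partial sums onto the master node (rank 0)
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("pi is approximately", 4 * total)
```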

Testing 1 vs. 3 workers

You can see that in the test where I used only one worker (nodelist_small), the runtime was approximately 3 times as long as when I used 3 workers (nodelist), which is to be expected. You can also see that the estimates are slightly different, though I believe this error to be a result of integer division losing some terms when divvying up the work for the slave nodes (as this is just a small test, I'm leaving this error for now). You can get the code for this test here.

This was an incredibly fun and practical project to work on during my Thanksgiving break. Be on the lookout for more updates as I begin to experiment with the cluster and what it can do. Hopefully I can run some experiments before my next break, but as exam season is coming up, I may need to focus more on my studies. If you have any questions or comments, feel free to email me or leave a comment.

When idle, my cluster is aiding the BOINC-sponsored SETI@home project. I chose this over other BOINC projects mainly due to its friendliness towards ARM processors (which the Raspberry Pi uses).

2017 Fall Data Analytics Competition – VT/GDMS Health in the US

Moving into the fall semester of my Sophomore year at Virginia Tech, I was a little starved of side projects, simply due to the amount of schoolwork I had to finish each week. Luckily, however, I was able to make time for a couple of projects on the side, one of them being this data analytics competition, sponsored by the CMDA club here at VT and GDMS. I participated in the beginners' competition in a group of 3, the other members being my good friends Eric Fu and Kali Liang. In the competition, we were given a large data set (a .csv file) involving health issues across various cities throughout the United States (500 entries). Some examples of the categories, just to give a feel for the type of data we received, included ailments such as asthma, obesity, heart disease, and kidney disease. We were also given general data involving disease prevention (i.e. access to health care, going to regular checkups, etc.) and unhealthy behaviors (i.e. binge drinking, drug use, lack of leisure time and sleep, etc.).

I believe that my group’s approach to the data was a rather unique one, given our different backgrounds. Eric is a BIT major, Kali is a CMDA (Computational Modeling and Data Analytics) and Statistics major, and I am, of course, a Computer Science major. As a result of our different backgrounds, we each had different ideas as to how to approach the information to find correlations between the categories. For instance, my approach was to code basic programs in Python using numpy and matplotlib to generate scatterplots and other useful graphs, while Kali was comfortable using R to find more statistics-heavy information (like GLM/LS Means) and Eric was comfortable working directly in the given Excel file to look for correlations.

First, we decided as a group to look for interesting differences in the data by splitting it up into regions. We explored a number of splits, and ended up using a regional split (NE, S, MW, W), a coastal split (west/east coast, with everything else lumped into a "mainland" group), and a split based on population size to explore the uniqueness of big cities (we defined a big city as any city more than 2 standard deviations above the mean population for the entire data set). We also explored a split by state, but found the data to be too volatile, and with 51 groups (incl. Washington DC), there were still too many points to investigate. My Python scripts came into good use here: I was able to write a quick program that generated scatterplots of each region for any given category, along with a line connecting the means of each subgroup (with and without outliers) for visual investigation. I also coded a script that plotted scatterplots of any 2 given categories against each other to see if they were related.
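The core of that plotting helper is simple. Here's a sketch of the idea; the column names, data layout, and function name are illustrative assumptions rather than my exact script:

```python
# Sketch of the region-comparison plotting helper. The data layout
# (list of dicts keyed by column name) and "Region" key are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def plot_category_by_region(data, category, regions):
    """Scatter a health category per region and connect the region means."""
    means = []
    for i, region in enumerate(regions):
        values = np.array([row[category] for row in data
                           if row["Region"] == region])
        plt.scatter(np.full(len(values), i), values, alpha=0.4, label=region)
        means.append(values.mean())
    plt.plot(range(len(regions)), means, "k-o")  # line through region means
    plt.xticks(range(len(regions)), regions)
    plt.ylabel(category)
    plt.legend()
    plt.show()
```

Running this for each category made it quick to eyeball which splits looked promising before bringing in the heavier statistics.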


The first graph details the difference in Obesity in the east vs. west coast, while the second illustrates the difference in Obesity in the US Regions.

From this initial stab at the data, we found what we believed to be significant differences in general health between the east coast and the west coast (the west coast was significantly lower than the east in almost all ailment categories). Worth noting, we also determined that health in the west region was generally better than in the other regions. From this lead, we explored the differences in the means to attempt to understand why the west is generally healthier. We observed that both the west coast and the west region have significantly lower obesity rates compared to other regions, and decided to explore that as a possibility (see the mean comparison charts above). We confirmed our assumption by looking at the general trends for obesity and various other ailments, and observed that, in general, there is a strong link between obesity and other serious health problems.


Some of the scatterplots we used to illustrate the link between Obesity and other serious illnesses like heart disease (CHD) and asthma (CASTHMA). 

To further provide evidence for our assumptions, Kali used regression analysis, specifically generalized linear models with least-squares means, to look for significance between these groups. Using these tests was helpful because it not only aided in eliminating some of the confounding variables that may have been lurking within the data, but also made it extremely clear whether a difference was significant, both numerically through the p-value and with a visual aid. Essentially, when graphed, an interval of the mean is plotted for each region (each interval determined using a 95% CI). If the intervals for two regions don't overlap, then the difference between the two can be considered significant and is definitely worth looking into. As you can see below, we were able to confirm our assumption that the difference between obesity rates on the east and west coasts is significant.
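Kali's actual analysis was done in R, but the basic interval-overlap idea can be sketched in a few lines of Python. The obesity values below are hypothetical placeholders, not the competition data:

```python
# Sketch of the 95% CI overlap check (the real analysis used R's GLM/LS
# means; this only illustrates the interval-comparison idea).
import numpy as np
from scipy import stats

def mean_ci(values, confidence=0.95):
    """Return the (low, high) confidence interval for the mean."""
    values = np.asarray(values, dtype=float)
    sem = stats.sem(values)  # standard error of the mean
    return stats.t.interval(confidence, len(values) - 1,
                            loc=values.mean(), scale=sem)

east_coast_obesity = [31.2, 29.8, 33.5, 30.1, 32.4]  # hypothetical rates
west_coast_obesity = [24.6, 26.1, 23.9, 25.3, 27.0]  # hypothetical rates

east_ci = mean_ci(east_coast_obesity)
west_ci = mean_ci(west_coast_obesity)

# If the intervals don't overlap, the difference is worth investigating.
overlap = east_ci[0] <= west_ci[1] and west_ci[0] <= east_ci[1]
print("intervals overlap" if overlap else "significant difference")
```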


LSM Chart confirming our assumption that the difference in Obesity rates in the east and west coasts is significant

However, we were not satisfied with simply recommending that obesity rates be lowered, so we looked into possible variables affecting obesity that weren't already included in our data set. Ideas for this included finding the density of fast food restaurants within each city and measuring the nutritional differences between cities. Unfortunately, we were unable to find any reliable data on the density of fast food restaurants, but we were able to pull data from the CDC website on the number of people in each US state who ate less than one fruit and one vegetable. Using this data, we first confirmed the link between poor nutrition and obesity, then went on to make our recommendation. Eric's background in business came in handy, as we gave various recommendations that made sense from an economics point of view (i.e. decrease the tax or cost of fruits and vegetables in obese regions, subsidize farmers to increase fruit and vegetable output, or push advertisements encouraging people to eat healthier).

As a slight aside from our main focus on obesity across the US, we also found that stress (which we defined as a lack of leisure time (LPA) and a lack of sleep) was high in big cities. We noticed that not only were kidney problems slightly more prevalent in bigger cities, but so were other ailments such as strokes and diabetes. We were able to link stress and diabetes with kidney problems (seen below), and further connect kidney problems with strokes. Unfortunately, due to the competition's time constraints, we were not able to explore these connections much deeper; given more time, we definitely would have done so.


Mean comparison showing increase in lack of sleep in big cities and scatterplot showing the relationship between sleep deprivation and kidney problems.

This competition was extremely fun to participate in. Unfortunately, we did not place in the top 3 teams for our section, although we did get an honorable mention. I want to thank GDMS and the CMDA club at VT for making this a reality. I’ve always been interested in data analytics due to my background in neural networks, and it was incredibly fun to work on something data analytics related even though I haven’t been able to take any statistics courses as of yet.

If you’d like to see any more plots, or would like to see the scripts I made for the purpose of this competition, feel free to email me at [email protected].
