Saturday, March 16, 2013

Blog<Weight Loss>

I promised this like 2 months ago, but some stuff came up that is taking up most of my weekend time (which I promise to blog about at a TBD date). So, as I mentioned in this post, I am trying to lose weight again. And I've got some other goals as well, mostly so that I have more motivation than just staring at the scale every week, and (more importantly) because who the fuck cares how much a person weighs? I want to look and feel good.

Anyway, here's what I've been up to for the past 2 and a half months. You can read some of my previous posts on the subject if you are curious about the reason for the stuff I'm doing. Looking back, I still agree with almost all of it. Now is probably a good time to mention that I am not a certified nutritionist, or personal trainer, or medical doctor, or workout robot, or whatever. So please consult your physician before attempting anything you read on the internet. Or don't, but just remember that you're doing it at your own risk.

I calculated my BMR at around 2200 calories, and I try to eat that much every day. This will change as I lose more weight. I used the MyFitnessPal app for the iPhone to track my meals for about 4 weeks, until I had a good understanding of what I was eating every day. Now it mostly goes unused, but it was a great tool for that first month.
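
For reference, here's a quick sketch of the kind of calculation involved. I don't say which formula I used above, so this uses the Mifflin-St Jeor equation as an illustration, with made-up height and age:

```scala
// Mifflin-St Jeor BMR estimate (one common formula; an assumption here,
// not necessarily the one I actually used). Inputs in lbs/inches are
// converted to metric first.
def bmr(weightLbs: Double, heightIn: Double, ageYears: Int): Double = {
  val kg = weightLbs * 0.453592
  val cm = heightIn * 2.54
  10 * kg + 6.25 * cm - 5 * ageYears + 5 // +5 for men, -161 for women
}

// Hypothetical example: 238 lbs, 6 ft tall, 30 years old
println(bmr(238.0, 72.0, 30)) // somewhere around 2100
```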

There's a funny thing about counting your calories and macronutrients (protein, fat, carbs). In order to stay under your magic calorie count for the day and still feel satisfied after a meal, you have to eat healthier food. The crap you get at McDonalds or Taco Bell is high in calories and low in satiety. So, in order to feel like I was eating enough, I switched to lean meat, salad, homemade sandwiches, broth-based soups, yogurt, and basically anything that you can find on the outer walls of the grocery store. This is where they put the healthy stuff. They put the processed stuff in the middle because it doesn't go bad and they don't have to switch it out as much. At this point, I can basically eat anything in my fridge or pantry without feeling bad about it because it's all pretty healthy (I am still conscious of proper portion sizes though).

I love to grill, so I'll cook up 3-4 chicken breasts at a time and eat it for the next few days. Anyway, I feel like I get to eat until I'm full and that's about the perfect amount of calories. I have a pretty boring diet, but that's fine with me.

I also cut out sodas and I only occasionally drink caffeine (twice a week). This has forced me to drink more water, which has also forced me to get up and walk around more at work, either to refill my water bottle or to use the restroom.

Weight Training
Every weight loss plan needs to incorporate resistance training of some sort. It reduces the amount of muscle loss that comes with being on a calorie deficit. You don't want to lose muscle because it's what burns fat, and by doing resistance training, you're telling your body, "Hey, we need these things. This isn't a fuel source." Pretty simple.

As before, I follow (a modified version of) the Stronglifts 5x5 plan, which is 3 workouts a week: squats every workout, alternating bench press/barbell row and overhead press/dead lift each workout. My modification is related to my running goals. Basically, I was following the SL5x5 program at the recommended pace, which is to add 5 lbs to each exercise every workout. The problem is that I run between the weight days, so I don't get the full 48 hours of recovery time. This led to patellar tendon pain at around 75 lbs on the squat and back pain at around 100 lbs (with the patellar tendon pain getting worse with each increase).

I'm just doing the weight training to accelerate the weight loss; the running is my real goal. So I decided to make a change. I first dropped the weight for all the lifts, and I completely dropped all the weight for squats (just the 45 lb bar). Then I decided to only increase the weight by 5 lbs every other workout for each exercise. Basically, I'm increasing at half the recommended speed. It seems to be working well so far. I don't have any back pain anymore (improving my squat and dead lift technique is helping here), and for the past week my patellar tendon has been pain-free as well, which is huge because I was afraid I'd have to stop the running entirely to let it heal.
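
In code form, the modified progression is just half the normal Stronglifts pace (this is my own sketch of the rule, not anything from the program itself):

```scala
// Add 5 lbs every *other* workout instead of every workout.
def workingWeight(startLbs: Double, workoutIndex: Int): Double =
  startLbs + 5.0 * (workoutIndex / 2) // integer division: bumps every 2nd workout

// Squats restarted at the empty 45 lb bar:
println((0 until 6).map(workingWeight(45.0, _)).mkString(", ")) // 45.0, 45.0, 50.0, 50.0, 55.0, 55.0
```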

I added bicep curls because I'm vain (and because a bigger bicep keeps my iPhone arm band in place when I'm running). I increase the weight by 2.5 lbs each week, but I'll be switching over to pullups as soon as I can do more than 3 in a row. :)

I also decided to start doing a pre-workout/warm up consisting of planks, side planks, back crunches (I don't know what these are really called), and stretching/warm up type stuff. I'm now at 1 minute on planks, 45 seconds on each side for side planks, and 20 back crunches, which I do mostly just to warm up my back muscles. I do all this 3 times in about 8 minutes, and I stretch during the time that I'm not doing planks. The planks have helped immensely with my core strength. I started at 45 seconds on normal planks and 30 seconds on side planks, and I increase everything by 5 seconds each week. When I'm done with all this (it takes about 20-25 minutes), I head to the garage for the barbell lifts, completely warmed up.

Overall, I feel a lot stronger. My muscles are more defined. And I no longer cringe when looking at myself in the mirror. I'll keep increasing the weight until I plateau either from the calorie deficit or from pain or injury. Hopefully the former.

Running
The inspiration for today's post: I ran a personal best 6.75 miles this morning. It was slow as hell, but it's still better than I've ever done. I did 5 miles barefoot, and the rest in my Luna sandals.

I outlined some of my running goals in an earlier post, but I basically wanted to be able to run a 10k in mid-April. So my training plan has been as follows. Twice during the week, I try to run about 3-4 miles, focusing on not wearing out the soles of my feet, and lately, focusing on increased pace. On Saturday, I focus on increasing my distance (regardless of pace). My initial plan was to increase by .25 miles each week, but I've actually been increasing somewhere around .5 miles.

It took a long time (I was at around 3 miles max distance at the beginning of the year), but I'm ecstatic with the results so far. I've already surpassed the distance needed for the 10k, which was a big mental barrier to break through. Now, I feel like with training (and hopefully about 20-30 additional lbs of weight loss), I might actually be able to do a half marathon at the end of this year or early next year.

One big thing that today's run taught me is that I should probably start carrying water with me. I'm thinking about getting a camelbak or some kind of belt, because I already feel like I carry too much crap in my hands (my sandals). My thought is to just strap it to my body somewhere and not worry about it until I'm thirsty. There might even be a pocket or something to store my sandals while I'm barefoot.

Putting it all Together
So, here's my schedule in condensed form:
  • Monday - Rest Day (And sometimes a cheat day on the diet. Nobody's perfect.)
  • Tuesday - Run 3-4 miles. Focus on pace and technique.
  • Wednesday - Lift. Plank workout and barbell exercises.
  • Thursday - Run 3-4 miles. Focus on pace and technique.
  • Friday - Lift. Plank workout and barbell exercises.
  • Saturday - Long run. Increase distance .25 - .5 miles. 
  • Sunday - Lift. Plank workout and barbell exercises. 
Eat 2200 calories a day. I try to eat as much protein as I can and the rest I split between carbs and fat, but I'm not fanatical about it.

Results so Far
It's been about 2.5 months since I started the lifting and diet, and I've lost almost 20 lbs. This was my weight loss goal for the 10k, and I've almost achieved it (a month early.) I'll keep chugging along and occasionally adjust stuff, but I'm pretty happy with the results so far. 

If it ain't broke, don't fix it. 

Talk to you next time.

Sunday, January 20, 2013

Blog<Running> - Electric Run

So, at the end of last year, I started looking for some fun 5k events for the new year. I wanted to give myself motivation to keep running through the cold, dark, wintry months, in preparation for the 10k that I'm doing in April.

One of those events was the Electric Run. I heard it described as equal parts running, Mardi Gras, and electronic dance party. After doing it last night, I'd say that's a pretty good description. (It was pretty family friendly though. There was alcohol at the finish line, but no bead/boob exchanges.) 

The Costume:
One of the cool parts of the event is that all the participants are encouraged to cover themselves with lights, and glowing things, and fluorescent clothing, and whatever else you can think of. The race started at 7PM and the darkness really makes all this stuff stand out. When I registered, I bought some glowing stuff that was being sold by the event organizers, and I figured I'd come up with some way to wear it all before the race. I got a package of "LED shoelaces", and a 3 yard light up wire... thing.

On race day, I opened my package of shoelaces and found 2 things.
  1. There was no obvious way to secure the "laces" to my bare feet. They were too short to tie around my feet, and the battery/switch part was a little too bulky (and I was afraid they would come undone and fly off my feet or something.)
  2. One of them didn't work.
Fortunately, I am somewhat experienced with electronics (I have a degree in an EE related field after all), so I disassembled the broken one and found the problem. One of the battery connectors had disconnected from the circuit board. Once I soldered the piece back on, I realized I could make some modifications that would allow me to wear it as some kind of electric finger... things. Long story short, I transformed it from this:

Works good for shoes I guess.

To this:
I've got the electric touch.
It turned out way better than I thought it would, and I think it looks pretty badass in the dark. When I got to the race, I taped the tubes to my fingers so that it looked like I had some kind of electric claws. If I had to do it again, I'd get some cheap black gloves and attach everything to them. After the run, and 2.5 hours of dancing, the tape stopped working so well.

The Race:
I got to the Cotton Bowl about an hour before the race kicked off, and I attached all my glowy things.  I made my way to the starting area and stopped for a pit stop at a row of port-a-potties. I think about half of the 6000 runners had the same idea because it took about 20 minutes to get to the front of the line.

After doing my business, I made my way to the starting line. A sea of thousands of glowing, flashing, dancing people. They split people into groups of about 1000, so I made my way into the middle of group 2. I had a few conversations about barefoot running as the people around me looked down.

At 7:05, it was group 2's turn to start. The crowd counted down from 10, and we were off! We all started off slowly. There were 1000 people of all different running abilities, and the course narrowed not too far past the starting line, so it was claustrophobic for about a third of a mile. Fortunately, no one stepped on my feet. :)

The run itself was great. The first corner had colorful umbrellas hanging from trees above and around the course. There were floating balls in the water along the running path. Everything was lit up, and there was some loud-ass EDM playing. After the first mile, we headed out into the parking lot and through some access roads around the state fair area.

Unfortunately, the second mile wasn't punctuated with as many flashing lights, glowing stuff, or loud music. The course also changed from smooth pavement to a rougher chip-seal-esque surface, where I really had to pay attention to my form. Occasionally there was some dirt and even some gravel to run through. Needless to say, mile 2 wasn't my favorite part of the night, though it was broken up by a water station at the halfway point of the 3.1 mile course.

The last third had some more light/musical stuff. Spotlights projecting moving images, big screens playing video, more of those hanging umbrellas, big inflatable archways with lights that were synchronized to the music. I started to pass the people who went out too fast at the beginning. I heard a lot of "whoa, that dude's barefoot" around this point. Unfortunately, the ground never got any better. It switched from rough concrete to that mixture of pebbles and cement that you sometimes see in areas that are designed for shod pedestrians. I crossed the finish line to cheers from the volunteers and made my way to the stage.

The Dance Party
After getting some water, I put my sandals on and made my way to the stage where there was a DJ playing more EDM. I danced for a bit and then remembered to check my phone. 7:55. I can only guess that I finished the course in about 35 minutes, but I have no way of knowing (there was no timing system). I walked over to the tents, bought a $6 Miller Lite, and walked around for a bit. I ended the night by dancing in front of the stage for the next 2 hours.

I talked to a lot of people about running barefoot. It's a pretty good conversation starter for someone who isn't great at starting them. :) I got a lot of positive reactions, a few concerns about glass, and a ton of "whoa, that's badass."

I also shot some crappy video on my phone. I apologize in advance for the shaky-cam footage. It's hard to keep your phone stable while holding it with taped fingers and running through a crowd of people.

Saturday, January 5, 2013



Hello, internet. I've decided to dredge this thing back up again. Long story short: I want to lose some weight that has crept back on over the past 2 years. It was a period that had its good moments, but it's mostly one that I want to forget.

So, my plan is to start keeping track of my fitness, my diet, and my weight loss (and whatever else I decide to track) in the hopes that the Internet Shame ™ will kick me into action again.

Part of it will be a meal and workout tracker and part of it will be goal tracking with (hopefully) weekly updates. Here are the 3 things I want to change the most, along with some goals for achieving the change:

Running
I want to be able to run farther, and I want it to be barefoot. I actually had a pretty good year last year. I did my first ever 5K with my mom and cousin, followed by another 5K a few weeks later, and a 6K (billed as a 5K, but it was at least a half mile more) trail run to end the year. All of them barefoot (and all of them with my mom). I have a 10K scheduled in April (also with Mom), and I want to be able to run the whole thing barefoot. That's basically my goal: barefoot 10K in April. I'll create a new goal after that point.

Weight Loss
I want to be down to 200 lbs. I had been hovering around 230 all of last year until the holiday season hit. I just weighed myself and I'm at 238. It's depressing after looking back over this blog and seeing what I went through 2 years ago to get down under 200 lbs. Lots of stuff contributed to the lapse, and I might get into some of it in this blog, but long story short is that I want to be in better physical shape. Partly because it will help my running goal, and partly because I want to look good in the mirror. My short term goal is to be at or under 220 by my April 10K. My goal by the end of July is to be 200 lbs. I'll reevaluate after that.

Less dependent on caffeine/sodas. I've been drinking about 2 cups of coffee a day and at least 1 soda, sometimes 2 or 3, for the past year. My goal for the duration of my weight loss is no soda (at all) and coffee only on the weekends. I like the way coffee tastes, but I don't want the dependency on the caffeine. This goal is mostly to prove that I have some self-control, and because I know that cutting out the extra calories will help me to reach my weight loss goal.

So, there it is. Starting 2013 with 3 attainable goals. My next post will outline my strategy for meeting the weight loss goal, as well as my initial food/workout entry. 

And then I'll probably cry in a corner because what the fuck have I been doing for 2 years?


Friday, May 27, 2011

Blog<Programming> Neural Networks - Part 3

Disclaimer: I'll be taking a departure from my usual weight loss/running posts this month to talk about my latest Scala project. So those of you who are not interested in programming or neural networks can feel free to tune out.

My first post on neural networks and genetic algorithms explained the basics. The second one showed some code and talked a little about the implementation of the XOR function. For my final post, I've created a simple game and trained a neural network to play it.

The Game
The game is simple. It consists of a square area that is populated with targets in random locations, and the player. The objective is to move the player such that it "picks up" (touches) some targets while avoiding others. Picking up a good target awards the player 3 points, while touching a bad target subtracts 5 points. The player is judged on how many points he can accumulate after 100 turns.
The above image shows what the board looks like. The black circle is the player. The green circles are the "good" targets (+3 points) and the red circles are the bad ones (-5 points). The light green diamonds are the good targets that have been picked up and the orange stars are the bad targets that have been touched.
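
As a sketch in code, the scoring boils down to something like this (the names here are mine, not the actual types from the project):

```scala
// +3 for picking up a good target, -5 for touching a bad one.
case class Target(x: Double, y: Double, good: Boolean)

def score(touched: Seq[Target]): Int =
  touched.map(t => if (t.good) 3 else -5).sum

// Two good targets and one bad one:
println(score(Seq(Target(0, 0, good = true), Target(1, 1, good = true), Target(2, 2, good = false)))) // 1
```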

The AI
The AI for the player consists of a neural network trained by a genetic algorithm. The network itself has a number of inputs, including the player's position, the player's last move, the closest 4 good targets (the dark green circles), and the closest 4 bad targets (the dark red circles). There are 2 hidden layers of about 20 and 5 neurons respectively. The output layer is 2 neurons: one for horizontal movement and one for vertical movement.

The game itself constrains how far the player can move on a turn, which means that the output neurons mostly just give a direction vector and not an actual offset position. However, if the magnitude of this direction vector is less than the maximum turn distance, the player will use that for its actual distance. This allows the player to potentially make fine-tuned moves.
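
A minimal sketch of that movement rule (the function and parameter names are illustrative, not the project's actual API):

```scala
// If the network's output vector is longer than the per-turn limit, treat it
// as a direction and scale it down; otherwise use it as-is, which allows
// fine-tuned short moves.
def constrainMove(dx: Double, dy: Double, maxDist: Double): (Double, Double) = {
  val mag = math.sqrt(dx * dx + dy * dy)
  if (mag <= maxDist) (dx, dy)
  else (dx / mag * maxDist, dy / mag * maxDist)
}

println(constrainMove(3.0, 4.0, 1.0)) // scaled down to magnitude 1
println(constrainMove(0.1, 0.0, 1.0)) // short move kept as-is
```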

The training algorithm ran for 1000 generations with a population size of about 500. The training data was a set of 4 randomly generated boards and 1 static board. The fitness of each individual is basically the total score across the boards, plus a small factor based on how quickly the player accumulates his score. This selects primarily for the highest score, but secondarily for the speed at which an individual can find the good targets.
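
Here's a rough sketch of that fitness idea (the weighting constant and exact shape are my assumptions; the real formula isn't shown here):

```scala
// Fitness = total score across a board's turns, plus a small bonus that
// weights points earned in earlier turns more heavily.
def fitness(turnScores: Seq[Int]): Double = {
  val total = turnScores.sum.toDouble
  val speedBonus = turnScores.zipWithIndex
    .map { case (s, turn) => s.toDouble / (turn + 1) }
    .sum * 0.01
  total + speedBonus
}

// Same total score, but the first player earns it earlier:
println(fitness(Seq(3, 0, 0)) > fitness(Seq(0, 0, 3))) // true
```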

The Results
The algorithm worked well. Here's a sample of the best individuals at the end of several generations:

This is basically the best individual out of a set of 500 randomly generated networks. As you can see, it does a pretty good job of avoiding the bad targets, but it gets stuck on the left side pretty quickly, not knowing where to go.

By the 20th generation, the best individual is a little better at picking up the good targets. But towards the end, it gets a little confused, oscillating back and forth as it sees different targets.

Generation 100 has stopped caring about the bad targets so much. It looks like it's preferring to move in a single direction for a number of turns before making a change. This probably has to do with the fitness function's secondary selector which is based on the speed at which the score is accumulated.

Here are links to generations 200 and 500. You can see the player getting better at quickly picking up the good targets.

By generation 1000 the player is almost able to get all of the good targets in the 100 turns. It is also reasonably good at avoiding the bad targets although there are some odd moments where it seems like it's deliberately picking them up.

Lessons Learned
You've probably noticed that the neural network only moves the player diagonally. This is largely because of the activation function, which limits each output to between -1.0 and 1.0: excessively large values come out around 1, while excessively low values come out around -1. Comparatively, the in-between range is a small target to hit. This means that the <-1,-1>, <-1,1>, <1,-1>, and <1,1> moves are somewhat selected for because they are the easiest for the network to attain. If I were to do it again, I'd probably drop the activation function entirely and just use the output neurons as a direction vector.
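
You can see the saturation effect with any squashing activation. I haven't named the exact function above, so tanh stands in here as a representative example:

```scala
// A tanh-style activation saturates quickly: moderately large inputs already
// land very close to +/-1, so corner moves like <1,1> are easy to produce.
def activate(x: Double): Double = math.tanh(x)

println(activate(0.5))  // mid-range output
println(activate(3.0))  // already ~0.995
println(activate(-5.0)) // already ~-0.9999
```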

You also probably noticed that there is a giant leap in ability from the first generation to the 20th and 100th generations, but only a smaller leap to the 500th and 1000th generations. This is because most of the improvement in a genetic algorithm happens quickly with only small refinements in later generations. I actually had to tweak the size of mutations so that smaller increments were possible as the generations increased.

Finally, the entire training period took about 6 hours on my quad-core desktop PC. You might think that's a long time, but just think about how long it might take to actually implement the decision logic in code. I was able to do something else for those 6 hours while my PC worked tirelessly toward a solution. The brain power in a neural network and genetic algorithm is in mapping the inputs and outputs to the problem at hand, and in figuring out how to determine the fitness of an individual. But once you have those things, the solution is found automatically by the algorithm. The results might not be perfect, but they can be pretty good.

Next Steps
I noticed a shortcoming of neural networks almost immediately. The outputs are a series of additions and multiplications. You can't divide, you can't take a square root, you can't loop over a dynamic-length list, and with an acyclic network, you can't store anything resembling a memory. You can get fairly good approximations for a lot of problems, but it can still be fairly limiting. My next area of research is something called Genetic Programming. It's the same process of evolving a population over a number of generations, but instead of the "chromosome" representing a neural network, it represents the source code itself. It is an actual runnable program that changes its structure and execution to improve on the original. And since it is just a program, it is not limited to the additions and multiplications that comprise a neural network.

And that's all for now. We'll be returning you to your regularly scheduled fitness talk next time. Thanks for bearing with me as I strayed from the path a little bit.

Sunday, May 8, 2011

Blog<Programming> Neural Networks - Part 2

Disclaimer: I'll be taking a departure from my usual weight loss/running posts this month to talk about my latest Scala project. So those of you who are not interested in programming or neural networks can feel free to tune out.


Last time, I talked about what neural networks are and what they might be used for. I also talked about a couple of the methods used to train them. If you haven't read it, I recommend you start there, because part 2 will focus heavily on the implementation.


I've been playing around with Scala for about 2 years now. I've got a couple of unfinished projects that I've been using to learn my way around. It seemed like a natural fit, and it's more fun than Java, so that's what I went with.

I started by separating the project into 2 core concepts. The first was the neural network implementation: basically, the collection of input neurons, the hidden layers, the output neurons, and the connections between neurons of sequential layers. The implementation of a neural network is actually pretty simple. It's basically a series of multiplications and sums. I tried to keep it simple so that it focuses solely on the calculation.
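
To illustrate the "series of multiplications and sums", here's a toy dense-layer computation (not the actual code from the project):

```scala
// One layer: each neuron is a weighted sum of the inputs plus a bias.
// (Activation functions are omitted to keep the arithmetic visible.)
def layer(inputs: Seq[Double], weights: Seq[Seq[Double]], biases: Seq[Double]): Seq[Double] =
  weights.zip(biases).map { case (neuronWeights, bias) =>
    neuronWeights.zip(inputs).map { case (w, x) => w * x }.sum + bias
  }

// 2 inputs feeding 2 neurons:
val out = layer(Seq(1.0, 0.0), Seq(Seq(0.5, -0.5), Seq(1.0, 1.0)), Seq(0.0, -1.0))
println(out) // List(0.5, 0.0)
```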

The second was the mechanism by which the neural network learns. For this, I implemented both backpropagation and a genetic algorithm. However, I soon realized that the genetic algorithm could be used for other purposes besides just training a neural network. So I pulled that out into its own module.

The final structure consists of 3 parts:
  • The neural network.
  • The genetic algorithm.
  • The neural network learning mechanism

The learning mechanism can further be organized into:
  • Backpropagation learning
  • Genetic algorithm learning

Most everything is implemented as Scala traits, making it easy to mix and match different implementations.

The Code

So, that's the boring stuff; let's see some code (to see more, you can check out the repository on GitHub). The XOR function is used in just about every example out there on the web when looking for problems to solve with a neural network. It's well-understood, simple, and non-linear. And it's also easy to show example code in a blog. :)

This first example shows how to set up a neural network that learns how to calculate the XOR function via backpropagation:

object XORBackProp {
  def main(args: Array[String]): Unit = {
    val inputKeys = Seq("1", "2")
    val outputKeys = Seq("out")
    val hiddenLayers = Seq(4, 2)

    val testData = IndexedSeq(
      (Map("1" -> 1.0, "2" -> 0.0), Map("out" -> 1.0)),
      (Map("1" -> 0.0, "2" -> 0.0), Map("out" -> 0.0)),
      (Map("1" -> 0.0, "2" -> 1.0), Map("out" -> 1.0)),
      (Map("1" -> 1.0, "2" -> 1.0), Map("out" -> 0.0)))

    val network = new Perceptron(inputKeys, outputKeys, hiddenLayers)
      with BackPropagation[String] with StringKey

    // Initialize weights to random values
    network.setWeights(for (i <- 0 until network.weightsLength) yield { 3 * math.random - 1 })

    var error = 0.0
    var i = 0
    val learnRate = 0.3
    val iterations = 10000
    while (i == 0 || (error >= 0.0001 && i < iterations)) {
      error = 0

      // Alternate the order of the test data on each pass
      val dataSet = if (i % 2 == 0) testData else testData.reverse
      for (data <- dataSet) {
        val actual = network.calculate(data._1)("out")
        error += math.abs(data._2("out") - actual)
        network.train(data._2, learnRate)
      }
      if (i % 100 == 0) {
        println(i + " error -> " + error
          + " - weights -> " + network.getWeights
          + " - biases -> " + network.getBiases)
      }
      i += 1
    }

    println("\nFinished at: " + i)

    for (data <- testData) {
      println(data._1.toString + " -> " + network.calculate(data._1)("out"))
    }
  }
}

The key is this line:

val network = new Perceptron(inputKeys,outputKeys,hiddenLayers)
with BackPropagation[String]

It sets up a simple neural network (Perceptron) with the inputs, outputs, and the number of neurons in each hidden layer. The BackPropagation trait gives it the ability to learn using backpropagation.

The rest of the network initialization is configuration for the backpropagation. "learnRate" determines the amount to change the weights based on the error from the test data. Finally, at the end, we are printing the results of the neural network when run against the test inputs:

Map(1 -> 1.0, 2 -> 0.0) -> 0.9999571451716337
Map(1 -> 0.0, 2 -> 0.0) -> 4.248112596677567E-5
Map(1 -> 0.0, 2 -> 1.0) -> 1.0000125509003892
Map(1 -> 1.0, 2 -> 1.0) -> 1.0286171998885596E-7

And here's a graph showing the error versus backpropagation iterations for a few different executions.

Notice that run 2 never reached an acceptable error. This is because of the local minimum problem with backpropagation. Fortunately, each of these runs took about a second, so it's relatively easy to just restart the training until you get an acceptably close solution. This may not be the case for every problem, however.

This second example shows how to set up an XOR neural network that learns via a genetic algorithm:

object XORGeneticAlgorithm2 {
  def main(args: Array[String]): Unit = {
    val xorTestData = IndexedSeq(
      (Map("1" -> 1.0, "2" -> 0.0), Map("out" -> 1.0)),
      (Map("1" -> 0.0, "2" -> 0.0), Map("out" -> 0.0)),
      (Map("1" -> 0.0, "2" -> 1.0), Map("out" -> 1.0)),
      (Map("1" -> 1.0, "2" -> 1.0), Map("out" -> 0.0)))

    val popSize = 1000 // The number of individuals in a generation
    val maxGen = 100   // Number of generations

    // Anonymous type that extends PagedGANN, which is an implementation of
    // GANN (Genetic Algorithm Neural Network)
    val gann = new PagedGANN[WeightBiasGeneticCode, String, Perceptron[String]]()
        with ErrorBasedTesting[WeightBiasGeneticCode, String, Perceptron[String]]
        with GAPerceptron[WeightBiasGeneticCode, String] {

      override def getTestData = xorTestData
      override val inputKeys = Seq("1", "2")
      override val outputKeys = Seq("out")
      override val hiddenLayers = Seq(6, 3)

      override val populationSize = popSize

      override def mutationRate: Double = 0.25
      override def mutationSize: Double = 0.025 + 2.0 * math.max(0.0, (50.0 - getGeneration) / 1000.0)
      override def crossoverRate: Double = 0.9
      override def elitistPercentile = 0.02
      override def minNeuronOutput: Double = -0.1
      override def maxNeuronOutput: Double = 1.1

      override def concurrentPages = 4

      override def setupNetworkForIndividual(network: Perceptron[String], individual: WeightBiasGeneticCode) {
        // Copy the individual's weight/bias chromosomes into the network
      }

      override def stopCondition(): Boolean = {
        val gen = getGeneration
        val topFive = getPopulation(0, 5)
        val bottomFive = getPopulation(populationSize - 5)
        println(gen + " -> " + topFive + " -> " + bottomFive)
        (gen >= maxGen || topFive.head._2 >= 1000000)
      }

      override def generateHiddenNeuronKey(layer: Int, index: Int): String = {
        layer + ":" + index
      }
    }

    // Genetic code is 2 chromosomes (1 for weights, 1 for biases)
    val initialPop = for (i <- 0 until popSize) yield {
      val wChrom = new ChromosomeDouble((0 until gann.weightsLength).map(i => 20.0 * math.random - 10.0).toIndexedSeq)
      val bChrom = new ChromosomeDouble((0 until gann.biasesLength).map(i => 2.0 * math.random - 1.0).toIndexedSeq)
      (new WeightBiasGeneticCode(wChrom, bChrom), 0.0)
    }

    // Setup the genetic algorithm's initial population

    // Train the network
    val network = gann.trainNetwork()

    // Print the result
    for (data <- xorTestData) {
      println(data._1.toString + " -> " + network.calculate(data._1)("out"))
    }
  }
}

Once again, the important part is the anonymous type that extends from PagedGANN. This is an extension of GANN, which is the marriage between the genetic algorithm and the neural network. The PagedGANN can take advantage of machines with multiple processors to calculate fitness and create each new generation.

The various overridden defs tweak the genetic algorithm slightly. For instance, mutationRate determines how frequently an individual might be mutated while creating a new generation. Likewise, mutationSize determines the maximum change of a network weight if it is mutated. Here's what gets printed at the end of the learning phase:

Map(1 -> 1.0, 2 -> 0.0) -> 1.0014781670169086
Map(1 -> 0.0, 2 -> 0.0) -> 2.588504471438824E-5
Map(1 -> 0.0, 2 -> 1.0) -> 0.9994488547053212
Map(1 -> 1.0, 2 -> 1.0) -> -2.3519634709978643E-5

And here's a graph showing the error versus generations for a few different executions.

For the most part, they all reach an acceptable approximation of the XOR function. Some take longer than others and some reach a better solution, but the random nature of a genetic algorithm can help to avoid the local minimum problem. It should be noted that it's possible that a genetic algorithm might not solve the problem at all, and that it can take some tweaking of the parameters to get it to produce a good solution.
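
To make the mutation knobs concrete, here's a sketch of what mutationRate and mutationSize mean for a chromosome of doubles (this mirrors the description above, not the library's exact implementation):

```scala
import scala.util.Random

// Each gene mutates with probability `rate`, by a uniform nudge of at
// most +/-`size`.
def mutate(genes: IndexedSeq[Double], rate: Double, size: Double, rnd: Random): IndexedSeq[Double] =
  genes.map { g =>
    if (rnd.nextDouble() < rate) g + (rnd.nextDouble() * 2 - 1) * size
    else g
  }

val rnd = new Random(42)
// With rate = 1.0, every gene gets nudged by at most the mutation size (0.025 here):
println(mutate(IndexedSeq(0.0, 1.0, -1.0), rate = 1.0, size = 0.025, rnd))
```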

Next Time

The XOR function is well and good for explaining the basics, but it's also boring as hell. I've cooked up a better example of a neural network in action, which I'll be posting very soon.

Stay tuned!

Thursday, March 31, 2011

Blog<Programming> Neural Networks - Part 1

Disclaimer: I'll be taking a departure from my usual weight loss/running posts this month to talk about my latest Scala project. So those of you who are not interested in programming or neural networks can feel free to tune out.

Neural Networks

Last year, I participated in a Google-sponsored AI contest. The point of the contest was to create a computer-controlled player in a competitive game based on Galcon (there's lots more info at the link I posted). The server played semi-random games against all the entries and ranked them using the Elo ranking system (used in chess). I ended up coming in 43rd in the fierce but friendly competition. It was an awesome experience, and I will definitely be competing again this year if I have the time.

One of the things I learned from the contest was that I knew absolutely nothing about artificial intelligence. Instead of teaching a program how to play the game, I basically studied it for strategies and implemented the logic directly in the program. From a few contestants' post-mortems, I would bet that most of the contestants did something similar. There were a few exceptions (such as the winner), and there was also an entry based on a genetic algorithm (which I believe finished around 200th).

After the contest, I took a shallow dive into some AI programming techniques and came away with a desire to learn more about neural networks. Mostly because they are easy to understand and implement, but also because they can be used to solve some interesting and difficult problems. They are useful because they can learn to solve a problem based on inputs with known solutions. Then, for inputs without a known solution, they can make predictions based on the previously learned behavior.

So what is a neural network?
Neural networks, as you might infer from the name, are based on the cellular structure of the brain and nervous system. A neuron is essentially a cell that acts as an on-off switch. It has a number of incoming connections called dendrites that carry an electric signal. Depending on the signals of all the dendrites, the neuron may or may not send a signal itself (turn on or off). The output of the neuron then connects to other dendrites, causing other neurons to turn on or off, thus forming a network structure. That's the simplified view anyway.

In computer science, this structure is called an artificial neural network (ANN), but for the purposes of this article, when I say "neural network," I'm referring to the artificial (and not the biological) version.

In a neural network, the most basic component is the Neuron. It traditionally has a number of inputs, a weight for each input, and an activation function to determine when the neuron should output a value.

In the above diagram, the output is calculated by first multiplying each input by its corresponding weight, and then summing these weighted values over all the inputs. This sum is then passed into the activation function f(x). A higher weight value gives an input a greater importance for determining the output. Conversely a lower weight means less importance.
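That calculation can be sketched in a few lines of Scala. This is an illustrative version of the neuron described above (the names are mine, not the actual implementation from Part 2):

```scala
// A minimal neuron: multiply each input by its weight, sum the results,
// and pass the sum through an activation function.
case class Neuron(weights: Seq[Double], activation: Double => Double) {
  def output(inputs: Seq[Double]): Double = {
    require(inputs.size == weights.size, "one weight per input")
    val weightedSum = inputs.zip(weights).map { case (i, w) => i * w }.sum
    activation(weightedSum)
  }
}

val neuron = Neuron(Seq(0.5, -0.25), math.tanh)
neuron.output(Seq(1.0, 2.0))  // tanh(1.0 * 0.5 + 2.0 * -0.25) = tanh(0.0) = 0.0
```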

The activation function basically takes a value and outputs another value. Generally it is used to normalize the summed input value to a range of (-1,1) or maybe (0,1). Sometimes this is a step function, meaning that if the input value is above a certain threshold, the output will be 1, otherwise it will be zero. Most neural networks use the sigmoid function because it is continuous and normalizes any input to the range (-1,1). So, if the sum of the weighted inputs is something like 20, the activation function will output ~1. If the sum is between -1 and 1, the output will be close to the original value.
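Here are the two activation functions just described, as a sketch. I'm using the hyperbolic tangent for the sigmoid since it's the one that squashes into (-1,1); the classic logistic sigmoid squashes into (0,1) instead:

```scala
// Step activation: 1 above the threshold, 0 otherwise.
def step(x: Double, threshold: Double = 0.0): Double =
  if (x > threshold) 1.0 else 0.0

// Sigmoid-shaped activation via tanh: squashes any input into (-1, 1).
def sigmoid(x: Double): Double = math.tanh(x)

step(20.0)     // 1.0
sigmoid(20.0)  // ~1.0
sigmoid(0.5)   // ~0.46, close to the original value for small inputs
```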

A network is created by connecting the output of several neurons to the inputs of other neurons.
As you can see, the outputs of the neurons on the left (I1, I2, and I3) are fed into the inputs of the neurons on the right (O1 and O2). The final result is the output of the 2 neurons on the right.

This type of network is called a feedforward network because there are no circular loops connecting any of the neurons. There are a lot of variations to how you connect the neurons to form a network, however the most common addition is something called a hidden layer. This is basically a set of neurons that sits between the input and output. The hidden layer provides a degree of abstraction and additional weights that can aid in the learning process.
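A feedforward pass through such a network is easy to sketch: each neuron in a layer reads the full output of the previous layer, and running the whole network is just feeding each layer's result into the next. This assumes tanh activation throughout, and the names are illustrative:

```scala
// A layer is a list of neurons; each neuron is a list of weights
// (one weight per input from the previous layer).
def feedForward(inputs: Seq[Double], layer: Seq[Seq[Double]]): Seq[Double] =
  layer.map { weights =>
    math.tanh(inputs.zip(weights).map { case (i, w) => i * w }.sum)
  }

// input -> hidden -> ... -> output is a fold over the layers.
def run(inputs: Seq[Double], layers: Seq[Seq[Seq[Double]]]): Seq[Double] =
  layers.foldLeft(inputs)(feedForward)
```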

A neural network's purpose is to solve a problem. If you just create a random network of neurons and input weights, you won't get very good results. However, there are a number of techniques for teaching a network to provide a better solution to the problem. A network "learns" by computing a result for a given input, determining how close the result is to the desired answer, and then adjusting its weights to hopefully give a better result the next time it is computed. This process of calculate -> adjust weights -> calculate, is performed many times until the desired result is achieved.

Backpropagation
This is a fancy name that really just means determining the error of the result, and then working backward to reduce the error at every neuron in the network. In practice it's a somewhat complicated process, involving not-so-fun math (for the layperson anyway; I'm sure mathematicians get a kick out of it). Fortunately, backpropagation was figured out a long time ago, so all of us hobbyist computer scientists have some conveniently tall giants to stand on.

Backpropagation is useful when you have a training set with known output values. For instance, the exclusive-or (XOR) function has 2 inputs and one output. Whenever one input has a value of 1 and the other is 0, the output is 1. Conversely, if the inputs are both 1 or both 0, the output is 0. Since we know the desired output values, we can easily determine a network's error by subtracting the generated result from the known output.
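The XOR training set and its error measure can be written down directly. I'm using the sum of squared differences here, which is a common convention (the post itself just describes the plain difference):

```scala
// The four known input/output pairs of the XOR function.
val xorTrainingSet = Seq(
  (Seq(0.0, 0.0), 0.0),
  (Seq(0.0, 1.0), 1.0),
  (Seq(1.0, 0.0), 1.0),
  (Seq(1.0, 1.0), 0.0)
)

// Total error of a network (here just a function from inputs to a result):
// sum of squared differences from the known outputs.
def totalError(network: Seq[Double] => Double): Double =
  xorTrainingSet.map { case (inputs, expected) =>
    val diff = expected - network(inputs)
    diff * diff
  }.sum
```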

Backpropagation can be a useful way to teach a neural network, but it has a few limitations. The first is that it will sometimes train the network into a suboptimal solution (getting stuck in a so-called local minimum). The second is that it cannot teach a network at all when you don't know the optimal output for your test/training data.

Take the Galcon game as an example of the second problem. On any given turn, you have a list of planets, a list of enemy planets, a list of neutral planets, incoming fleets, planet growth rates, distances, etc... Every decision you make on this turn will affect future turns' decisions. There are so many variables that it would be nearly impossible to determine the perfect action to take for every situation. In problems like this a good way to train a neural network is to directly compare it against another network. This is the idea behind the second way to teach a network, genetic algorithms.

Genetic Algorithms
Genetic algorithms are based on the natural evolutionary processes of selection (based on fitness), mating, crossover, and mutation. The basic process is to create a population of genetic sequences (chromosomes) that correspond to parts of the problem's solution. This population is then used to create a new generation, which is tested for fitness (how well it solves the problem). Over a number of generations, the top individuals will be able to provide very good solutions to the problem.

In the case of a neural network, a chromosome can be represented as the network's collection of input weights. Each chromosome (list of weights) is tested and assigned a fitness value. After all chromosomes are tested, they are sorted so that the ones with the highest fitness values are first. To create a new generation, these chromosomes are selected as "mates", with a preference given to the ones that appear highest in the list (the ones with the highest fitness). The two mates combine their chromosomes in a process called crossover to produce a "child" chromosome. The resulting "child" may be further changed by mutation.
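Crossover and mutation on chromosomes-as-weight-lists can be sketched like this. Single-point crossover and per-weight mutation are assumptions for the sake of the example; the actual scheme may differ:

```scala
import scala.util.Random

// Single-point crossover: take the head of one parent's weights
// and the tail of the other's.
def crossover(a: Vector[Double], b: Vector[Double], rng: Random): Vector[Double] = {
  val point = rng.nextInt(a.size)  // pick a split point
  a.take(point) ++ b.drop(point)
}

// Mutation: each weight has a `rate` chance of shifting by up to +/- `amount`.
def mutate(c: Vector[Double], rate: Double, amount: Double,
           rng: Random): Vector[Double] =
  c.map { w =>
    if (rng.nextDouble() < rate) w + (rng.nextDouble() * 2 - 1) * amount
    else w
  }
```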

By the end of a number of generations, the top chromosomes should provide good solutions for the network.

This type of learning also overcomes the problems with backpropagation, but it requires a much longer time-frame to complete.

And that's the overview of what neural networks are. This post was pretty light on details, so if you are interested in creating your own, here are some more resources that I found useful:

In the next part, I'll provide some Scala source code for my implementation, as well as some examples of both backpropagation and genetic algorithm learning.

Sunday, February 27, 2011

Blog<Resistance Training> Coming Around

I've mentioned a couple times that resistance training is a necessary part of weight loss, but that I only do it begrudgingly. Well, I haven't totally changed yet, but I'm starting to come around. First, a little back story.

The History
My first experience with weight training was in High School. As a part of our off-season training for the swim team, our coach had us lifting weights 3 days a week. We did a fairly standard set of exercises (bench press, military press, maybe squats, ...), all for 3 sets of 12 reps. The weight was selected such that you lifted enough weight to fail on the last 1 or 2 reps of the 3rd set. You basically went up in weight whenever you felt like you could. Not really any rhyme or reason.

The problem is that this is not fun. Failing on the last rep is painful and requires a spotter to prevent an almost guaranteed injury. Furthermore, progress is slow. We didn't really have any goals and no plan to achieve them even if we did. As a result, I think I maybe went up from 90 lbs to 110 lbs on the bench press in the course of 3 months. By the end, I didn't really feel stronger.

Throughout college and up through last year, my resistance training was mostly the same. Except that instead of using the barbell for things like the bench press or squat, I used machines. Mostly because spotters are hard to come by, but also because I just didn't realize that machines are so bad for you. Of course, even if I had known, a lot of gyms these days don't have a big section devoted to free weights.

Finally, about 6 months ago, I realized that I wasn't getting any stronger at the gym (but it wasn't really a goal either) so I decided to just do body weight exercises at home. I started off doing a whole bunch of stuff: planks, side planks, pushups, dips, military press, squats, curls, and calf raises. I did 3 circuits, doing each exercise for between 12 and 16 reps. I did all of this in about 30 minutes. It was as much of a cardio workout as it was a resistance workout. I got about the same results (if not better) than I did after 6 months in the gym.

At some point though, I realized that the only way to push myself was to increase the volume of exercises. Given that my free time is finite, this meant splitting exercises across different days and cutting some entirely. I focused on the ones that worked the most muscles. By the end of January, I was up to 5 sets of 17 pushups, 5 sets of 25 squats, and 5 sets of 45 second planks (and 35 seconds left and right side planks), as well as curls, military press and dips on alternating workouts.

It didn't take long before I realized that I'd have to continue to increase volume to keep improving. Either that or go back to the old way of doing things: adding weight.

So I did what I always do when I'm looking for a more efficient way to reach my goals.

The Research
I really just wanted to keep getting stronger. It would seem to make sense that strength goes hand in hand with muscle size, which (as I've written about previously) goes hand in hand with weight loss. Strength is easy to measure. How much weight can you lift for how many reps? Muscle size and body fat % are a little more difficult to measure, and change so gradually that they're hard to use as a motivator.

So with that in mind, I researched how to get stronger. I stumbled across this article in Men's Journal which has been adorned with a fairly hyperbolic title ("Everything You Know About Fitness is a Lie"), but contains a lot of good information. It made me start to realize just why I had spent so much effort and time on various machines with so little to show for it.

For starters, strength should transfer to your life outside the gym. So, why are there so many machines and exercises devoted to training 1 or 2 specific muscles in a controlled rigid motion? Do the bench press on a machine and you'll get better at using that machine, but how much does it transfer to real life? Considering I'd managed to gain strength going from the gym to pure body weight exercises, I'd say not much.

I started reading articles about barbell exercises like the squat and deadlift. I'd never even heard of the deadlift before, but of all the exercises I've tried it's the one that directly translates to my actual life. Imagine something heavy on the floor. Now, pick it up safely. What could be more useful? The squat, I read, involves just about every major muscle group in the body. I realized that the military press (which is seated) is inferior to the overhead press (which is done standing) because it restricts motion and doesn't require you to stabilize your core.

I also learned that the key to getting strong fast is to lift heavy weights, and to keep lifting heavier every workout. No more 3 sets of 12-16 reps of the same weight for a month. Endurance training has its place, but it doesn't get you stronger. And finally, I learned that you don't have to work out to muscle failure to build muscle. In fact, focusing on fewer exercises and doing them well will give you the best results, and means you don't need to spend a lot of time in the gym. Especially while starting out.

The Plan
So, naturally I was itching for a way to use this information, but I didn't really have much experience with the barbell. And some of these exercises (deadlift and overhead press), I'd never done in my life.

My first step was finding a plan. The one I found has you starting with low weights (the 45 lb. bar) and gradually increasing by adding 5 lbs on every workout. It's called StrongLifts 5x5, and the website has a ton of information on what it is and how to do the 5 exercises involved. There is also a free e-book that you can find if you look around the site a little.

The basic gist of the program is that you lift 3 times a week. Each exercise is 5 sets of 5 reps (with 1-5 minutes of rest between sets). Each workout you increase the weight by 5 lbs. You do 3 exercises each day, squatting every day and alternating bench press and barbell row with overhead press and deadlift (the deadlift is only 1 set of 5 reps, since you're already getting a leg workout from the squats). It's simple, you see results for a long time, and starting off at a low weight gives you time to learn the proper technique on all the exercises.

So, I figured out what I needed to get strong; now I just needed access to the weights. I contemplated going to the gym. The problem is that I've gotten used to all the free time that working out at home has given me. So I went onto craigslist and got an olympic barbell, a squat half rack, a bench, and weights (and an assortment of dumbbell and curl-bar weights that I'll probably never use...) for $250. And now a big chunk of my garage is devoted to something that I never thought I'd have... or even want.

Well, it's been a little over a week since I started and it feels pretty good. I've done 4 workouts and I've changed my diet to focus on lots of protein. Who knew hard-boiled eggs were so good (and so good for you)? It's weird going from a diet where you limit your calories to lose weight to a diet where you need excess calories to gain muscle mass, but I'm slowly transitioning.

I've decided to keep a workout journal to track my progress. Notice the low starting weight, the squats every workout, and the increase in weight every time.

At this point, I'm not sure how this will affect my running goals, but I don't have any intention of sacrificing one for the other. I'm sure that adding more weight will probably not make me a more efficient runner, but on the other hand, all of my increased mass will be in the form of muscle (if I do it right), which I think would probably be helpful. But the thing I'm most hopeful for is that muscle mass increases metabolism. One of my goals for this year is to show off a 6 pack, and you can't do that without dropping overall body fat %.

Isn't it nice when your goals reinforce each other?

And on that note, I think I'll stop. Once again, I've written a wall of text, so congrats (and apologies) if you made it this far. I plan on keeping the workout journal up to date, so you can follow it if you want. I'm sure I'll be updating in a month or 2 with an assessment of the plan (and probably pictures), so you can look forward to that as well. Talk to you later.