“If all you have is a hammer…” so the old adage goes. As it turns out, I have a nice big one, and a corresponding anvil. One is called Markov Chain Monte Carlo (MCMC), and the other is called the method of Maximum Likelihood. I’m going to talk about the first one today, then later on I’ll explain how the two can come together to form ~~voltron~~ a generalized method to estimate almost anything.

Monte Carlo modeling is actually really simple when you think about it. Often, you’d like to know the result of some process or experiment for which a general or analytical solution is really hard to come by. Let’s say, for example, you want to know what will happen if you add an extra road to a city, or change the configuration of some intersections. Let’s make it interesting – say we have control over the light timing of all the intersections in a city, and we’d like to know how to increase traffic throughput, making the roads more efficient.

So, one way to do this is to try to model the traffic as a fluid, or somesuch, and attempt to make all sorts of assumptions about the average traffic density and how it affects the way that fluid flows. On the other hand, this is likely to lead you to assume or approximate a whole bunch of totally unknown dynamics – you’re essentially trying to guess what the emergent behavior of traffic will be before you’ve really understood it. A much ~~easier for the ignorant~~ more elegant approach is to model a single car and describe its behavior with a set of parameters that, while variable, nonetheless reference better-understood behaviors. Things like the top speed this car will go if uninterrupted, the set of places it may be trying to get to, how close it is willing to be to another car, and how likely it is to switch lanes if the next lane is moving faster, can all be quantified. You don’t necessarily know the real values at this stage, of course, but you write them in as variables. We call these the Input Parameters, and they might include other things of interest, like which day of the week is being modeled or whether it is raining, which in turn might set other internal variables.
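In code, a bundle of Input Parameters for one car might look something like the sketch below. Every name and value here is invented purely for illustration, not taken from any real traffic model:

```python
from dataclasses import dataclass

# Hypothetical Input Parameters for a single simulated car.
# All names and numbers are illustrative placeholders.
@dataclass
class CarParams:
    top_speed: float          # m/s, speed if the road ahead is clear
    destination: tuple        # (x, y) grid cell the driver is heading to
    min_gap: float            # meters the driver keeps to the car ahead
    lane_switch_prob: float   # chance per step of jumping to a faster lane

params = CarParams(top_speed=30.0, destination=(12, 7),
                   min_gap=2.0, lane_switch_prob=0.1)
```

The point is simply that each unknown gets a named slot you can vary later, rather than a hard-coded guess buried in the simulation.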

Then, you use all these parameters as input into a simulation. Here’s where the “Monte Carlo” part comes in. See, “Monte Carlo” is several things: a casino in Monaco, the region surrounding it, and a technique that uses random chance to generate synthetic data sets or histories. Don’t panic – Monte Carlo analysis is easier than learning to play solid poker, and you’ll probably make more money because of it. Basically, to do it we begin by simulating one experiment – in this case, one “day of traffic.” Using our favorite code base, we take all of the Input Parameters and generate a set of hypothetical cars. Let’s say they begin in random places and proceed according to the rules we set out above. “But where does the gambling come in?” I hear you cry. Well, in two places. First of all, my hypothetical traffic has a lot of random elements in it. Where the cars start out, how quickly one driver might step on the gas, or where accidents happen that day all contain randomness. So to simulate the entire “experiment” and come out with the whole behavior of traffic, we will need to roll some dice. In the field of high-energy astrophysics, where I first cut my teeth on this technique long ago, the randomness was all about the angle from which a photon from space entered the telescope and where it interacted with the materials within.
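To make the dice-rolling concrete, here is a deliberately minimal sketch of one such simulated “day of traffic”: a toy circular road where cars advance one cell at a time unless blocked or randomly hesitating. All the specifics (road length, braking probability, the checkpoint at cell 0) are invented for this example, and real traffic simulators are far richer:

```python
import random

def simulate_one_day(n_cars=30, road_len=100, steps=500,
                     brake_prob=0.1, seed=None):
    """One Monte Carlo 'day of traffic' on a toy circular road.

    Each step, a car advances one cell if the cell ahead is free,
    unless the driver randomly hesitates (probability brake_prob).
    Returns how many cars passed cell 0, our checkpoint.
    """
    rng = random.Random(seed)
    positions = rng.sample(range(road_len), n_cars)  # random start cells
    crossings = 0
    for _ in range(steps):
        occupied = set(positions)
        moved = []
        for pos in positions:
            ahead = (pos + 1) % road_len
            if ahead not in occupied and rng.random() > brake_prob:
                if ahead == 0:
                    crossings += 1   # a car just passed the checkpoint
                moved.append(ahead)
            else:
                moved.append(pos)
        positions = moved
    return crossings

print(simulate_one_day(seed=1))  # throughput for one simulated day
```

Run it twice with different seeds and you’ll get different answers – that’s the randomness of a single “experiment” at work.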

Okay, so you’ve input your parameters defining the world and cooked up a simulation that turned that into one hypothetical experiment (in this case, looking at a day of traffic and seeing how many cars went where). Sounds like a result, right? Well, the careful reader will notice that I said the “gambling” happened in two places. You see, in a choice between being a player and a casino, you’d always choose to be the casino. This isn’t just because the odds are slightly in its favor, but also because it plays thousands of “games” an hour, such that its winnings are basically averaged out. Even with a slight edge against you, as a player you could win or lose, but as the house, you basically always win over time thanks to the law of large numbers. Explaining exactly why may be the work of another post, but for now, suffice to say that when you run a Monte Carlo simulation, you also want to be the “house” and do many simulations to “average out” your result. In practice, this usually means picking some observable quantity (e.g. the number of cars passing a particular point in an hour) and then running many simulations to find a good mean. And as they often say, a good mean is hard to find.
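To see why being the “house” pays, here is a tiny sketch with a made-up game whose odds are invented for the example: any single play is a coin flip, but the average over many plays settles down near the true expected value:

```python
import random

def play_once(rng):
    # A toy game with a slight house edge: the player wins $1 with
    # probability 0.49 and loses $1 otherwise (odds invented here).
    return 1 if rng.random() < 0.49 else -1

rng = random.Random(42)
one_game = play_once(rng)                  # a single play: +1 or -1, who knows
results = [play_once(rng) for _ in range(100_000)]
mean_result = sum(results) / len(results)  # settles near the expectation, -0.02
print(one_game, round(mean_result, 4))
```

The same logic applies to the traffic simulation: one simulated day is noisy, but the mean of your chosen observable over many simulated days is a stable, usable number.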

So here we have it – input parameters go in, random “world simulation” gets driven by them and iterated many times, nice average values come out. And we didn’t need to know beforehand how the whole system worked, just individual bits. But how will we know that our input parameters were right? Or, more to the point, what if those parameters were *the very thing we wanted to know in the first place*? Well, the answer comes when you bring the hammer called “Monte Carlo” together with the anvil called “Maximum Likelihood” and pound out some real-world estimates. Stay tuned for more about that in an upcoming post.