# Grand Theft Autocorrelation

In my last post, I discussed the concept of correlation. In my free time since then, I’ve been playing a lot of Grand Theft Auto V. It’s time to merge these noble pursuits.

As you may know, GTA V includes an online stock market that allows players to invest their ill-gotten gains in fictitious companies. Naturally, a Reddit user has created a continuously updating database of stock market prices. I play on an Xbox 360, so I’ve analyzed the Xbox prices. I’ve developed a strategy that will earn money in the long run, but first let’s do some learning.

As we previously discussed, correlation is a measure of how well the highs of one series line up with the highs of another series. Autocorrelation is a measure of how well the highs of a series line up with the highs of a lagged copy of the same series, i.e. how well each observation lines up with the one before it.
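In code, autocorrelation is just ordinary correlation between a series and a shifted copy of itself. Here's a minimal Python sketch (illustrative only; the random walk below is simulated, not GTA data):

```python
import random

def autocorrelation(series, lag=1):
    """Lag-k autocorrelation: the correlation of a series with a
    copy of itself shifted by `lag` observations."""
    x = series[:-lag]   # original observations
    y = series[lag:]    # the same series, shifted by `lag`
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# A slowly drifting "price" series is strongly autocorrelated.
random.seed(1)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))
print(autocorrelation(prices))  # close to 1 for a random walk
```

A price series that wanders like this has lag-1 autocorrelation near 1, which is exactly the property a trading strategy can try to exploit.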

# What is Correlation?

Besides knowing that it isn’t causation, many pedantically smart laymen don’t know what correlation actually is. I’m here to fix that with a mathematical definition, an intuitive explanation and a brief philosophical comment.

### Intuitive Explanation

Correlation is a quantitative measure of how well the highs line up with the highs and the lows line up with the lows between two arrays of numbers. Correlation always falls between -1 and 1, inclusive. A correlation of 1 indicates a perfect, positive linear relationship between two variables. A correlation of zero indicates no linear relationship. A correlation of negative one indicates a perfect, negative linear relationship.

### Mathematical Definition

This value can be computed in Excel through the CORREL() function, but stepping through the formula enhances the understanding. Feel free to skim this part and jump to the applications and philosophy sections. Mathematically, correlation can be computed as follows:
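For reference, the standard sample (Pearson) correlation of two arrays $x$ and $y$ is:

$r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}$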

# Using Game Theory to Predict if the U.S. will Bomb Syria

I’m good at some things, and foreign policy isn’t one of them. I have no idea if Syria actually used chemical weapons, how the U.S. should respond to any potential attack or the probability of attacks spilling over into larger conflicts. I don’t understand Russia’s incentives or how they will react to any acts of aggression. However, I have created an Excel workbook that calculates mixed-strategy equilibria, and I will use it as a game theory example.

As a study, game theory uses mathematical models to predict the actions of rational decision-makers. The world is complex and noisy; if we want to know anything, we must make simplifying assumptions. People aren’t actually rational, but we can measure where they fail and model imperfect agents. A simpler first pass is to assume agents rationally maximize “utility,” which we assign to all possible outcomes.

As a simple model of the situation, let’s assume that the U.S. and Russia can either Waver or Stand Firm. Let’s assume they make this decision simultaneously and once. Their payoff is defined by the following table:

As an example of how to read this: if both countries waver, then the U.S. will receive -20 utility and Russia will receive 40 utility. If the U.S. stands firm and Russia wavers, then Russia will receive a payoff of 0 and the U.S. will receive a payoff of 30.

All of these values are editable in the Excel file attached below. These are subjective estimates that I admittedly just made up. Here’s a bit of reasoning on the choices I made:

• I’ve assumed the U.S. wants to give Iran a strong example of what happens to countries that use WMDs of any sort. This means the U.S. would rather stand firm than waver, assuming Russia would not escalate the conflict.
• I’ve read speculation claiming Russia has an economic interest in keeping Assad in power, as his government has made it more difficult for Qatar to sell natural gas to Europe.
• To make the solution more interesting, I made these payoffs asymmetrical.
• I’ve made the outcome of both countries going to war large and negative under the assumption that even a proxy war in Syria would be very bad for both sides.

The first thing my workbook does is check whether either strategy weakly dominates the other. A strategy weakly dominates another if it is always at least as good, regardless of how the opponent plays. In this game, each country would rather stand firm if the other wavers and waver if the other stands firm, so neither strategy dominates.

Next, the workbook looks for Nash equilibria. A Nash equilibrium is a situation where neither player would unilaterally change his position. In this case, there are two Nash equilibria at the top right and bottom left. In these boxes, neither player can improve their outcome by changing their strategy. The top right equilibrium is better for the U.S. and the bottom left is better for Russia. But which equilibrium will we likely end up in?

In this case, probably neither. If no strategy weakly dominates, the workbook next solves for the mixed-strategy Nash equilibrium. A mixed strategy is a probability distribution a player chooses across all of his possible strategies. In the mixed Nash equilibrium, each player chooses a distribution that leaves the opponent indifferent between his strategies. Feel free to play with the payoffs and watch the mixed solution change.
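The workbook itself isn’t reproduced here, but the 2×2 indifference algebra is short enough to sketch in Python. The payoff matrices below are hypothetical placeholders: only the (Waver, Waver) and (Firm, Waver) cells come from the example above, and the rest are made up for illustration.

```python
def mixed_equilibrium(A, B):
    """Mixed-strategy Nash equilibrium of a 2x2 game.

    A[i][j]: row player's payoff, B[i][j]: column player's payoff,
    when row plays strategy i and column plays strategy j.
    Returns (p, q): the probability each player puts on strategy 0.
    Only meaningful when an interior mixed equilibrium exists
    (both p and q must land strictly between 0 and 1).
    """
    # Row mixes with probability p so the column player is indifferent:
    # p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1]
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Column mixes with probability q so the row player is indifferent:
    # q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Strategy 0 = Waver, 1 = Stand Firm. Hypothetical payoffs, except the
# (Waver, Waver) and (Firm, Waver) cells, which match the post's example.
US     = [[-20, -40], [30, -100]]
Russia = [[40, 60], [0, -100]]
p, q = mixed_equilibrium(US, Russia)
print(p, q)  # probability the US / Russia wavers
```

Swapping in the workbook’s actual payoff table would reproduce its mixed solution; these placeholders just demonstrate the mechanics.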

In this case, the U.S. will waver from conflict roughly 96 percent of the time and Russia will waver roughly 91 percent of the time. In equilibrium, Russia and the U.S. will fight over Syria less than half a percent of the time.

The final sheet of the workbook analyzes the outcome for both players, including the expected value of the outcome. Based on this simplistic analysis, it looks like the U.S. will not bomb Syria and, even if it does, it is unlikely Russia will escalate the conflict. This is a good thing for the world’s total utility.

This analysis could be extended by having the U.S. and Russia play several rounds in which they choose to Waver or Stand Firm. Any improved utility estimates are welcome.

Here’s a link to the Excel workbook that generated these reports. Of course, the workbook can be used in other contexts, such as pricing decisions in a duopoly, figuring out how to make your roommate do the dishes, analyzing the read option, modeling evolutionary equilibria or solving your game theory homework.

# Dice Rolls

In a Straussian attempt to prove personality is genetic, my little brother texted me the following math question:

So imagine we have a die of n sides. We roll the die until it rolls a 1, and the number of rolls is the output. But after each roll, we take one face away from the die. What does the distribution of outcomes look like?

When I come across problems like this, I like to guess the answer intuitively before solving for the actual one. My brother and I both guessed what the distribution would look like. We both thought there’d be a very low chance of n or 1 being returned, and a higher chance of a number in the middle. I thought the mode of the distribution would be lower than he did.

Perhaps because probability wasn’t beneficial in the ancestral environment, we were both wrong. The distribution is actually perfectly uniform. Elementary math shows that the probabilities of the first few outcomes are exactly the same:

$P(1) = \frac{1}{n}, \quad P(2) = \frac{n-1}{n} \cdot \frac{1}{n-1} = \frac{1}{n}, \quad P(3) = \frac{n-1}{n} \cdot \frac{n-2}{n-1} \cdot \frac{1}{n-2} = \frac{1}{n}$
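The pattern holds for every outcome $k$: each survival factor cancels with the next, and the product telescopes.

$P(k) = \frac{n-1}{n} \cdot \frac{n-2}{n-1} \cdots \frac{n-k+1}{n-k+2} \cdot \frac{1}{n-k+1} = \frac{n-k+1}{n} \cdot \frac{1}{n-k+1} = \frac{1}{n}$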

I tested this empirically and produced the following histogram.

Here’s the R Code that generated that:

## Run parameters
sides <- 100    # faces on the initial die
runs  <- 100000 # number of simulated games

# Row n holds roll n for every run; roll n uses a die with
# (sides - n + 1) faces, uniform on 1..(sides - n + 1).
Simulation <- as.data.frame(matrix(data = 0, nrow = sides, ncol = runs))
for (n in 1:sides) {
  Simulation[n, ] <- floor(runif(runs, 1, sides - n + 2))
}

# TRUE wherever a 1 was rolled; the final roll is a one-faced die,
# so every run is guaranteed to contain a 1.
Simulation <- Simulation == 1

# For each run, record the first roll that came up 1.
Outcomes <- vector(mode = "integer", length = runs)
for (n in 1:runs) {
  Outcomes[n] <- which.max(Simulation[, n])
}

hist(Outcomes, breaks = 20)

# Cruising with Minimal Wallet Bruising

How fast should you drive on long road trips?

One common piece of financial advice is to drive slowly. Per FuelEconomy.gov, “gas mileage usually decreases rapidly at speeds above 50 mph. You can assume that each 5 mph you drive over 50 mph is like paying an additional $0.25 per gallon for gas.” That’s a big deal, and it can lead to some significant savings. However, this piece of advice ignores a giant cost of driving slowly: it takes more time. Therefore, I’ve created back-of-the-envelope cost estimates at various speeds. This is part of my ongoing quest to optimize literally everything.

Without further ado, here’s a graph of the total cost of driving 500 miles at a sustained cruising speed. I’ve assumed leisure time is worth $30 per hour, and I’ve used the same fuel assumptions specified at the above link. With these parameters, the cost of the entire trip is minimized at 95 miles per hour.
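The workbook isn’t reproduced here, but the calculation is easy to sketch. The fuel numbers below (30 mpg base, $3.50 per gallon) are stand-in assumptions rather than the workbook’s exact figures, and the speed penalty uses FuelEconomy.gov’s “$0.25 per gallon for every 5 mph over 50” rule of thumb.

```python
def trip_cost(speed_mph, distance=500.0, time_value=30.0,
              base_mpg=30.0, gas_price=3.50):
    """Total trip cost: value of time spent driving plus fuel.

    Speeds above 50 mph are penalized with the FuelEconomy.gov rule of
    thumb: each 5 mph over 50 acts like an extra $0.25 per gallon.
    base_mpg and gas_price are illustrative assumptions.
    """
    time_cost = distance / speed_mph * time_value
    effective_price = gas_price + 0.25 * max(0.0, (speed_mph - 50) / 5)
    fuel_cost = distance / base_mpg * effective_price
    return time_cost + fuel_cost

# With time at $30/hour, faster is cheaper across the legal range.
for v in (50, 65, 80):
    print(v, round(trip_cost(v), 2))
```

With these placeholder numbers the minimum lands at a different speed than the workbook’s 95 mph, since a flat base mpg stands in for the full fuel-economy curve; the qualitative result, that time cost dominates fuel cost, is the same.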

All of these parameters can be adjusted in the workbook attached below. For instance, changing the value of leisure time to $10 per hour changes the optimal speed to 67 miles per hour. I was somewhat surprised to see the cost of time completely dominate the cost of fuel at the interstate speed limit. Fuel is expensive, but you can always buy more of it.

Amusingly, one can use Excel’s Goal Seek function to back into the implied value of time of the losers you pass in the right-hand lane. Someone driving at 55 miles per hour is implicitly valuing their time at $5.64 an hour. As a persistent leadfoot, I was comforted by this analysis. However, I can already hear my mother begging me to exercise caution. Those who drive quickly are more likely to get into an accident, and that’s worth considering.

Fortunately, I’m not the only person to ask this question. Wikipedia led me to a discussion of the Solomon Curve, a graphical representation of driving risk. Apparently, the log of crash risk is roughly quadratic in the deviation from the median traffic speed. Drive either above or below the median speed, and your risk of crashing increases exponentially.

Table 7 of the original paper presented a fairly detailed risk of injury and death depending on the driving speed and the time of day (day vs night). I used this data, the aforementioned relationship and two regressions to develop risk curves for day-time and night-time driving. This is old data, and someone else could probably improve the results by looking at more contemporary sources. However, it suits my back-of-the-napkin purposes.

To translate the risks of crashes into the expected cost of crashes, I had to assume costs for both injuries and deaths. For injuries, I assumed $50,000. After insurance, that’s probably a little high, but it’s good enough for our purposes. The next assumption I had to make is slightly more complicated. How does one value the threat of losing one’s life? What cost do you assign to daylights, sunsets, midnights and cups of coffee forever squandered in a driver’s overzealous haste to get to the next red light? Sure, in 1999, the EPA estimated a life’s value at $5.5 million while calculating the value of air regulations, but just because I’m an economist doesn’t mean I’m a monster. Who am I to blindly anoint a dollar value to an anonymous other’s very existence? It would go against everything I hold dear to say lives are worth $5.5 million each. First, we must adjust for inflation. In 2013 dollars, one life is worth roughly $7.7 million. Moving on…

Due to the quadratic relationship between speed and log crash risk, the right side of the graph quickly balloons. With the original assumptions, the back of my envelope recommends going 65 miles per hour during the day and 60 miles per hour at night. For my non-American readers, that’s roughly 105 km/h and 97 km/h, respectively. For my pirate readers, that’s 56 and 52 knots, respectively.

The day chart is provided below. Please note that the x-axis has been restricted to 90 miles per hour, unlike the prior graphs.

Of course, all of this assumes you have a radar detector, an average driving ability and an average love of life. The Solomon Curve is anchored on the speed of your fellow travelers, not the speed of an old data set. Take everything into account, and don’t be afraid to disregard FuelEconomy.gov.

Total Cost of Trip

Edit to add: A friend of mine let me know another potential ending. The other possible moral of this story is that everyone should drive at 95 miles per hour. Balloonfoots are imposing a negative externality. How did I miss that?

# Monday Math Problem #1: Expected Births

My colleague Max Lummis became a father over the weekend, so in order to wrap up the recent series of posts on expected value, here’s a birth-themed, three-part math problem for your Monday afternoon.

After posting the solution in a few days, I’ll give a hat-tip to the person who introduced me to the problem. Feel free to reference Wolfram and the rules of expectations.

Suppose there’s a couple who really want a boy and can have unlimited children. They will have kids until they have a boy. The probability of any given child being a boy is 50 percent.

1. What is the expected number of boys?
2. What is the expected number of girls?
3. What is the expected percentage of the children that will be girls?
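No spoilers, but if you want to check your guesses before the solution post, here’s a quick Monte Carlo sketch in Python:

```python
import random

def simulate_family():
    """One family: have children until the first boy arrives."""
    girls = 0
    while random.random() < 0.5:  # this child is a girl; keep trying
        girls += 1
    return 1, girls               # exactly one boy, then they stop

random.seed(0)
trials = 200_000
results = [simulate_family() for _ in range(trials)]
mean_boys = sum(b for b, g in results) / trials
mean_girls = sum(g for b, g in results) / trials
mean_girl_share = sum(g / (b + g) for b, g in results) / trials
print(mean_boys, mean_girls, mean_girl_share)
```

The last line computes the girl share family by family, which is one possible reading of question 3.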

# The Rules of Expectations

As promised, here are some rules for working with expected values, first in words and then in math. All of these can be verified in Excel. I’ll also include a list of things that are not rules for working with expectations. These rules are especially relevant in valuation when you combine multiple uncertain parameters through probabilities. They also come into play when making total rows at the bottom of complex schedules: sometimes the totals won’t foot, and this post will help you understand why.

The expected value of a constant is a constant.

$E[c] = c$

The expected value of the sum of two random variables is equal to the sum of their expected values.

$E[X + Y] = E[X] + E[Y]$

The expected value of a random variable times a constant is equal to the constant times the expected value of the random variable.

$E[cX] = cE[X]$

Please note that this doesn’t hold if c is not a constant. More generally, the expected value of a product is not the product of the expected values (unless the variables are independent).

$E[XY] \neq E[X]E[Y]$
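These rules (and the non-rule) can be checked outside Excel too. A quick Python sketch over a small made-up distribution:

```python
# A small discrete distribution: four outcomes, each with probability 1/4.
outcomes = [1, 2, 3, 4]
probs = [0.25, 0.25, 0.25, 0.25]

def E(f):
    """Expected value of f(X) under the distribution above."""
    return sum(p * f(x) for p, x in zip(probs, outcomes))

c = 10
EX = E(lambda x: x)  # 2.5

# E[c] = c: the expected value of a constant is the constant.
assert E(lambda x: c) == c
# E[X + Y] = E[X] + E[Y]: linearity holds even for dependent variables
# (here Y = X + 1 depends completely on X).
assert E(lambda x: x + (x + 1)) == EX + E(lambda x: x + 1)
# E[cX] = c E[X]
assert E(lambda x: c * x) == c * EX
# But E[XY] != E[X]E[Y] in general: with Y = X, E[X^2] > E[X]^2.
assert E(lambda x: x * x) != EX * EX
print("all four rules check out")
```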

Remember these rules, as you will be tested on them.

# What is Expected Value?

A random variable is a quantity that takes different values with certain probabilities. Per math convention, random variables are written with capital letters, while the values a random variable may take are written with lower case letters. Deeper questions like “what is probability?” may be addressed in a future post.

Suppose you have a random variable $X$ that can take any of the following values:

Working through the first line, there is a $75\%$ chance that $X$ will equal $\$10$. The product of these two figures, $\$7.50$, is known as a “partial expectation.” The sum of all partial expectations, $\$13.50$, is known as the expected value of $X$, or $E[X]$.

For discrete random variables (which can take a finite or countably infinite number of values), this is denoted:

$E[X] = \sum_{i} p_i \cdot x_i$

Where $p_i$ is the probability of scenario $i$ occurring, and $x_i$ is the value of scenario $i$. Examples of discrete random variables include the number of days a stock will increase in a row, the number of deposits in a bank account in a month or the sum of two rolled dice. Note that the first two are theoretically infinite.
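The last example, the sum of two rolled dice, can be computed by enumerating the 36 equally likely outcomes. A quick Python check:

```python
from itertools import product
from collections import Counter

# Distribution of the sum of two fair six-sided dice.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# E[X] = sum over outcomes of (value * probability); each of the 36
# ordered rolls has probability 1/36.
ev = sum(s * c for s, c in counts.items()) / 36
print(ev)  # 7.0
```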

For continuous random variables (which can take an uncountably infinite number of values), expected value is denoted as:

$E[X] = \int\limits_{-\infty}^{\infty} x \cdot f(x) dx$

Where $f(x)$ is the probability density function of $X$ evaluated at $x$. Examples of continuous random variables include the earnings of a company, the dollar value of deposits in a month or the time until someone in a family flips over a Monopoly board. Note that the lattermost is only theoretically infinite.

So that’s a mathematical explanation of expected value. What’s an intuitive explanation?

The expected value of $X$ is the weighted average of $x$ across all possible scenarios where the weights are based on the probability of a scenario occurring. This isn’t strictly accurate for the continuous case (since the probability of any specific individual outcome occurring is zero), but the intuition still applies.

Expected values are important because if you simulate $X$ an infinite number of times, then you will average a return of $E[X]$. This is important for valuation because if you believe a cash flow will be worth $X$, then you will break even in the long run if you pay $E[X]$ for it. If you need to make a profit to compensate yourself for opportunity cost, then you will have to pay less than $E[X]$.

In modern finance, diversification and securitization have made expected value an increasingly important concept since it is far easier to own a large number of assets. However, as far as valuation is concerned, it is worth pointing out that expected value is but the first “moment” that can be used to describe a distribution. Higher moments (such as variance, skewness and kurtosis) play an important role, especially in financial contexts when diversification is limited.

When valuing companies, it is standard practice to develop “best, worst, likely” scenarios and subjectively determine probabilities for each outcome. The company is then valued as the expected value of each of the three discounted cash flows. This is valid as long as the discount rate adequately covers the higher moment concerns I alluded to above.

Next, you can expect to see a post on the mathematical rules of working with expectations.

# All that Glitters is not an Inflation Hedge

As you probably know, gold has undergone a precipitous drop lately. Less than a month ago, it opened at $1,613.75 per Troy ounce. Today it opened at $1,380 per Troy ounce, a 15 percent decrease.

That’s a big decrease, but there’s a lot further it could fall.

There’s no way I can speculate on what the price will be in the future. Buying gold now might end up being a great idea, or it might end up being a terrible idea. However, I would like to address a remarkably common misconception: that gold is an inflation hedge.

Buying gold because you think it will protect you from inflation is absolutely a terrible idea. Here’s the graph from above adjusted for inflation (through the GDP Implicit Price Deflator, last updated October 2012).

If gold were a good inflation hedge, we’d expect the line to be relatively flat, or maybe steadily increasing.

Instead, the price occasionally has large spikes during times that are perceived as crises. Gold is a hedge against panic, and when panic subsides, so does gold.

With that thought in mind, I hope gold continues to fall in price.

# Charitable Discount Rates

Alternatively titled: It’s my charity, and I want it now!

Robert Wiblin at Overcoming Bias recently wrote a short post arguing that when evaluating charitable options, we should apply discounts to costs but not to benefits. This topic makes me uncomfortable, but I’d like to make three quick observations that might be helpful to the charitably-minded. [Edit: It appears as though the article has moved to Giving What We Can.]

First, people have a preference for helping others sooner rather than later. All else held equal, I would rather help a stranger today than a stranger a year from now. Assuming there’s countless children who constantly need saving, I’d rather save a million children now than a million and one different children a year from now. I suspect that’s the way the charitable feel, and I don’t think any amount of writing will change that. I treat preferences such as these as exogenous features to be maximized rather than parameters to be changed.

Robert acknowledges this point. He writes that:

Time preference appears similar to arbitrary prejudices regarding whose interests count that are generally rejected today, such as racism and sexism.

I’m uncomfortable with any reasoning of the form: “All good people admit that Y is bad. X is like Y. Therefore, X is bad.” Consider: having a preference for your family is like racism, and most people like for their family to do well. Does that mean familial affection is bad, or that racism is acceptable? I reject both conclusions and instead reject making moral judgments based on surface similarities.

People will prefer charitable schemes that save lives now instead of later. If you want to produce a cost-benefit analysis useful to people, then you should take that preference into account.

Second, if you believe in the “pay it forward” principle, good deeds will multiply. The returns will not be realized by you, but that’s why it’s called charity instead of business. To make this warm and fuzzy concept more real, imagine you save a child from starving and he grows up to become a fisherman who keeps other children from starving.

Finally, I believe this conclusion would lead people to never give to charity. If a charitable fund can either donate $100 now or invest the money and donate $100*(1+g) in a year, it will always take the latter if it does not discount the benefits of charity. By recursion, the fund will postpone giving forever, a conclusion that is ridiculous to humans.