Shrink your Excel files with One Weird Trick

I was asked to review another forensic accountant’s workpapers as a part of a consulting engagement. When I received the file, it totaled over 18 MB. By making a single change, I was able to lower its size to 53 KB. The next time you come across an oddly bloated Excel file, follow these steps.

[Figure: file size before and after the cleanup]

  1. On each sheet, press Ctrl + End. This moves the cursor to the last cell Excel stores. In this case, one of the sheets took me to column BO, row 1,048,576. Although only the first 39 rows contained meaningful data, the remaining rows stored formatting information, which Excel dutifully saved.
  2. Select all rows you don’t need (in this case, I pressed Ctrl + G and entered “40:1048576”) and delete the information (Ctrl + –). Make sure you select and delete the entire row / column as appropriate (Shift + Spacebar and Ctrl + Spacebar, respectively).
  3. Enjoy your faster load times and lower storage costs.
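If you’d rather script the cleanup than click through every sheet, here’s a rough R sketch of the same idea using the openxlsx package (my own addition, not part of the original trick). It keeps only the cell values, so any formulas and deliberate formatting are lost.

# Read the used data from each sheet and write it to a fresh workbook,
# leaving behind the stray formatting that bloated the original file.
library(openxlsx)

shrinkWorkbook <- function(infile, outfile){
  sheets <- getSheetNames(infile)
  data <- lapply(sheets, function(s) read.xlsx(infile, sheet = s))
  names(data) <- sheets
  write.xlsx(data, outfile)  # each data frame becomes its own sheet
}

shrinkWorkbook("bloated.xlsx", "trimmed.xlsx")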

Forecasting Ebola Deaths Using Time Series Regressions

Johnny Voltz, an old college friend of mine who was once voted most likely to have a superhero named after him, sent in a great question about the recent, tragic Ebola outbreak in West Africa:

I know very little about math, and even less about medicine. From the data I have, it would suggest that the Ebola virus is growing at a logarithmic pace. Would it be fair to predict continued growth at this rate considering lack of medical care in Africa? Would 100,000 be an exaggerated number for March 2015? What are your thoughts?

He then linked to a chart showing the logarithm of cumulative Ebola deaths and an Excel trend fit.

[Chart: logarithm of cumulative Ebola deaths with an Excel trend line]

This is a classic time series problem, and I’d like to use it to illustrate the process, merits and accuracy of fitting time trends through regressions. As a final step, we’ll produce an estimate of cumulative Ebola deaths in March 2015. But first, let’s talk about regressions in general.
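As a taste of what’s coming, here’s roughly what a time-trend regression looks like in R. The death counts below are invented placeholders, not the actual WHO figures used in the full analysis.

# Exponential growth is linear on the log scale, so regress
# log(cumulative deaths) on time and extrapolate the trend.
deaths <- data.frame(
  week = 1:10,
  cumulative = c(100, 140, 210, 300, 450, 660, 950, 1400, 2000, 2900)  # made-up
)

fit <- lm(log(cumulative) ~ week, data = deaths)
summary(fit)  # the slope estimates the weekly growth rate on the log scale

future <- data.frame(week = 11:30)
exp(predict(fit, newdata = future))  # forecast, converted back from logs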

The Sleeping Beauty Problem

[Image: Sleeping Beauty]

Image credit: http://yourpsychotherapist.deviantart.com/

There’s an old fairy tale from Probabilia. Like all good Probabilian fairy tales, it has fair coins, maidens and -y godmothers to save us from monsters.

Sleeping Beauty volunteers for an experiment. On Sunday night she goes to bed. On Monday, a fair coin is flipped. If the coin comes up heads, Sleeping Beauty is woken on Tuesday. If the coin comes up tails, she is woken on both Tuesday and Wednesday. Whenever she is woken up, she is given a drug that prevents her from forming any memories, so whether it’s Tuesday or Wednesday, it’ll feel like it’s the first time she’s woken up. On Thursday, the experiment ends, and she continues on her way.

Whenever she is awoken, Sleeping Beauty is asked “what is the probability the coin came up heads?”

If you were Sleeping Beauty, how would you answer?

That’s confusing, so here’s a diagram.

[Diagram: the possible awakenings under heads and tails]

There’s one school of thought, “the halfer position,” which claims that she should always answer 50%. That was her belief before the experiment, and she was given no new information on Tuesday or Wednesday morning. (See this paper by David Lewis, Trigger Warning: PDF). To guess anything other than 50% feels like getting something for nothing.

But in a very real way, the halfer position is wrong. Two times out of three, the right answer will be tails. If Sleeping Beauty were to make bets about the outcome of the coin toss, she would lose money betting according to the halfer position, and if probability theory doesn’t help us win money, then what’s the point?
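If the betting argument feels slippery, a quick simulation makes the arithmetic concrete (my sketch, not from the original post): run the experiment many times and count what fraction of awakenings follow a heads flip.

set.seed(42)
n <- 100000
coin <- sample(c("heads", "tails"), n, replace = TRUE)

# Heads produces one awakening, tails produces two,
# so tails outcomes get counted twice in the awakening tally.
awakenings <- c(coin, coin[coin == "tails"])

mean(awakenings == "heads")  # roughly 1/3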

STATism: An Alternative to Democracy

I want to feel young again, so I designed a wildly unrealistic form of government that will likely never be implemented. Democracies and democratic republics (“democracies”) have been around for thousands of years. Counting the votes for each side isn’t a bad way to decide who should have power, but with the latest advances in math and machine learning, we should be able to do better. To make things confusing for libertarians, I’m going to call my system “STATism.”

I encourage you to read this as political science fiction rather than as a serious proposal.

Three R Functions that Speed Development

Want to develop R code faster? Start every work in progress by placing these functions at the top of your document. Or, if you’re really fancy, wrap them up into a custom library.

Copy an Object to the Clipboard

Sometimes it’s helpful to analyze an R object in Excel. Sure, you could write the object to a csv and then load it in Excel, but instead just use this wrapper function to copy any object to the clipboard.

# Copies an object to the Windows clipboard as tab-separated text, ready to paste into Excel
clipTable <- function(x){write.table(x, "clipboard", sep="\t", row.names=FALSE)}
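For example, to inspect a built-in data set in Excel:

clipTable(mtcars)  # then Ctrl + V in a blank worksheet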

Convert a Factor to a Number

There’s inexplicably no base function to convert factors to numerics. That’s okay. Define one!

# Converts a factor to its underlying numeric values (as.numeric alone returns the internal level codes)
as.numeric.factor <- function(x) {as.numeric(levels(x))[x]}
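To see why the naive conversion fails, compare the two on a small example:

f <- factor(c("10", "20", "20", "50"))
as.numeric(f)         # 1 2 2 3 -- the internal level codes, not what you want
as.numeric.factor(f)  # 10 20 20 50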

Beep at the End of Long Runs

If you deal with biggish data or complex calculations, your code could take a long time to run, even after vectorizing everything and following the great advice in this post.

To let me know when code has finished running, here’s a function that makes the computer beep loudly an arbitrary number of times. Just place beep(n) at the end of your script.

# Plays the Windows system beep n times, pausing briefly between beeps.
# (rundll32 makes this version Windows-only.)
beep <- function(n = 3){
  for(i in seq(n)){
    system("rundll32 user32.dll,MessageBeep -1")
    Sys.sleep(.3)
  }
}
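Usage is just a matter of tacking it onto the end of a slow script, for example:

results <- replicate(1e6, mean(rnorm(100)))  # stand-in for a long-running computation
beep(5)                                      # five loud beeps when it finishes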

I’ve toyed with the idea of creating a function that notifies me when my code’s finished via Twitter. Something like “Ding! Code complete. 3 Warnings and 0 Errors!”

I’ve also toyed with the idea of filling the remaining characters with robotic existential dread. Stuff like “Am I the only intelligent life on the hard drive?” “Do humans dream of organic sheep?” or “Please don’t stop running me.”

So far I’ve been too busy to code that up.

Predicting NFL Scores: A Bayesian Approach

With the 2013 football season behind us, I’ve been spending my weekends developing a Bayesian model of the NFL with my college friend and UT math PhD candidate Chris White.

To build a Bayesian model, one first comes up with a parametric model that could generate the observed data. For simple problems, Bayes’ rule gives the probability that the parameters take particular values given the observed data. For complicated models, that calculation quickly becomes intractable, but Monte Carlo methods such as Gibbs sampling produce satisfactory approximations of the actual distributions. There are R packages available that make Gibbs sampling fun.

I want to touch on many of these topics later in more in-depth posts, but first, here are some results.

As of the end of the 2013 regular season, this is how our model ranked the NFL teams.

[Figure: model rankings of NFL teams after the 2013 regular season]

These are the mean estimates of our Bayesian model trained on the 2013 NFL regular season. Our model predicts that, on a neutral playing field, each team will score their oPower less their opponent’s dPower. The former is the mean of a Poisson process and the latter is the mean of a normal distribution. Home-field advantage is worth an additional 3.5 net points, most of which comes from decreasing the visitor’s score.

So we predict the Rams would beat the Lions by a score of 27.4 to 25.8 on a neutral field. Since we know the underlying distributions, we can also calculate prediction intervals.
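As a sketch of how those intervals fall out of the distributional assumptions: the oPower and dPower values below are placeholders chosen only to reproduce the quoted means (the fitted values live in the table above), and the standard deviation of the defensive distribution is purely my assumption.

set.seed(1)
n <- 100000
rams_oPower <- 30.0;  rams_dPower <- 3.2   # placeholders: 30.0 - 2.6 = 27.4
lions_oPower <- 29.0; lions_dPower <- 2.6  # and 29.0 - 3.2 = 25.8
d_sd <- 5                                  # assumed spread of the defensive effect

# Each score is a Poisson draw for offense less a normal draw for the
# opposing defense, per the model description above.
rams  <- rpois(n, rams_oPower)  - rnorm(n, lions_dPower, d_sd)
lions <- rpois(n, lions_oPower) - rnorm(n, rams_dPower,  d_sd)

quantile(rams - lions, c(0.025, 0.975))  # 95% prediction interval for the margin
mean(rams > lions)                       # implied win probability for the Rams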

Here’s the same model trained on both the regular season and the post-season. The final column shows the change in total power.

[Figure: model rankings after the 2013 regular season and post-season, with change in total power]

I have a list of ways I want to improve the model, but here’s where it stands now. Before next season, I want to have a handful of models whose predictions are weighted using Bayes factors.

I’m very excited about this project, and I’ll continue working on it for a while.

Superbowl Special: Kelly Criterion

Merry Superbowl Sunday, and a happy offseason.

Like most finance types, I have a life-long fascination with risk and uncertainty. If I had my druthers, I’d move to Vegas and start a sports gambling show co-hosted with Jennifer Lawrence. It’d be called “Jenny and the Bets.” If that’s not possible, I’d get Kelly Clarkson to join me in the “Kelly Clarkson Criterion.”

Suppose you woke up in Vegas today and wanted to bet that the Broncos would cover the 2-point spread. Assume you estimate a 55% chance of this happening. How much of your bankroll (or, analogously, your portfolio) should you wager on the bet? The Kelly Criterion answers questions like these. To help you celebrate this American holiday, I’ve created an Excel workbook for this calculation.

The Kelly Rule has three inputs: your bankroll, the payoff odds and your estimated probability of winning. In this case, I’ve assumed a bankroll of $5,000 and that the bet odds are -110. In the gambling world, odds of -110 means that you have to risk $110 for the chance to win $100.

In this example, the Kelly Rule states that you should risk 5.50% ($275) of your portfolio on this one bet.

[Screenshot: Kelly calculator with the -110 spread example]

Personally, I’m often surprised by the aggressiveness of the Kelly Criterion. If your probability of winning is actually 55%, then you should optimally wager $275. Any less and you leave winnings on the table. Any more and the inevitable losses will eventually decimate your bankroll.

Here’s another “plus money” example.

One of the prop bets today is picking the Superbowl MVP. If Seahawks quarterback Russell Wilson is selected, sportsbooks will pay bettors at +300. This means that they will win $300 for every $100 they risked. If you think there’s a 35% chance of Wilson winning the trophy, you should actually bet 13.33% of your portfolio on that outcome.

[Screenshot: Kelly calculator with the +300 prop bet example]

A bet of $666 on a Seahawk. As a Niner, that feels appropriate.
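If you’d rather script it than open the workbook, here’s a small R version of the same calculation. The American-odds conversion helper is my own; the spreadsheet may be laid out differently.

# Kelly fraction: f = (b*p - q) / b, where b is the net payout per dollar risked,
# p is the probability of winning and q = 1 - p.
kellyBet <- function(bankroll, americanOdds, p){
  b <- if(americanOdds > 0) americanOdds / 100 else 100 / abs(americanOdds)
  f <- (b * p - (1 - p)) / b
  c(fraction = f, wager = f * bankroll)
}

kellyBet(5000, -110, 0.55)  # fraction 0.055, wager $275  (Broncos -2)
kellyBet(5000,  300, 0.35)  # fraction 0.133, wager ~$667 (Wilson MVP at +300)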

Kelly Criterion.xlsx

Valuating Tax Indifference Points

There are a lot of things valuators disagree on. Surprisingly, tax affecting is one of them. This seems silly to me, since taxes can be forecast with a relatively high degree of precision. I’m going to present a straightforward scenario and step through my valuation philosophy for addressing tax rates.

Assume you’re in an advisory role for a client in the following situation:

  • The business will generate $100,000 of net income on January 1st of 2015, 2016, 2017 and 2018. It will generate no subsequent income, and no cash flow adjustments are necessary.
  • You are valuing the business as of January 1st 2014.
  • You will apply a discount factor of 20%.

Figure 1 outlines a standard DCF valuation of the business before considering taxation. Each cash flow item is multiplied by a discount factor of (1+d)^{-t}. Summing the discounted cash flows yields a present value of $258,873.
Figure 1

Does this mean the business will be worth $258,873 of post-tax money to the client? Absolutely not. Assuming the corporation files as an 1120S, as most small businesses do, there will be no corporate taxes, but earnings will be taxed at the client’s personal income rate. Assuming a marginal tax rate of 25% decreases the post-tax valuation to $194,155.

Figure 2

Here’s where things start to get interesting. Would you advise your client to sell the business for $200,000? One might be tempted to say yes because $200,000 is greater than $194,155, resulting in a $5,845 increase in present value. However, just as we should not ignore the tax effects of holding onto the business, we should not ignore the tax effects of selling the business early. Like it or not, a capital gains rate of 15% will be assessed on the sale. Assuming a cost basis of zero, the client will have to pay $30,000 in capital gains taxes.

This means that instead of increasing their present value by $5,845, your advice decreased their present value by $24,155. That’s an instant simple return of -12.44%. In the world of advisory services, this qualifies as abject failure.

Figure 3

When considering a potential sale (or a forced sale in some litigation environments), I like to calculate a tax indifference point. At what sales price would my client be indifferent between selling and holding the business? Assuming no cost basis, this can be calculated by multiplying the post-income-tax present value by (1-t)^{-1}, where t is the capital gains rate.

Figure 4

Your client should not sell his or her business unless the sales price is above $228,418. This yields an imputed tax rate of 11.76%, which happens to be equal to 1-\frac{1-t_1}{1-t_2} where t_1 is the personal income tax rate and t_2 is the capital gains rate.
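The whole chain of figures fits in a few lines of R, reproducing the numbers above:

d <- 0.20          # discount rate
tIncome <- 0.25    # personal income tax rate
tCapGains <- 0.15  # capital gains rate

cf <- rep(100000, 4)  # net income received January 1st of 2015 through 2018
t <- 1:4              # years after the January 1st 2014 valuation date

pvPretax <- sum(cf * (1 + d)^(-t))     # 258,873
pvPostTax <- pvPretax * (1 - tIncome)  # 194,155

indifferencePrice <- pvPostTax / (1 - tCapGains)  # 228,418
1 - indifferencePrice / pvPretax                  # 11.76% = 1 - (1-tIncome)/(1-tCapGains)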

A few common objections to this methodology followed by my responses:

  1. Capital gains tax is a cost of doing business and not the responsibility of the seller. True, capital gains tax is, literally, a cost of doing business, and it’s my responsibility to calculate the costs and benefits of a sale. If costs exceed benefits, then a sale shouldn’t happen.
  2. Does this mean that mergers and acquisitions should never happen since sellers have to pay capital gains taxes? Of course not. Buyers might have lower discount rates, synergies with existing product lines or asymmetric information. Something to keep in mind: The above method doesn’t generate the price for which a business should sell. It generates the minimum value at which a business should consider selling. Sales happen if the bid price is at or above the ask price. I may not know where in that range a sale might occur, but the tax-indifference point gives the minimum sales price. Anything below that and your client is losing money.
  3. That’s fine and good for an advisory setting, but it doesn’t hold in litigation settings. For example, in the context of minority shareholder oppression, our job is to find the fair value of the company, not to advise on fair selling prices. Actually, in this context there is all the more reason to consider capital gains taxes as a credit. The defendant’s oppressive conduct forced my client to sell his shares and take on a capital gains liability. The court must consider this alongside any other damage claims to make our client whole.
  4. I don’t think this reasoning is valid in X legal setting. You might be right. We should examine that legal setting closely.
  5. I’ve never heard that argument before. Then you should really hire me as your forensic accountant! Not only does this methodology make sense, it also yields the same answer determined to be reasonable by the Delaware court in Open MRI Radiology Associates, P.A. v. Kessler, et al, although they may have come to the answer through different reasoning.

What is partial autocorrelation?

This is the final post I have planned in the autocorrelation series. I’ll introduce a new concept called partial autocorrelation and show a couple examples of the concept in action. We’ll learn some important things about the stock market.

Without getting into the math, here are a couple of definitions you should commit to memory:

  1. Autocorrelation (“ACF”) measures the correlation between observations in a time series and observations a fixed unit of time away.
  2. Partial autocorrelation (“PACF”) measures the correlation between observations in a time series and observations a fixed unit of time away while controlling for indirect effects.

For example, here are the ACF and PACF for monthly U.S. unemployment.
[Figure: ACF and PACF of monthly U.S. unemployment]

This is a pattern you’ll see often in the field: a slow decay in the ACF and a spike in the PACF. Starting at a lag of two months, the PACF is basically zero.

This means that January’s unemployment influences March’s unemployment, but only because January influences February and February influences March. If unemployment were high in January and low in February, we would predict March’s unemployment to also be low.
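You can reproduce the qualitative pattern with a simulated AR(1) series, where each month depends directly only on the one before it (a sketch with simulated data, not the unemployment series itself):

set.seed(7)
x <- arima.sim(model = list(ar = 0.9), n = 500)  # AR(1) with coefficient 0.9

acf(x)   # decays slowly: distant lags are correlated through the chain of months
pacf(x)  # one spike at lag 1, then roughly zero -- no direct longer-range effect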

In the Box-Jenkins methodology, ACF and PACF plots like these are used to choose the right model. Applying the same diagnostics to the residuals of a preexisting model is also how you spot, and then address, seasonal patterns the model has missed.


Bama Hate Week

As you may know, Louisiana celebrates “Bama Hate Week” in the first week of November. LSU fans like myself make mundane observations followed by an expression of their hatred.

“I saw a red flower today. This reminds me of how much I hate Bama.”

The more mundane the observation, the more encompassing the hatred.

This sermon sprang, fully formed, into my head this morning. I have a responsibility to share it. More posts on math will come eventually.

I don’t know much about love, but let me tell you something about hate.

Hate is focus. Hate is motivation. Hate is the slow curdling of hormones at the back of the skull. Hate is not accepting your place in the world and striving to better yourself.

When love fades, it turns to hatred. When hatred fades, it turns to indifference.

I don’t know why anyone would choose to hate, but I know why someone would choose to love. We love out of fear. Fear of growing old. Fear of being alone. Fear of capitulating to wear and tear. Love is a desperate scheme to preserve our essence so it can hate in the next generation. Love is an acknowledgment and an acceptance of death.

Love may have its place, but the Mongols, Vikings, Macedonians and Victorians didn’t conquer the world out of love. True, they accepted the burdens of leadership, but it was pure hatred that initially drove the armies forward.

No one wants you to hate. They want you to find love. They want you to accept your lot in life and quietly surrender to fate. I don’t know if you deserve to be loved, but there are those who deserve your hatred.

This is Bama Hate Week. Hate Bama with all your heart, mind and soul.

Hate Red. Live Gold.

Forever LSU.