Matrix Multiplication in Excel — Markov Chains

This post is part of the “Matrix Multiplication in Excel” series. It’s composed of a math introduction, a silly interlude and an interactive tutorial (you are here). By the end of the series, you’ll learn how to perform Markov Chain calculations, which are used in some damage calculations.

Now that we know some of the basics, I’m going to introduce a toy problem where one would use matrix multiplication, step through how to calculate it in Excel and then give an example used in forensic economics.

Click here to download Matrix Multiplication.xlsx.

Scenario 1: Ping Pong Problem

Suppose two ping pong players (labelled left and right) are tied near the end of a game. In order to win, one of them has to have a two point lead. Suppose that, on any given volley, the left player has a 55% chance of winning.

The game can be thought of as having five states:

  1. (2,0) – The left player wins.
  2. (1,0) – The left player is winning by one.
  3. (0,0) – The game is tied.
  4. (0,1) – The right player is winning by one.
  5. (0,2) – The right player wins.

Graphically, the transitions between these states can be seen as follows.

[Figure 1: transition diagram between the five game states]

The left player increases his score with the probability p(LW), in this case 55%. How do we calculate the probability that the left player wins?
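Before getting to the Excel version, here is a minimal sketch of the same calculation in Python, assuming the 55% figure above and the five states in the order listed; repeatedly multiplying the transition matrix shows where the game eventually ends up.

import numpy as np

# Transition matrix for the states (2,0), (1,0), (0,0), (0,1), (0,2).
# Rows are "from" states, columns are "to" states; (2,0) and (0,2) are absorbing.
p = 0.55  # probability the left player wins any given volley
T = np.array([
    [1,     0,     0,     0,     0    ],  # (2,0): left has already won
    [p,     0,     1 - p, 0,     0    ],  # (1,0): win -> (2,0), lose -> (0,0)
    [0,     p,     0,     1 - p, 0    ],  # (0,0): win -> (1,0), lose -> (0,1)
    [0,     0,     p,     0,     1 - p],  # (0,1): win -> (0,0), lose -> (0,2)
    [0,     0,     0,     0,     1    ],  # (0,2): right has already won
])

start = np.array([0, 0, 1, 0, 0])            # the game begins tied at (0,0)
after_many_volleys = start @ np.linalg.matrix_power(T, 200)
print(after_many_volleys)                     # roughly [0.599, 0, 0, 0, 0.401]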

Forecasting Ebola Deaths Using Time Series Regressions

Johnny Voltz, an old college friend of mine who was once voted most likely to have a superhero named after him, sent in a great question about the recent, tragic Ebola outbreak in West Africa:

I know very little about math, and even less about medicine. From the data I have, it would suggest that the Ebola virus is growing at a logarithmic pace. Would it be fair to predict continued growth at this rate considering lack of medical care in Africa? Would 100,000 be an exaggerated number for March 2015? What are your thoughts?

He then linked to a chart which showed the logarithm of Ebola Deaths and an Excel fit.


This is a classic time series problem, and I’d like to use it to illustrate the process, merits and accuracy of fitting time trends through regressions. As a final step, we’ll produce an estimate of cumulative Ebola deaths in March 2015. But first, let’s talk about regressions in general.
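As a rough sketch of the approach (with made-up placeholder counts, not the actual WHO figures), one can regress the log of cumulative deaths on time and extrapolate the fitted exponential trend:

import numpy as np

# Hypothetical cumulative death counts at 30-day intervals (placeholder data,
# NOT the actual figures analyzed in the post).
days   = np.array([0, 30, 60, 90, 120, 150])
deaths = np.array([50, 110, 260, 600, 1400, 3200])

# Fit a linear trend to log(deaths): log(deaths) ~ a + b * t.
b, a = np.polyfit(days, np.log(deaths), 1)

# Extrapolate the exponential trend out to a future date.
t_future = 330   # roughly six months past the last placeholder observation
forecast = np.exp(a + b * t_future)
print(f"Daily growth rate: {np.expm1(b):.2%}, forecast: {forecast:,.0f} deaths")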

Grand Theft Autocorrelation

In my last post, I discussed the concept of correlation. In my free time since then, I’ve been playing a lot of Grand Theft Auto V. It’s time to merge these noble pursuits.

As you may know, GTA V includes an online stock market that allows players to invest their ill-gotten gains in fictitious companies. Naturally, a Reddit user has created a regularly updated database of stock market prices. I play on an Xbox 360, so I’ve analyzed the Xbox prices. I’ve developed a strategy that will earn money in the long run, but first let’s do some learning.

As we previously discussed, correlation is a measure of how well the highs of one series line up with the highs of another series. Autocorrelation is a measure of how well the highs of a series line up with the highs of the same series one observation earlier.
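Here is a minimal sketch of a lag-1 autocorrelation calculation, using made-up prices rather than the actual Reddit data:

import numpy as np

# Placeholder price series (not the actual GTA V stock data).
prices = np.array([100.0, 102.5, 101.0, 104.0, 107.5, 106.0, 109.0, 111.5])

# Lag-1 autocorrelation: correlate each observation with the one before it.
lag1_autocorr = np.corrcoef(prices[1:], prices[:-1])[0, 1]
print(lag1_autocorr)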

What is Correlation?

Besides knowing that it isn’t causation, many pedantically smart laymen don’t know what correlation actually is. I’m here to fix that with a mathematical definition, an intuitive explanation and a brief philosophical comment.

Intuitive Explanation

Correlation is a quantitative measure of how well the highs line up with the highs and the lows line up with the lows between two arrays. Correlation will always be between -1 and 1, inclusive. A correlation of 1 indicates a perfect, positive linear relationship between two variables. A correlation of zero indicates no linear relationship. A correlation of negative one indicates a perfect, negative linear relationship.

Mathematical Definition

This value can be computed in Excel through the CORREL() function, but stepping through the formula helps build understanding. Feel free to skim over this part to the applications and philosophy sections. Mathematically, correlation can be computed as follows:
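r_{x,y} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

where \bar{x} and \bar{y} are the means of the two arrays. This is the standard Pearson correlation coefficient, which is the quantity CORREL() returns.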

What is Expected Value?

A random variable is a quantity that takes different values with certain probabilities. Per math convention, random variables are written with capital letters, while the specific values they may take are written with lower-case letters. Deeper questions like “what is probability” may be addressed in a future post.

Suppose you have a random variable X that can take any of the following values:

[Table: possible values of X, their probabilities and the resulting partial expectations]

Working through the first line, there is a 75% chance that X will equal $10. The product of these two figures, $7.50, is known as a “partial expectation.” The sum of all partial expectations, $13.50, is known as the expected value of X, or E[X].

For discrete random variables (which can take a finite or countably infinite number of values), this is denoted as:

E[X] = \sum_{i} p_i \cdot x_i

Where p_i is the probability of scenario i occurring, and x_i is the value of scenario i. Examples of discrete random variables include the number of days a stock will increase in a row, the number of deposits in a bank account in a month or the sum of two rolled dice. Note that the first two are theoretically infinite.
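As a quick illustration of the formula, here is the two-dice example computed exactly in Python (the distribution is enumerated, not simulated):

from fractions import Fraction

# Probability distribution of the sum of two fair dice.
probs = {}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        s = d1 + d2
        probs[s] = probs.get(s, 0) + Fraction(1, 36)

# Expected value: sum of (probability * value) over every possible outcome.
expected = sum(p * x for x, p in probs.items())
print(expected)   # 7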

For continuous random variables (which can take a continuum of values), expected value is denoted as:

E[X] = \int\limits_{-\infty}^{\infty} x \cdot f(x) dx

Where f(x) is the probability density function of X. Examples of continuous random variables include the earnings of a company, the dollar value of deposits in a month or the time until someone in a family flips over a Monopoly board. Note that the lattermost is only theoretically infinite.
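For a concrete (if artificial) example, suppose X is uniformly distributed on [0, 10], so that f(x) = 1/10 on that interval and zero elsewhere:

E[X] = \int_{0}^{10} x \cdot \tfrac{1}{10}\, dx = \left[ \tfrac{x^2}{20} \right]_0^{10} = 5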

So that’s a mathematical explanation of expected value. What’s an intuitive explanation?

The expected value of X is the weighted average of x across all possible scenarios where the weights are based on the probability of a scenario occurring. This isn’t strictly accurate for the continuous case (since the probability of any specific individual outcome occurring is zero), but the intuition still applies.

Expected values are important because if you simulate X an infinite number of times, then you will average a return of E[X]. This is important for valuation because if you believe a cash flow will be worth X, then you will break even in the long run if you pay E[X] for it. If you need to make a profit to compensate yourself for opportunity cost, then you will have to pay less than E[X].
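A small simulation (using the sum of two dice again) makes the “average over many draws” interpretation concrete:

import numpy as np

# Simulate one million throws of two dice; the sample mean approaches E[X] = 7.
rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=(1_000_000, 2)).sum(axis=1)
print(rolls.mean())   # close to 7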

In modern finance, diversification and securitization have made expected value an increasingly important concept since it is far easier to own a large number of assets. However, as far as valuation is concerned, it is worth pointing out that expected value is but the first “moment” that can be used to describe a distribution. Higher moments (such as variance, skewness and kurtosis) play an important role, especially in financial contexts when diversification is limited.

When valuing companies, it is standard practice to develop “best, worst, likely” scenarios and subjectively determine probabilities for each outcome. The company is then valued as the expected value of each of the three discounted cash flows. This is valid as long as the discount rate adequately covers the higher moment concerns I alluded to above.
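For instance, with purely illustrative probabilities of 25, 50 and 25 percent attached to hypothetical discounted worst, likely and best scenario values of $0.8M, $1.2M and $2.0M, the indicated value would be:

E[V] = 0.25 \cdot \$0.8\text{M} + 0.50 \cdot \$1.2\text{M} + 0.25 \cdot \$2.0\text{M} = \$1.3\text{M}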

Next, you can expect to see a post on the mathematical rules of working with expectations.

Are Equities Riskier than Bonds?

Because individual companies are inherently riskier than entire nations, investors will apply a greater discount to future cash flows. This means that, if the business plan works out, investors will achieve a higher return on their invested capital. That’s why it is common to assume an equity risk premium when building a discount rate for companies. Valuators usually start with yields on government bonds and then add a risk premium for equity securities. A simple model suggested in the 2012 Ibbotson Valuation Yearbook uses a 2.48 percent riskless discount rate (the 20-year U.S. Treasury coupon bond yield) and a 6.62 percent equity premium (large-company stock total returns minus long-term government bond income returns), with additional premiums that can be added for size, risk, industry, et cetera.
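Using those Ibbotson figures, a bare-bones build-up (before any size, risk or industry premiums) would be:

d = 2.48\% + 6.62\% = 9.10\%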

An article in February’s print copy of The Economist challenged this standard formulation. It is entitled Beware of the bias: Investors may have developed too rosy a view of equity returns. I’m open to the idea that investors are too rosy about equity returns, but I find almost every example cited spurious. I’ll quote liberally from the article and proceed point by point. Emphasis added throughout.

Arithmetic vs. Logarithmic Rates of Return

Say you hold a stock as it increases from $100 to $105. Usually, this is reported as a return of 5%. The formula for this return (which we’ll call arithmetic) is as follows:

r_\alpha = \frac{FV}{PV} - 1 = \frac{FV - PV}{PV} = \frac{105}{100} - 1 = 5 \%

This simple definition of return serves us well for most uses, but there are some quirks that make arithmetic returns difficult to use in some academic and valuation settings. For example, compounded arithmetic returns are not symmetric: if a position appreciates 15% and then depreciates 15%, the total change is -2.25%.

FV = PV(1 + r_\alpha)

FV = PV(1+.15)(1 - .15) = PV(0.9775)

\frac{FV}{PV} - 1 = 0.9775 - 1 = -2.25 \%

To avoid this quirk, practitioners sometimes use log returns, which are defined as follows:
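r_{\ln} = \ln\left(\frac{FV}{PV}\right) = \ln\left(\frac{105}{100}\right) \approx 4.88\%

Because log returns simply add across periods, a 15% log gain followed by a 15% log loss nets to exactly zero, avoiding the asymmetry shown above.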

Unit Roots and Economic Recovery

This may seem like an esoteric subject, but the concept of “unit roots” has implications for practitioners who debate time series forecasts.

Consider the following two models of economic growth:

y_t = y_0 e^{\theta t}\varepsilon_t

y_t = y_{t-1} e^{\theta}\varepsilon_t

Where y_t is GDP at time t, \theta is a long-term exponential growth rate, and \varepsilon_t is a stochastic driver with mean 1.

The first model multiplies an initial value by an exponential growth trend and a random variable. For every y_t, the \varepsilon_t term pushes y_t above or below the long-term trend. However, these deviations are temporary and have no impact on y_{t+1}.

In the second model, the most recent observation is multiplied by the growth rate and a new error term. Like in the previous model, y_t is affected by growth and random shocks. However, rather than \varepsilon_t generating a temporary deviation from a trend, \varepsilon_t generates a deviation from the trended prior observation. Because y_{t-1} is dependent on \varepsilon_{t-1}, y_t depends on all prior shocks. In this model, shocks have a permanent impact on future y.

The first model is said to be trend-stationary because, after adjusting for the trend, the expected value and variance are the same for all t. The second model is harder to forecast: it resembles a random walk with an upward drift.
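A quick simulation, with arbitrary parameter choices, makes the difference visible: the trend-stationary series keeps hugging its trend line, while the unit-root series wanders away from it.

import numpy as np

rng = np.random.default_rng(42)
n, theta, y0 = 200, 0.02, 100.0
eps = np.exp(rng.normal(0, 0.03, size=n))   # multiplicative shocks with mean ~1

t = np.arange(1, n + 1)
trend_stationary = y0 * np.exp(theta * t) * eps         # each shock dies out immediately
unit_root = y0 * np.exp(theta * t) * np.cumprod(eps)    # shocks accumulate forever

# Plotting both against the deterministic trend y0 * exp(theta * t) shows the contrast.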

A well-known graph on Wikipedia shows what a trend-stationary and a unit-root time series might look like after a negative shock. The blue line is trend-stationary and will return to the long-term trend. The green line has a unit root and will continue to drift upward at the previous growth rate.

Surprisingly, there isn’t a consensus among economists over whether economic growth has a unit root. For example, in March 2009, Paul Krugman and Greg Mankiw had a public disagreement over whether GDP has a unit root. Mankiw believed the damage caused by the financial crisis would be persistent. Krugman entitled his response “roots of evil.” Although Krugman never took Mankiw’s subsequent offer to wager, Mankiw likely would have won the bet. The sustained recession looks like the predictions of the unit-root model playing out.

In the less-glamorous world of litigation support and business valuation, I’ve seen a graph of local Corporate Income Tax Revenue used to argue that area businesses were due for a high growth trend because, but for the business interruption, the local economy would soon be “reverting to the trend.”

This is a valid point if, as the expert was implicitly assuming, the time series is trend-stationary. However, if Corporate Income Tax Revenue has a unit root, then there is no long-term trend to which the economy can revert.

The most common way to test whether a time series is trend-stationary is to use a Dickey-Fuller test. In my opinion, applying this test should be one of the first things an aspiring analyst does before working on a time series. Specifying the wrong model when a unit-root relationship is present can lead to spurious results and bad forecasts.
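With Python and statsmodels at hand, a minimal version of the test looks like this (run here on a simulated random walk standing in for a real series):

import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulate a random walk with drift (a unit-root process) as a stand-in
# for a real series such as the tax revenue data described above.
rng = np.random.default_rng(0)
series = np.cumsum(0.02 + rng.normal(0, 0.03, size=200))

# Augmented Dickey-Fuller test with a constant and linear trend ("ct").
# The null hypothesis is that the series has a unit root.
adf_stat, p_value, *_ = adfuller(series, regression="ct")
print(f"ADF statistic: {adf_stat:.2f}, p-value: {p_value:.3f}")
# A large p-value means the unit root cannot be rejected, so arguments that
# the series will "revert to trend" deserve extra scrutiny.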

What is a discount rate?

Receiving $200 is strictly better than receiving $100. Receiving $100 now is strictly better than receiving $100 in a year. But it is not clear whether it is better to receive $90 now or $100 in a year.

Similarly, it is always better to receive $100 with absolute certainty than to receive $100 80 percent of the time. However, some would rather receive $79 with absolute certainty than $100 with 80 percent certainty.

These are problems that must be solved in order to perform discounted cash flow analysis (or to make simple investment decisions). A company’s value will vary not only with the magnitude of future cash flows, but also with when those cash flows occur and with what level of risk.

For valuation practitioners, both problems are usually solved through the discount rate or required rate of return. The discount rate is the rate of return for which an investor is indifferent between investing and not investing. For example, if one can achieve a 4 percent return without risk, then one would be indifferent between receiving $100 now and $104 a year from now since $100 could be invested at the risk free rate. If an investment is riskier, then investors will require a higher rate of return to set aside money.

Applying a discount rate to a future cash flow yields a present value via the following formula:
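PV = \frac{NCF}{(1 + d)^t}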

Where:

  • PV is the present value. This is the current value of the future payment.
  • NCF is the net cash flow. This is the amount of money the investor will receive after netting all expenses. Higher cash flow yields a higher value.
  • d is the discount rate or required rate of return, as discussed above. If investors demand a higher rate of return, then they will pay less for a future cash flow.
  • t is the time until the payment. Investors will pay less for payments further into the future.

According to the discounted cash flow methodology, the value of a company is equal to the present value of all future payments to investors.
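In symbols, the single-period formula above is summed over every future period:

V = \sum_{t=1}^{\infty} \frac{NCF_t}{(1 + d)^t}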

Valuation Methods

All valuation methodologies can be classified into one of three approaches:

  1. Income Approach
  2. Asset Approach
  3. Market Approach

The Income Approach values a business by predicting cash flows to the owners. After making adjustments for fringe benefits, non-recurring events and future taxes paid (tax affecting), the analyst must apply a discount rate that adjusts future dollars into current dollars. If one can receive a risk-less 2 percent annual return, then one is theoretically indifferent between receiving $100 today and $102 a year from now. For risky investments, such as equity ownership, this discount rate will be far higher. In real terms, this means investors will demand a higher return in order to invest money into the business. A promise of $100 from a treasury bond is worth more than a promise of $100 from a business. The following factors might make a company riskier than the average equity position:

  • High financial leverage
  • Small size
  • Participation in a risky industry

Asset-based approaches to business valuation value a business as the sum of its parts, less outstanding liabilities. Theoretically, a buyer would not be willing to pay more for a business than the cost of acquiring its individual parts. For asset-intensive and distressed businesses, this may be a reasonable approach. However, many modern businesses derive much of their value from intangible assets that are hard to value and/or impossible to buy, such as:

  • Specialized employees.
  • Goodwill with customers.
  • Reputation and brand value.

For this and other reasons, many companies trade or sell at a multiple of their asset-based value.

Market approaches to business valuation attempt to use the value of comparable companies as a guideline to the subject company’s value. Comparable market valuations are commonly obtained through two methods:

  1. Market Capitalization — Market capitalization is the total market value of all outstanding publicly traded shares.
  2. Similar transactions — Some private data sources aggregate voluntarily-reported selling prices of privately held businesses.

Once similar companies have been identified, the analyst can scale their valuations to the subject company through any of numerous multiples, including:

  • Price to Sales Ratio — The ratio of the business’s value to gross revenue (total sales before any expenses are deducted).
  • Price to Book Ratio — The ratio of the business’s value to its book value, i.e., the value indicated by the asset-based approach.
  • Price to EBITDA Ratio — The ratio of the business’s value to earnings before interest, taxes, depreciation and amortization. EBITDA can be computed by adding interest, taxes, depreciation and amortization back to net income.
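For example, if comparable companies sell at roughly 1.5 times revenue (a purely illustrative multiple), a subject company with $2 million in sales would be indicated at about:

1.5 \times \$2\text{M} = \$3\text{M}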

Numerous other approaches exist and are occasionally applied both to small companies and large, publicly traded companies.