Monte Carlo Simulation for your Portfolio PL

‘Once you have a system, the biggest obstacle is trusting it.’
Tom Wills

‘What is the last thing you do before you climb on a ladder? You shake it. And that
is Monte Carlo simulation.’
Sam Savage, Stanford University

 

Introduction

In my early days of looking at trading strategies, getting to the equity curve felt like the final piece of the puzzle: the piece that would provide enough perspective to decide whether or not to proceed to production. I was always more comfortable with the scientific approach than my gut feel; perhaps this was a function of my limited time in the market. After collaborating with Brian Peterson and studying the many great articles written by him and so many others in the community, it became clear the equity curve was only the first piece of the puzzle.

Questions one should ask subsequent to generating a complete backtest are:

  • Is the result realistic?
  • Could we have overfit?
  • What range of values can we expect out of sample?
  • How much confidence can we have in the model?

A backtest generally represents limited information. It is merely one run of a particular strategy whose out-of-sample performance could be ambiguous. Sampling from this limited information, however, may allow us to get closer to the true process, thereby adding statistical significance to any inferences made from the backtest. mcsim() gives us the potential to do just that, simulating portfolio PL from either post-trade or backtested transactions, as well as providing multiple options for tweaking the sampling procedure according to strategy characteristics.

Whilst mcsim() has been alive and well in blotter for a while now (blotter is the open source transaction infrastructure package in R for defining instruments, accounts and portfolios for trading systems and simulations), I have yet to write a post about it. My prior posts here and here were a prelude to the superior functionality now available with mcsim().

 

mcsim()

Included in the documentation for the mcsim() function is an example of its use with the longtrend demo. I will refer to the mcsim() output of that strategy for this post. mcsim() comes with 10 parameters, which I describe next.

  • Portfolio, Account

These are objects stored in your blotter environment. If you are using post-trade cash PL in your simulation, these parameters can be left at their default NULL values. If using the longtrend demo, for example, you can use “longtrend” for both.

The Account object is used solely for retrieving your initial equity amount specified at the start of your strategy. The Portfolio object is used for retrieving dailyEqPL or dailyTxnPL (both existing blotter functions) depending on the ‘use’ parameter, which we cover a few parameters down.

  • n

‘n’ is used to specify the number of simulations you would like to run on your backtest. The longtrend help file uses 1,000 simulations, which takes ~15 seconds on my Intel i5-4460. Disclaimer: no optimisation or refactoring has been carried out on mcsim(); this is a TODO item. In my experience 1,000 simulations almost certainly achieve an asymptotically normal distribution with an acceptable level of statistical significance. For investigating the general shape, however, 100 simulations should suffice.
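If you want to gauge how long a given number of simulations takes on your own hardware, wrapping the help-file call in system.time() is a quick check. This is just a sketch and assumes you have already run the longtrend demo, so the portfolio and account objects exist in your blotter environment:

# time the 1,000-simulation call from the help file (longtrend demo must already have been run)
system.time(
  mcsim("longtrend", "longtrend", n=1000, replacement=FALSE, l=1, gap=10, CI=0.95)
)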

  • replacement

Any sampling procedure can generally be run with or without replacement. Sampling without replacement means results will have the same mean and final PL but different paths. Sampling with replacement results in a wider distribution of the various paths the strategy could have taken, as well as a final PL and other sample statistics (maxDD, stddev, quasi-Sharpe) that vary across replicates. This differentiation will be clear when running mcsim() for a given strategy with and without replacement, and calling the plot and hist functions on the mcsim objects the results were assigned to. See the examples in the help docs, which we cover a little further down.

  • use

The ‘use’ parameter lets you specify whether to use daily-marked portfolio PL (use=’equity’), transaction PL (use=’txns’; think intraday strategies) or cash PL (use=’cash’) when supplying an external xts object of post-trade returns. The method used in Chapter 4 of Tomasini & Jaekle (2009) resembles sampling with use=’txns’.
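As a rough illustration (and assuming the longtrend demo has been run; check ?mcsim for the exact defaults in your version of blotter), the first two options look like this:

# daily-marked portfolio PL, retrieved via dailyEqPL()
eqsim = mcsim("longtrend", "longtrend", n=100, use="equity", replacement=TRUE, l=1, gap=10, CI=0.95)

# per-transaction PL, retrieved via dailyTxnPL() - closer in spirit to Tomasini & Jaekle (2009)
txsim = mcsim("longtrend", "longtrend", n=100, use="txns", replacement=TRUE, l=1, gap=10, CI=0.95)

The use=’cash’ case is covered under the cashPL parameter below.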

  • l

The ‘l’ parameter (lower case L) stands for ‘length’ and allows you to sample blocks of PL observations. If your strategy exhibits some form of autocorrelation, it may make sense to incorporate a block length in your sampling procedure based on this observed autocorrelation. Alternatively, some inverse multiple of your average holding period could also justify a value for the block length ‘l’.
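There is no single correct way to choose ‘l’. One rough heuristic of my own (not part of mcsim()) is to take the largest lag at which the daily PL still shows significant autocorrelation. In the sketch below, ‘daily.pl’ is a placeholder for a numeric vector of daily PL observations, for example extracted from dailyEqPL():

# sample autocorrelations of the daily PL series
a = acf(daily.pl, plot=FALSE)
# approximate 95% significance bound for the autocorrelations
sig = qnorm(0.975) / sqrt(length(daily.pl))
# largest lag still above the bound, falling back to l=1 if none are significant
l = max(c(1, which(abs(a$acf[-1]) > sig)))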

  • varblock

‘varblock’ is a boolean which allows you to sample with variably sized block lengths. If replacement=TRUE, the sampling is based on the tsboot() function in the boot package, which you will note is a dependency for running mcsim(). The tsboot() function was the natural solution for sampling time series, but it only allows sampling with replacement, which should give you a sense of the general statistical preference. When ‘varblock’ is set to TRUE together with ‘replacement’, the resulting block lengths are sampled from a geometric distribution with mean ‘l’. If replacement=FALSE, the distribution of block lengths is uniformly random around ‘l’.
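A hedged example combining ‘varblock’ with the other parameters from the help-file call, here with a mean block length of 5 monthly observations purely for illustration:

# variable block lengths (geometric with mean l=5), sampled with replacement via boot::tsboot()
vbsim = mcsim("longtrend", "longtrend", n=1000, replacement=TRUE, l=5, varblock=TRUE, gap=10, CI=0.95)
plot(vbsim, normalize=FALSE)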

  • gap

For strategies which require leading market data before indicators, signals and subsequent rules can be computed, the ‘gap’ parameter specifies how many periods after the strategy initialization date to start the simulations. For longtrend we use a gap of 10, since the strategy uses an SMA lookback of 10 (monthly) observations.

  • CI

CI, or confidence interval, specifies the confidence level displayed by the hist() method, which charts a range of sample statistics for the strategy. The default is 0.95 (95%).

  • cashPL

This parameter is used when running mcsim() on post-trade cash PL together with use=’cash’, and refers to a separate xts object created by the user.
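Putting that together, a minimal sketch for post-trade PL might look as follows, where ‘my.pl.xts’ is a placeholder for an xts object of daily cash PL sitting in your environment (treat the exact form in which cashPL expects the object as an assumption and check the help file for your version):

# Portfolio and Account are left at their NULL defaults when simulating external cash PL
cashsim = mcsim(n=1000, replacement=TRUE, l=1, CI=0.95, use="cash", cashPL=my.pl.xts)
plot(cashsim, normalize=FALSE)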

 

Output

There are currently four S3 methods related to mcsim(), all of which return some portion of the ‘mcsim’ object as either a plot, a histogram or a table (summary and quantile). For each method you can choose to set normalize=TRUE or FALSE. This simply defines the return space your simulations operate in: either cash or percent. Depending on the strategy duration and your reinvestment assumptions there could be pros and cons to either approach, although the distributions should be fairly similar in any event. In addition to ‘normalize’ there is also a ‘methods’ parameter for hist() which lets you specify which sample statistics to return as histogram plots. There are currently five: ‘mean’, ‘median’, ‘stddev’, ‘maxDD’ and ‘sharpe’. The default is to return all of them as five separate histogram plots. This may be inappropriate when setting replacement=FALSE, since all sample statistics barring maxDD will then be identical (in the case of cash returns).

[Note: There are numerous benefits to using S3 methods, including that calls to plot() and hist() dispatch the appropriately defined methods for mcsim(). In fact a total of 20 slots are returned with a call to mcsim() and can be analysed directly in your R environment; the help file documents each of them.]

Looking at the first example in the help file, we see the following call to mcsim() with replacement=FALSE, assigned to ‘nrsim’ for ‘no-replacement simulation’:

nrsim = mcsim("longtrend", "longtrend", n=1000, replacement=FALSE, l=1, gap=10, CI=0.95)

 

plot()

Calling plot() on the assigned mcsim object ‘nrsim’ returns the xts plot below (which will differ slightly on each simulation):

plot(nrsim, normalize=TRUE)

[Figure: simulated equity curves for nrsim, normalize=TRUE]

The same plot() call with normalize=FALSE yields a plot that converges to the same final PL.

plot(nrsim, normalize=FALSE)

[Figure: simulated equity curves for nrsim, normalize=FALSE; paths converge to the same final PL]

The reason the normalized plot does not converge to the same final number is purely a function of how mcsim() carries out normalization on the replicates in the cash return space. Since sampling without replacement means the sequences are not perfectly independent, percent returns without replacement may be a useful middle ground between cash returns with replacement (which fan out) and cash returns without replacement (which converge). Below is an example of the same call, this time with replacement:

wrsim = mcsim("longtrend", "longtrend", n=1000, replacement=TRUE, l=1, gap=10, CI=0.95)
plot(wrsim, normalize=FALSE)

[Figure: simulated equity curves for wrsim, normalize=FALSE; paths fan out]

 

hist()

From the range of histogram plots mcsim() generates we can infer expectations for the distribution of the five methods: mean, median, stddev, maxDD and sharpe. These are computed at the frequency implied by the ‘use’ parameter, with the exception of maxDD. If use=’equity’, for example, then the frequency is daily, i.e. the distribution of daily mean PL, median PL, stddev of PL and the daily quasi-Sharpe (quasi because the formula is not a true implementation of the original Sharpe ratio). Using user-specified confidence intervals (and thanks to the Central Limit Theorem) it is possible to visualize the range of values that can be expected from the backtest for the sample statistics in question. In most instances the distribution will presumably be centered around the backtest result. In the case of max drawdown, however, the simulation mean and the backtest could differ more significantly. Let’s look at some output from the longtrend demo, using the with-replacement simulation ‘wrsim’ from above.

hist(rsim, normalize=FALSE, methods="mean")

[Figure: histogram of simulated mean daily PL with 95% confidence interval, normalize=FALSE]

hist(rsim, normalize=FALSE, methods="stddev")

[Figure: histogram of simulated stddev of daily PL with 95% confidence interval, normalize=FALSE]

hist(rsim, normalize=FALSE, methods="maxDD")

[Figure: histogram of simulated max drawdown with 95% confidence interval, normalize=FALSE]

hist(rsim, normalize=FALSE, methods="sharpe")

[Figure: histogram of simulated quasi-Sharpe with 95% confidence interval, normalize=FALSE]

It is evident from the above charts that the Monte Carlo simulations produce an asymptotically normal distribution of the sample statistics (thanks to the Central Limit Theorem). In all cases except maxDD the distribution is centered around the mean, with the range of values for a 95% confidence interval clear to see. In the case of maxDD, the backtest has a max drawdown of 30k, whilst the simulation mean is closer to 40k. A strategy with any form of risk management should have a better max drawdown than the average of the replicates, which are randomized, re-ordered sequences of PL observations. In Tomasini & Jaekle (2009) the authors infer that the 99th-percentile max drawdown could be an expected outcome for your strategy with a 1% probability. Assuming your backtest employs a risk management strategy, I would argue this probability is well below 1%. Nevertheless, viewing the results with this level of conservatism may be prudent, especially considering the likely presence of many other biases not accounted for.
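To put a number on that conservatism, you could look at the empirical upper percentiles of the simulated drawdowns directly. The sketch below assumes you have already extracted the vector of per-replicate max drawdowns from the mcsim object (‘dd.sims’ is a placeholder name; the slots actually returned are documented in the help file):

# conservative drawdown expectations from the simulated distribution
# (if your drawdowns are stored as negative values, look at the lower tail, e.g. probs=c(0.01, 0.05))
quantile(dd.sims, probs=c(0.95, 0.99))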

 

print()

To interrogate the values from the charts more directly, a call to print() (which summarizes the output of the summary.mcsim() S3 method) will suffice. A recent update from Brian tidies the output by rounding to 3 decimals; note that the simulation shown below therefore differs from the results in the charts above, as the update was made after this post was first published. Below is an example with the subsequent output (all sample statistics are returned):

print(wrsim, normalize=FALSE)

[Output: rounded table of sample statistics from print(wrsim, normalize=FALSE)]

We can now see the exact values for each sample statistic.
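If you would rather work with these numbers programmatically than read them off the console, a call to summary() should, per the S3 methods described above, return the underlying table (a hedged sketch; the exact structure of the return value may differ by blotter version):

# capture the summary table as an object rather than printing it
stats.table = summary(wrsim, normalize=FALSE)
stats.table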

 

 

 

Limitations of Monte Carlo analysis, mcsim() and Portfolio PL simulations

As with any model there are assumptions. In the case of Monte Carlo analysis using mcsim(), one of the most significant assumptions is that returns follow a Gaussian (normal) distribution. Depending on the distribution of the sample statistic of interest, you may be able to propose a more suitable distribution. The outsized impact of ‘fat tail’ events should also not be underestimated, with one of the world’s earliest and most successful hedge funds succumbing to exactly these limitations.

Another limitation of Monte Carlo analysis is that the sample it draws from may itself be overfit or overly optimized for in-sample out-performance. It is the job of the analyst to manage this risk, and hopefully the recently added quantstrat functions degfreedom(), deflated.Sharpe() and haircut.Sharpe() will leave the analyst better equipped to do so. For a useful demonstration of their use in quantstrat see the macdParameters demo.

Sampling from Portfolio PL may also be unrealistic, since no consideration is given to market regimes and strategy-specific stylized facts such as individual positions’ time in and out of the market. Where this data is available, however, the analyst has another recently developed tool at their disposal, namely txnsim(). This will be the subject of my next post.

Thanks for reading.

Thanks must also go to my co-author, Brian Peterson, without whose work and commitment none of what I have written here would have been possible. Brian’s invaluable insights on this and so many other topics have benefited the industry to no end. Another legend worth a mention is Joshua Ulrich, who never gave up on me (or never gave away that he had) with respect to my supposed git ignorance. The R/Finance open source community is strong and I am honoured to be a part of it.


Comments on “Monte Carlo Simulation for your Portfolio PL”


  1. Great work. Thanks for adding this to blotter/quantstrat.

    A very basic question: what exactly is “post-trade cash PL”? Can you please give some examples. Thanks.


    • Thanks Ham. If you wanted to simulate a time series not generated in a blotter environment, such as P&L from production trades or backtest results from another system, then you could specify use=’cash’ and assign the relevant xts object to the cashPL parameter, ie. use=’cash’, cashPL=’your.trading.PL.xts.object’. As long as the xts object you want to simulate is in your Global Environment and in the correct format (ie. xts object with daily PL for example) then you can simulate away. Hopefully that helps?


      • Thanks for your answer. I understand it is an xts object generated outside of blotter. Unfortunately my question is even more basic. I do not understand the term “post-trade cash PL”. 🙂 What I am asking is what exactly it is. I mean, let’s say I start with $100 of initial equity on 31.12.2016 and make the following transactions (all in the same contract): buy 1 lot @10 on 01.01.2017, sell 1 lot @ 11 on 02.01.2017, buy 1 lot @9 on 07.01.2017, sell 1 lot @8 on 15.01.2017, buy 1 lot @6 on 21.01.2017, sell 1 lot @7 on 29.01.2017, … What is “post-trade cash PL”? Is it: 31.12.2016 100, 01.01.2017 100, 02.01.2017 101, …, 07.01.2017 101, …, 15.01.2017 100, …, 21.01.2017 100, …, 29.01.2017 101, … ?


  2. Hi Jasen,

    Nice post on what looks like a useful package. Just one point in your conclusion: “In the case of Monte Carlo analysis one of the most significant assumptions is that returns follow a Gaussian (normal) distribution.”

    That’s not actually true. MC simulation is fully general as a methodology and requires no assumption on return distribution. The effect of CLT on the distribution of the sum of the random variables at the end of the time horizon is very much a separate issue. And even CLT is probably not true for much of financial return data due to fat tails and dependence.

    Regards,
    Emlyn


    • Hi Emlyn, thanks for the note. You are indeed correct and I have updated the post to correctly refer to the assumption of normality as a limitation of mcsim’s usage of MC analysis. Regards, Jasen

