Flexible Asset Allocation

In my last post, I broke down the individual components to look at the performance of each factor. Although the correlation and volatility factors weren’t that attractive by themselves, combined as a whole it’s a different story.

I’ve always been a proponent of simplistic approaches in system design, as adding too many nuts and bolts in pursuit of sophistication only invites overfitting. In my opinion, when you are designing the alpha portion of your portfolio, you should look to design multiple simple strategies that are different in nature (uncorrelated). Take these return streams, overlay a portfolio allocation strategy, and you will find yourself with a decent alpha generator with a risk-adjusted return above 1. Ok, back to FAA…

Keller and Putten in their FAA system combined the signals of each factor by a simple meta-rank function. This ranking function took the following form:

meta rank = w_m * m + w_c * c + w_v * v

where m, c and v represent the factor ranks of momentum, correlation and volatility respectively, and each factor is given a weight. The meta rank is then ranked again and filtered on absolute momentum to arrive at the assets to invest in. Note that any asset that doesn’t pass the absolute momentum filter is replaced with cash (VFISX). When coding the meta-ranking function, I found that there are times when some assets share the same final meta rank. This caused problems for some rebalance periods, when the number of assets to hold would exceed the top N. I consulted the authors and they revealed that “with rank ties, we select more than 3 funds.” Below is a replication of the strategy; it is tested with daily data as opposed to the monthly data used by the authors.
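As an illustration, here is a minimal base-R sketch of such a meta rank. This is my own code, not the authors’; the weights and rank directions are assumptions for the example.

```r
# Minimal sketch of a weighted meta rank with tie handling (assumed weights).
# mom, vol, cor.avg: one value per asset, measured over the lookback window.
meta.rank <- function(mom, vol, cor.avg, w = c(1, 0.5, 0.5)) {
  r.m <- rank(-mom)     # highest momentum gets rank 1
  r.v <- rank(vol)      # lowest volatility gets rank 1
  r.c <- rank(cor.avg)  # lowest average correlation gets rank 1
  meta <- w[1] * r.m + w[2] * r.v + w[3] * r.c
  # ties.method = "min" lets tied assets share a rank, so a top-N
  # selection can return more than N funds, as the authors noted
  rank(meta, ties.method = "min")
}
```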

[Figure: FAA strategy performance]

The model results are pretty decent. One aspect I may change is the use of the cash proxy in the volatility ranking factor. Including the theoretical risk-free rate, which is supposed to have a volatility of zero, will skew the results toward cash.

A reader pointed out a small coding error I made in the last post. Don’t sweat it; it doesn’t change the performance one bit. I’ve modified the code and placed everything, including the current code, into the FAA Dropbox folder. Should you have any questions, please leave a comment below.

Thanks for reading,

Mike

Alternative Momentum Factors

Keller and Putten in their 2012 paper, “Generalized Momentum and FAA”, went on to combine multiple momentum ranking factors to form portfolios rebalanced monthly. I won’t go into detail about their strategy, as you can find a good commentary at Turnkey Analyst.

Here I took apart each ranking factor and constructed portfolios to see their individual performance. I thought this would be a good way to visualize the performance of each factor alone.

There are four portfolios, rebalanced monthly.

1. Relative Momentum- holds top n performing funds

2. Absolute Momentum- holds funds with positive momentum

3. Volatility Momentum- holds the n lowest volatility funds

4. Correlation Momentum- holds the n funds with the lowest average correlation (the average of all pairwise correlations)
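To make the four factors concrete, here is a minimal base-R sketch of how each measure could be computed over a lookback window of returns. This is my own illustration, not the paper’s code; the ranking and selection step is omitted.

```r
# ret: matrix of lookback-window returns, rows = periods, cols = funds
factor.measures <- function(ret) {
  mom <- apply(ret, 2, function(x) prod(1 + x) - 1)  # cumulative return (momentum input)
  vol <- apply(ret, 2, sd)                           # volatility input
  cmat <- cor(ret)
  diag(cmat) <- NA                                   # drop self-correlation
  avg.cor <- colMeans(cmat, na.rm = TRUE)            # average pairwise correlation
  list(mom = mom, vol = vol, avg.cor = avg.cor)
}
```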

Performance

Equity Performance

############################################################
#Flexible Asset Allocation (Keller & Putten, 2012)
#
############################################################
rm(list=ls())
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)
load.packages("TTR,PerformanceAnalytics,quantmod,lattice")

#######################################################
#Get and Prep Data
#######################################################
setwd("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA")

data <- new.env()
#tickers<-spl("VTI,IEF,TLT,DBC,VNQ,GLD")

tickers<-spl("VTSMX,FDIVX,VEIEX,VFISX,VBMFX,QRAAX,VGSIX")
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)

bt.prep(data, align='remove.na', dates='1990::2013')

#Helper
#Rank Helper Function
rank.mom<-function(x){
 if(ncol(x) == 1){
 r<-x
 r[1,1] <- 1
 }else{
 r <- as.xts(t(apply(-x, 1, rank, na.last = "keep")))
 }

 return(r)
}
#######################################################
#Run Strategies
#######################################################

source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-mom.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-abs-mom.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-vol.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-cor.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-bench.R")
models<-list()
top<-3
lookback<-80

#run models
models$mom<-mom.bt(data,top,lookback) #relative momentum factor
models$abs.mom<-abs.mom.bt(data,lookback) #absolute momentum factor
models$vol<-vol.bt(data,top,lookback) #volatility momentum factor
models$cor<-cor.bt(data,top,lookback) #correlation factor
models$faber<-timing.strategy.local(data,'months',ma.len=200) #faber
models$ew<-equal.weight.bt(data) #equal weight benchmark
#report
plotbt.custom.report.part1(models)
plotbt.transition.map(models)
plotbt.strategy.sidebyside(models)

The source code can be downloaded from my Dropbox folder; I can’t guarantee it is error free. Please leave a comment or email me if you find any mistakes.

Thanks for reading,

Mike

“Return = Cash + Beta + Alpha” -Bridgewater

What a coincidence: Zerohedge just posted a piece where Bridgewater identifies the origin of their All Weather framework. (Here)

It’s interesting to read about their thought process. Here are a few quotes I found interesting:

“Any return stream can be broken down into its component parts and analysed more accurately by first examining the drivers of those individual parts.”

“Return = Cash +Beta + Alpha”

“Betas are few in number and cheap to obtain. Alphas (ie trading strategy) are unlimited and expensive. … Betas in aggregate and over time outperform cash. There are sure things in investing. That betas rise over time relative to cash is one of them. Once one strip out the return of cash and betas, alpha is a zero sum game. ”

“there is a way of looking at things that overly complicates things in a desire to be overly precise and easily lose sight of the important basic ingredients that are making those things up”

Separately managing the beta and alpha portions of the portfolio seems like a reasonable long-term framework. For example, build a stable portfolio (beta) for the majority of your wealth and then overlay it with your desired amount of alpha to spice up the return. But it is important to understand how the two return streams (beta and alpha) interact fundamentally; for example, the factors that contribute to the return of the beta portion should be different from those driving the alpha portion. Only then can the uncorrelated return streams diversify away your risk.

 

Structural Beta and Alpha

In a recent refresher of Dalio’s interviews, I came across a term he mentioned: “structural beta.” What is it, and what insights can one gain from this concept? I did some research and reading on the subject, and here are a few things I found.

Beta as defined by the CAPM is the slope of the linear regression between the market return and the security’s return. The measure takes into account both the covariance (correlation) and the standard deviations. Mathematically,

beta_a = cov(r_a, r_m) / var(r_m) = corr(r_a, r_m) * sd(r_a) / sd(r_m)

where subscript ‘a’ represents an asset and ‘m’ represents the market. From the above equation, we can see that there are two determinants of the value of beta.

1. market volatility

2. correlation between market and asset
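The two determinants combine as beta = correlation × sd(asset) / sd(market). A quick base-R check of the equivalence between this form and the covariance form (my own sketch):

```r
# Beta computed two ways: covariance form and correlation-volatility form
beta.cov <- function(a, m) cov(a, m) / var(m)
beta.parts <- function(a, m) cor(a, m) * sd(a) / sd(m)
```

The second form makes the text’s point explicit: a low-correlation asset can still carry a large beta if its own volatility is high enough.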

With the above determinants, it is intuitive to note that although an asset may have low correlation, offering potential diversification benefits, it may still possess a sizeable beta due to the volatility of its underlying returns.

Two things are important when constructing a portfolio: return and risk. Return can be improved and risk reduced if an asset with historically low correlation is added to the portfolio. But there are times, like 2008, when things don’t follow historical averages. What I mean is that there can be assets that have low correlation but also high volatility. As an alternative, beta can be used to gauge both of these characteristics. At the portfolio level, beta may serve as an alternative measure of risk, as it carries more information (correlation and volatility) than volatility alone.

There are numerous ways to measure portfolio risk, and these metrics are used daily as ingredients for portfolio optimization that yields allocation weights. But simplistic measures, i.e. volatility, may mislead, as they can hide the true risk inherent in the portfolio.

The chart below is a traditional standard-deviation-based risk-return graph. The expected returns are probably not representative, as I only have 24 years of total return data, but I am confident the concepts are preserved.

[Figure: standard-deviation-based risk-return chart]

The next chart is through the beta lens, whereby risk is measured by beta rather than standard deviation.

[Figure: beta-based risk-return chart]

The blue line in both charts is called the cash-equity line, while the horizontal line represents the risk-free rate (proxied by SHY). If an asset lies above the cash-equity line, then the area between the asset and the line represents what is called structural alpha. This is not the typical alpha generated by skill; rather, it is the portion of return attributable to the asset itself, and it offers great diversification benefit to a portfolio. The beta-based return is the portion above the risk-free line and below the cash-equity line; this portion is theoretically replicable by a mix of cash and equity.

All in all, this view of portfolio risk and return may warrant more research; for example, what happens when we go long a portfolio of assets that show structural alpha? It is also worth noting that over the past two decades, the assets that have shown diversification benefits all evidently lie above the cash-equity line in the beta risk-return chart. The success of the permanent portfolio, for example, was attributed to holding such assets.
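Under this reading, an asset’s structural alpha is its return above the cash-equity line at its beta. A minimal sketch, with my own function names and assuming annualized inputs (this is my formulation, not Bridgewater’s):

```r
# Return on the cash-equity line at a given beta, and an asset's
# excess over it (the "structural alpha" area described above)
cash.equity.line <- function(beta, rf, eq.ret) rf + beta * (eq.ret - rf)
structural.alpha <- function(asset.ret, beta, rf, eq.ret) {
  asset.ret - cash.equity.line(beta, rf, eq.ret)
}
```

For example, an asset returning 8% with a beta of 0.5 against a 10% equity return and 2% risk-free rate sits 2% above the line.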

Code for generating the risk-return charts given an xts object. Packages: PerformanceAnalytics, SIT.

gen.risk.ret<-function(data1){
 data1<-as.xts(data1) #convert to xts
 ret<-get.roc(data1,1)
 returns<-compute.cagr(data1)
 risk<-apply(ret,2,sd)
 risk.ret<-cbind(risk,returns) #Standard Risk Return Matrix
 return(risk.ret)

}

gen.beta.ret<-function(data1,bm){
 data1<-as.xts(data1) #convert to xts
 ret<-get.roc(data1,1)
 returns<-compute.cagr(data1)
 bench<-ret[,which(colnames(ret) == bm)]
 risk<-matrix(NA,nrow=1,ncol=ncol(ret))
 for(i in 1:ncol(ret)){
 risk[,i]<-CAPM.beta(ret[,i],bench,Rf=0)
 }
 risk.return<-cbind(as.vector(risk),returns)
 rownames(risk.return)<-colnames(ret)
 colnames(risk.return)<-c("beta","returns")
 return(risk.return)
}

Hedge Fund Performance

Equity performance this year has ended with a downward move from the earlier upward push. How have hedge fund styles from different categories performed? Below are a few charts I constructed from my school’s indices.

[Figure: current-month performance]

 

[Figure: year-to-date equity performance]

The performance data are pretty representative of the strategies employed by hedge funds. Below are the historic equity curves of all the strategies back to 1994. Although the index is an aggregate of many different hedge funds, I feel that hedge fund performance is affected by stress factors similar to those affecting equities.

[Figure: equity curves since inception]

Some research I finally have time to do relates to correlation tightening. This effect, as seen in 2008, is effectively the enemy of diversification. Some questions I have been ruminating on are:

-If during stress periods asset classes returns share high correlation, what measures can be taken to reduce such risk?

-Which asset classes provide the most diversification during such periods, and how do their returns relate to equity-like assets during normal times?

-Which asset classes, on the other hand, offer no diversification benefits in bad times?

In normal times, we are all hedge fund superstars, as returns are achieved so easily thanks to the upward drift. It is for times of market shock that we should build our portfolios.

 

Cheers

Optimal Stock Bond Allocation

It’s been more than a month since I last posted. Time flies when you are busy working on the things you enjoy.

After reading a piece on the lacklustre performance of hedge funds versus a standard 60/40 portfolio mix, I got thinking more deeply about stock-bond allocation. In this post I am going to dissect the internal workings of the equity-bond allocation and see if there are any tactical overlays that can improve a static mix.

Data: I will be using monthly data from Datastream and Bloomberg; the S&P 500 and 10-year Treasuries, all total return, from January 1988 to May 2012.

Here is a backtest helper function wrapped around SIT:

require(TTR)
require(quantmod)

setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

btest<-function(data1,allocation,rebalancing){
  data <- list(prices=data1[,1:2])
  data$weight<-data1[,1:2]
  data$weight[!is.na(data$weight)]<- NA
  data$execution.price<-data1[,1:2]
  data$execution.price[!is.na(data$execution.price)]<-NA
  data$dates<-index(data1[,1:2])
  prices = data$prices   
  nperiods = nrow(prices)
  data$weight[] = NA  
  data$weight[1,] = allocation
  period.ends = seq(1,nrow(data$prices),rebalancing)-1 
  period.ends<-period.ends[period.ends>0]
  data$weight[period.ends,]<-repmat(allocation, len(period.ends), 1)
  capital = 100000
  data$weight[] = (capital / prices) * data$weight
  model = bt.run(data, type='share', capital=capital)
  return(model)
}
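For intuition, the drift-then-reset mechanics that btest delegates to SIT can be sketched in plain base R. This is my own simplified version under assumed inputs; it ignores SIT’s share-based execution details.

```r
# ret: matrix of per-period returns (rows = periods, cols = assets)
# target: target weights summing to 1; every: rebalance every n periods
rebalance.growth <- function(ret, target, every) {
  w <- target                       # dollar value per sleeve, starting at $1 total
  path <- numeric(nrow(ret))
  for (t in 1:nrow(ret)) {
    w <- w * (1 + ret[t, ])         # sleeves drift with their own returns
    equity <- sum(w)
    if (t %% every == 0) w <- equity * target  # reset sleeves to the target mix
    path[t] <- equity
  }
  path                              # total equity after each period
}
```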

The btest function simply runs the backtest for the provided allocation and rebalancing period for two assets. To check the performance of every equity allocation from 0 to 1 in increments of n%, I will be using the following wrapper function:

sensitivity<-function(data1,rebalancing,allocation.increments){
  equity.allocation<-seq(0,1,allocation.increments) #Y-axis
  eq = matrix(NA, nrow=nrow(data1), ncol=1)

  for(i in equity.allocation) {
    allocation <- matrix(c((1-i),i), nrow=1)
    temp<-btest(data1,allocation,rebalancing)
    eq<-cbind(eq,temp$equity)
  }
  eq<-eq[,-1]
  colnames(eq) = 1-equity.allocation

  cagr<-matrix(NA,nrow=ncol(eq),ncol=1)
  for(i in 1:ncol(eq)){
    cagr[i]<-compute.cagr(eq[,i])
  }
  cagr<-as.data.frame(cbind(1-equity.allocation,cagr))
  colnames(cagr)<-c('Equity Allocation','CAGR')

  sharpe<-matrix(NA,nrow=ncol(eq),ncol=1)
  eq.ret<-ROC(eq)
  eq.ret[is.na(eq.ret)]<-0
  for(i in 1:ncol(eq)){
    sharpe[i]<-compute.sharpe(eq.ret[,i])
  }
  sharpe<-as.data.frame(cbind(1-equity.allocation,sharpe))
  colnames(sharpe)<-c('Equity Allocation','Sharpe')
  return(list(eq=eq,cagr=cagr,sharpe=sharpe))
} 

Running the sensitivity function in increments of 5% provides:

 

As you increase the equity allocation, you become more aggressive, which is clearly displayed in the chart above. What is the optimal allocation based on the highest CAGR or Sharpe? The sensitivity function also returns a list with the performance of each equity allocation, and the chart:

In the above chart, I’ve graphed two lines, each with its own respective axis. From the chart, the equity allocation that provided the highest Sharpe ratio is ~0.25, which is close to what a risk parity allocation would have held historically.

Diving deeper, I went on to check each successive 12-month period’s highest-Sharpe equity allocation from 1988 to 2012. In other words, this takes us back in time!

 

 

From this chart, the max-Sharpe allocation varied significantly from year to year. Whenever a crisis hit, the allocation to bonds dominated that to equities, and vice versa in bull markets. This intuitively makes sense, as you would want to be in risk-off mode during bear markets.

The last chart shows the rolling 12-month performance of each equity allocation from 0 to 1 in increments of 5%.

 

In another post, I will follow up on whether there are any tactical overlays that can improve performance.

 

 

Diversification through Equity Blending

In a sound asset allocation framework, it is never a good idea to overweight the risky portion of the portfolio. One example is the traditional 60/40 portfolio, whereby an investor allocates 60% to equities and 40% to bonds. Such an allocation may intuitively make sense, as you “feel” diversified, but when extraordinary events happen you will be less protected. Below is the performance of the 60/40 allocation rebalanced monthly since 2003. Note I used SPY and IEF for the mix.

In this post, I would like to show some ideas that reduce risk and increase return by bringing in a different type of return stream. Traditional asset allocation focuses mainly on optimal diversification across assets; here I will focus on allocation across strategies. From my own research, there are only so many asset classes the individual can mix to form portfolios, not to mention the less-than-reliable cross-correlations between asset classes in market turmoil (2008). To bring stability to the core portfolio, I will incorporate Harry Browne’s Permanent Portfolio. This return stream is composed of an equal-weight allocation to equities, gold, bonds, and cash. For the more aggressive part, I will use daily equity mean reversion (RSI2). Note that a basic strategy overlaid on an asset can produce a return stream with diversification benefits. Below are three equity curves: black, green and red represent mean reversion, a 60/40 allocation of both strategies, and the permanent portfolio respectively.

To summarize, I have taken two return streams derived from strategies traded over assets and combined them to form a portfolio. The allocation is 40% to the risky asset (mean reversion/MR) and 60% to the conservative asset (Permanent Portfolio/PP). And here are the performance metrics.

Traditional represents the traditional 60/40 allocation to equities and bonds, while B-H represents buy-and-hold for the S&P 500. This superficial analysis is only meant to illustrate the powerful idea of blending assets and trading strategies. When the traditional search for diversification becomes fruitless, incorporating different strategies can have a huge impact on your underlying performance.
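When rebalanced every period, the blend described above reduces to a weighted sum of the two per-period return streams. A minimal sketch with my own (assumed) function names:

```r
# Per-period return of a portfolio holding w.mr in mean reversion (MR)
# and the remainder in the Permanent Portfolio (PP), rebalanced every period
blend.returns <- function(mr.ret, pp.ret, w.mr = 0.40) {
  w.mr * mr.ret + (1 - w.mr) * pp.ret
}
```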

I will come back later for the R code, as it’s pretty late and I have class tomorrow!

Parameter Insensitive Models

In my opinion there are two enemies of successful system development. One is the exploitability of the anomaly you are trying to extract profit from. The other is the parameters you choose to exploit the anomaly with. The exploitability aspect is something you can’t control much, as the profitability of any anomaly is in constant flux. One example is the profitability of trend following in general: when markets are choppy, it’s tough for any trend follower to extract sizeable profits.

The area you do have absolute control over is the parameters you choose to trade with. The more varied the parameter selection, the more robust you are, as the added diversification reduces the probability of loss if any one parameter set suffers a lack of performance. Parameters here can literally be the lookback days of an MA crossover strategy, or the idea can extend to similar models like breakouts.

In the following experiment, I will test the performance of 5 different models. They are all mean reversion in nature.

Model1 (rsi1): RSI(2) 50/50

Model2 (rsi2): RSI(2) Buy: <30 Short: >70

Model3 (rsi3): RSI(2) Buy: <30 Sell: >50 Short: >70 Cover: <50

Model4 (no.reb): no rebalance but equal weight

Model5 (reb): equal weight rebalance weekly
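The three RSI rules can be sketched as signal functions in base R. This is my own reading of the rules above; Model3 needs a small state machine because its entry and exit levels differ.

```r
rsi.5050 <- function(rsi) ifelse(rsi < 50, 1, -1)                         # Model1: long below 50, short above
rsi.extreme <- function(rsi) ifelse(rsi < 30, 1, ifelse(rsi > 70, -1, 0)) # Model2: extremes only
rsi.semi <- function(rsi) {                                               # Model3: stateful entries/exits
  pos <- numeric(length(rsi)); p <- 0
  for (t in seq_along(rsi)) {
    if (p == 0 && rsi[t] < 30) p <- 1        # enter long
    else if (p == 0 && rsi[t] > 70) p <- -1  # enter short
    else if (p == 1 && rsi[t] > 50) p <- 0   # exit long
    else if (p == -1 && rsi[t] < 50) p <- 0  # cover short
    pos[t] <- p
  }
  pos
}
```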

Parameter-insensitive models rest on the idea that no one knows what the future holds or how each parameter will perform. Instead of relying on past data to select something that “was” consistent, parameter-insensitive models try to avoid putting all eggs in one basket. The following is the equity curve of the strategy.

The focus should be on the bold equity curve, which rebalances weekly. From the graph, it is very much correlated with the other equity curves, but it is smoother than any individual strategy’s curve. What I am trying to convey is that the return of any strategy is attributable to the underlying health of the anomaly (something you cannot control) plus the efficiency of the parameters used to extract profit (something you can control). The next chart is the drawdown.

If we had unfortunately chosen to trade rsi2 (blue), our drawdowns would have been markedly different. Next, a stacked horizon plot of rolling 252-day returns.

The first three are the models rsi1, rsi2 and rsi3, and the fourth and fifth are no-rebalance and rebalance. As you can see, the overall performance is reduced, but in times when certain individual models underperform, the aggregate rebalancing model is able to mitigate this quite successfully. And finally, the numbers…

One little experiment cannot prove anything. I am still trying the idea out in many different ways and hope that through further research, I will arrive at some more concrete conclusions.

# RSI parameter-insensitive model
# tests rebalancing to equal weight versus holding the initial weights constant
# set the working directory correctly and import your own equity curves
# for blending, I passed in three equity curves

require(TTR)                  #ROC
require(zoo)
require(xts)
require(PerformanceAnalytics) #Drawdowns, maxDrawdown
require(latticeExtra)         #horizonplot
require(ggplot2)

data<-read.csv("rsi.csv") #set your own input files

#conversion to zoo object
data$date<-as.Date(data$date,"%Y-%m-%d")
rsi2.50.50<-zoo(data$rsi2.50,data$date)
rsi2.extreme<-zoo(data$rsi2.extreme,data$date)
rsi2.semi<-zoo(data$rsi2.semi,data$date)
data<-merge(rsi2.50.50,rsi2.extreme)
data<-merge(data,rsi2.semi)

names(data)<-c("rsi1",'rsi2','rsi3')
ret<-ROC(data)
ret[is.na(ret)]<-0

#normalize equity curves
ret$rsi1.equity<-cumprod(1+ret$rsi1) #simulated equity
ret$rsi2.equity<-cumprod(1+ret$rsi2)
ret$rsi3.equity<-cumprod(1+ret$rsi3)
ret$equity<-ret$rsi1.equity+ret$rsi2.equity+ret$rsi3.equity #add them together

ret$equity<-ROC(ret$equity)
ret$equity[is.na(ret$equity)]<-0
ret$equity<-cumprod(1+ret$equity)

rsi.equity1<-ret[,-(1:3)] #same allocation through time
rsi.equity2<-as.xts(rsi.equity1[,-4])

###############################
#Rebalancing of equity
###############################
# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# prep input
#******************************************************************
data <- list(prices=rsi.equity2,
 rsi1=rsi.equity2$rsi1,
 rsi2=rsi.equity2$rsi2,
 rsi3=rsi.equity2$rsi3) #need to create new list to store stuff
#weight with n column as input data
data$weight<-rsi.equity2
data$weight[!is.na(data$weight)]<- NA
#execution price
data$execution.price<-rsi.equity2
data$execution.price[!is.na(data$execution.price)]<-NA
#dates
data$dates<-index(rsi.equity2)

#*****************************************************************
# Rebalancing Algo
#******************************************************************
prices = data$prices
nperiods = nrow(prices)
target.allocation = matrix(c(0.33,0.33,0.33), nrow=1)

# Rebalance periodically
models = list()

period<-'weeks' #change to whatever rebalancing period you want
data$weight[] = NA
data$weight[1,] = target.allocation

period.ends = endpoints(prices, period)
period.ends = period.ends[period.ends > 0]
data$weight[period.ends,] = repmat(target.allocation, len(period.ends), 1)

capital = 100000
data$weight[] = (capital / prices) * data$weight

#this only works when your input prices are an xts object rather than zoo
models[[period]] = bt.run(data, type='share', capital=capital)

#*****************************************************************
# Create Report
#******************************************************************
#all 5 equity curves
equity1<-merge(rsi.equity1,models$weeks$equity)
names(equity1)<-c('rsi1','rsi2','rsi3','no.reb','reb')
equity1<-as.xts(equity1)

#print out all the strategies return
for(i in 1:ncol(equity1))
{
 ret<-ROC(equity1[,i])
 ret[is.na(ret)]<-0
 ret<-as.xts(ret)
 print(compute.cagr(equity1[,i]))
 print(maxDrawdown(ret))
 print(compute.cagr(equity1[,i])/maxDrawdown(ret)) #CAGR over max drawdown
 print(compute.sharpe(ret))
}

ret<-ROC(equity1,252)
horizonplot(ret,origin=0,scales = list(y = list(relation = "same")),colorkey=T)

df<-equity1
df <- data.frame( time=index(df),rsi1=df$rsi1,rsi2=df$rsi2,rsi3=df$rsi3,no.reb=df$no.reb,reb=df$reb)
#plot Equity

ggplot(df,aes(df$time)) +
 geom_line(aes(y=rsi1,colour="rsi1")) +
 geom_line(aes(y=rsi2,colour="rsi2")) +
 geom_line(aes(y=rsi3,colour="rsi3")) +
 geom_line(aes(y=no.reb,colour="no.reb")) +
 geom_line(aes(y=reb,colour="reb"),size=1.2)
#plot drawdown: compute each strategy's drawdown series
dd.ret<-ROC(equity1)
dd.ret[is.na(dd.ret)]<-0
df1<-Drawdowns(as.xts(dd.ret),geometric=T)
df1<-data.frame( time=index(df1),rsi1=df1$rsi1,rsi2=df1$rsi2,rsi3=df1$rsi3,no.reb=df1$no.reb,reb=df1$reb)

ggplot(df1,aes(df1$time)) +
 geom_line(aes(y=rsi1,colour="rsi1")) +
 geom_line(aes(y=rsi2,colour="rsi2")) +
 geom_line(aes(y=rsi3,colour="rsi3")) +
 geom_line(aes(y=no.reb,colour="no.reb")) +
 geom_line(aes(y=reb,colour="reb"),size=1.2)

Systematic Edge Backtest Engine: plotting features

Here are some new additions to the plotting functions of the backtester. They are a great way of visualizing the performance of a trading strategy. As I wanted ease and simplicity, all of the following plots can be done in one line of code. Each image is followed by the code that generated it.

plot.equity(equity,bench,testParameters)

 

plot.drawdown(perf,bench)

plot.horizon(equity,bench,testParameters,250)

plot.calendarHeatMap(equity)

The first two plots are self-explanatory. The third is called a horizon plot, a compact way of visualizing time series data. The intuition behind the way I use it is to chart the rolling n-period (250 in my case) performance of a strategy relative to its benchmark. The darker the red, the greater the underperformance; the darker the blue, the greater the outperformance. If you look at the 2008 crisis in the plot, you can see that in general the strategy’s colour is a lighter red than the benchmark’s, showing hedging at its best.

The last chart is a calendar heat-map. It plots the n-period log return over the entire test period and shows how the strategy has performed across different periods of time. This plot is great for detailed consistency checking, as it is a great way of peeling into the strategy’s performance as it cycles through time; a true time machine, in my opinion.

A creative performance measure derived from the calendar heat-map is the percentage of daily returns above a user-defined threshold. The threshold by definition should be the risk-free rate. When comparing strategies, the higher the percentage of days of outperformance, the better.
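The measure is simple to compute; a one-liner sketch with an assumed name:

```r
# Fraction of daily returns exceeding a threshold (e.g. the daily risk-free rate)
pct.above <- function(daily.ret, threshold = 0) mean(daily.ret > threshold)
```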

Much more to come. Enjoy!

 

 

Systematic Edge Backtest Engine

In this post, I’d like to share some tools that individual investors can use to backtest trading strategies. R is a powerful computing language with many statistical tools. I have compiled several open-source codebases and put together what I believe to be an easy-to-use backtesting interface. My functions act as wrappers that hide the unnecessary details from the end user.

The current framework is very slow, but it is powerful, as it is capable of testing multiple strategies across multiple portfolios. For users who want more customization, please visit my GitHub page for the detailed code.

This is an example of simulating two trading strategies on a single portfolio; both are based on Faber’s moving average. The first strategy is long when close > SMA(100) and, for the sake of simplicity, the second when close > SMA(200). Note, the strategy is for demonstration purposes only. Here are the summary images created automatically.

The tools are currently in their infancy. There are a lot more features I would like to add to the toolbox. The code is open source, and I will organize it and put it up on GitHub. The following is the code that generated the backtest, and it is all done in fewer than 20 lines! Any comments would be appreciated.

# Multi-Strategy Testing

#################################################################################
# 1. initialize backtest parameters
# (start date, end date, initial equity)
# and test name
#################################################################################
test.name<- "test1"
testParameters<-bt.testParameter("2011-01-01","2012-01-01",100000)
#################################################################################
# 2. create a list of symbols
#################################################################################
symbols<-c("SPY")
#################################################################################
# 3. get the adjusted data
#################################################################################
bt.dataSetup(symbols, "USD", 1, testParameters,adjust="True")

SPY.Strategy1<-SPY
SPY.Strategy2<-SPY

#################################################################################
# 4. configure indicator
# use ttr package to link indicators to each instrument
#################################################################################
SPY.Strategy1$SMA.100 = SMA(Cl(SPY.Strategy1), 100)
SPY.Strategy2$SMA.200 = SMA(Cl(SPY.Strategy2), 200)
n<-200
symbols<-c("SPY.Strategy1","SPY.Strategy2")

#################################################################################
# 5. Set up trading rules
# signal based trading rule
#################################################################################

#for multi strategy
SPY.Strategy1$signal =(Cl(SPY) > SPY.Strategy1$SMA.100) + 0
SPY.Strategy2$signal =(Cl(SPY) > SPY.Strategy2$SMA.200) + 0
#################################################################################
# 6. Setup Portfolio and Account Object in Blotter
#################################################################################
bt.globalTestSetup(test.name, symbols, testParameters)
#################################################################################
# 7. Run Strategy
#################################################################################
bt.run(symbols,test.name,n)

#################################################################################
# 8. Performance Graphing
#################################################################################

equity<-getAccount(test.name)$summary$End.Eq
perf<-computePerformanceStatistics(equity)
bench<-generate.benchmark(c("SPY"),testParameters)
plot.calendarReturn(perf)
plot.equity(equity,bench,testParameters)
