Standard Errors of Estimated Performance Measures

Part of the document Introduction to statistical methods for financial models (pages 276-285)

The estimated portfolio performance measures we have discussed are just estimates of a portfolio’s “true” performance measures. Therefore, when interpreting such results, it is important to assess the uncertainty in these estimates by calculating their standard errors.

For some statistics, such as a sample mean or an estimated regression parameter, calculating the standard error is straightforward. For instance, for the sample mean return R̄p, the standard error is given by Sp/√T, where Sp is the sample standard deviation of the returns and T is the sample size. In some cases, the standard error of a statistic is given as part of the R output of the function used to calculate the statistic. For instance, the standard error of Jensen’s alpha, the estimated intercept parameter in the market model, is available from the regression output from estimating the market model using the lm function.

One role of the standard error of an estimate is in calculating an approximate confidence interval for a parameter; for instance, an approximate 95% confidence interval for μp is given by

R̄p ± 1.96 Sp/√T.

Example 8.18 Consider the returns on the Vanguard U.S. Growth Portfolio, which are stored in the variable vwusx. Output from estimating the market model using the lm function includes the table

Coefficients:

             Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.000377   0.001687    0.22     0.82
sp500       1.122536   0.043467   25.83   <2e-16 ***

Therefore, the estimate of Jensen’s alpha for this fund is 0.000377 and the standard error is 0.001687, leading to an approximate 95% confidence interval of

0.000377 ± 1.96(0.001687) = (-0.00293, 0.00368).
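The calculation above can be scripted directly from the lm output. A minimal sketch, using simulated excess returns in place of the vwusx and sp500 data (the variables mkt and ret are illustrative stand-ins, not from the text):

```r
# Approximate 95% confidence interval for Jensen's alpha, extracted from
# the lm coefficient table. Data are simulated for illustration only.
set.seed(1)
mkt <- rnorm(60, mean = 0.005, sd = 0.04)          # simulated market excess returns
ret <- 0.0005 + 1.1 * mkt + rnorm(60, sd = 0.01)   # simulated fund excess returns

fit <- lm(ret ~ mkt)
est <- summary(fit)$coefficients                   # table of estimates and std. errors
alpha <- est["(Intercept)", "Estimate"]
alpha.se <- est["(Intercept)", "Std. Error"]
ci <- alpha + c(-1.96, 1.96) * alpha.se            # approximate 95% CI for alpha
ci
```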

For more general statistics, such as an estimated Sharpe ratio, simple expressions for the standard error are not available. Hence, here we consider a general method of computing a standard error based on Monte Carlo simulation.

Monte Carlo simulation was considered in Section 6.7. The approach used in that section was based on assumptions regarding the distribution of the data. One drawback of such an approach is that the results depend on the assumptions used; here we use the observed data as the basis for the simulated data so that no such distributional assumptions are required.

We begin with a simple example to illustrate the mechanics of the Monte Carlo procedure. Let Rj,1, Rj,2, ..., Rj,T denote the returns on a given asset and consider calculating the standard error of the sample mean return R̄j. Of course, in this case, we know that this standard error is given by Sj/√T, where Sj is the sample standard deviation of the returns. However, suppose that such a formula for this standard error is not available.

The standard error of R̄j is an estimate of the standard deviation of the sampling distribution of R̄j. This sampling distribution is based on the assumption that Rj,1, Rj,2, ..., Rj,T is a random sample from some population, in this case, the (hypothetical) population of all possible returns on the asset under consideration. Hence, here we assume that the Rj,t are independent, identically distributed random variables.

Suppose that the population values are known; denote them by R̃j,k, k = 1, 2, ..., K, so that the population of return values may be written

P = {R̃j,1, R̃j,2, ..., R̃j,K}.

Then we can calculate the standard deviation of R̄j by drawing repeated samples of size T from P, calculating the sample mean of each sample, and then calculating the standard deviation of these simulated sample means of returns.

That is, for each i = 1, 2, ..., I, let

Rj,1^(i), Rj,2^(i), ..., Rj,T^(i)

be a sample drawn with replacement from P, and let

R̄j^(i) = (Rj,1^(i) + Rj,2^(i) + ··· + Rj,T^(i))/T

denote the corresponding sample mean. Then

R̄j^(1), R̄j^(2), ..., R̄j^(I)

is a sample from the distribution of R̄j.

The standard error of R̄j may now be calculated as the sample standard deviation of the values R̄j^(i), i = 1, 2, ..., I. Provided that I is sufficiently large, that is, provided that we draw a sufficiently large number of samples from P, the standard error calculated in this way will be an accurate estimate of the standard deviation of the sampling distribution of R̄j.
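The procedure just described can be sketched in a few lines of R. Here the population P is a hypothetical set of 10,000 simulated returns, since in practice P is not available:

```r
# Monte Carlo estimate of the standard deviation of the sampling
# distribution of the sample mean, assuming the population P is known.
set.seed(1)
P <- rnorm(10000, mean = 0.01, sd = 0.05)   # hypothetical population of returns
Tsize <- 5                                  # sample size T
I <- 2000                                   # number of Monte Carlo samples

# draw I samples of size T (with replacement) from P; record each sample mean
means <- replicate(I, mean(sample(P, Tsize, replace = TRUE)))
sd(means)   # close to sd(P)/sqrt(Tsize)
```

The result approximates the known answer sd(P)/√T, which is the point of the illustration: the same recipe works for statistics with no such formula.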

The flaw in this approach, of course, is that we do not have the population P. In Section 6.7, we dealt with this issue by making some assumptions regarding the distribution of the data and used those assumptions to draw the Monte Carlo samples. Here we use the information regarding P provided by the observed values Rj,1, Rj,2, ..., Rj,T themselves.

Let PO = {Rj,1, Rj,2, ..., Rj,T} denote the set of observed return values.

Then we may estimate the standard error of R̄j by replacing the hypothetical population P of return values by the observed set of return values, PO. This procedure is known as the bootstrap because we are apparently estimating the standard error by “pulling ourselves up by our bootstraps”; that is, we estimate the standard error without any assistance in the form of distributional assumptions.

Example 8.19 Consider estimating the mean return on the New Horizons Fund; recall that five years of monthly returns on this mutual fund are stored in the variable prnhx. For illustration, we will use only the first five of these values, which are stored in prnhx5:

> prnhx5<-prnhx[1:5]

> prnhx5

[1] -0.0325 0.0452 0.0830 0.0398 -0.0646

The sample mean of these five values is 0.0142 with a standard error of 0.0272:

> mean(prnhx5)
[1] 0.0142

> sd(prnhx5)/(5^.5)
[1] 0.0272

Now consider the simulation-based approach to estimating this standard error. The function sample may be used to draw a random sample from a set of integers. Specifically, sample(5, replace=T) draws a random sample with replacement from the set {1, 2, 3, 4, 5}:

> samp<-sample(5, replace=T)

> samp

[1] 2 1 1 5 3

Using the sampled integers as the indices of the vector prnhx5 yields a random sample with replacement from the set of return values in prnhx5:

> prnhx5[samp]

[1] 0.0452 -0.0325 -0.0325 -0.0646 0.0830

The sample mean of prnhx5[samp] yields a simulated value of the sample mean return R̄j for this asset:

> mean(prnhx5[samp])
[1] -0.0003

This procedure may be repeated multiple times; for example,

> mean(prnhx5[sample(5, replace=T)])
[1] 0.0286

> mean(prnhx5[sample(5, replace=T)])
[1] 0.0142

and so on.

Note that each time mean(prnhx5[sample(5, replace=T)]) is calculated, a new set of random numbers is drawn. Suppose we perform this procedure 1000 times, storing the sample means in the variable prnhx5.boot; these values represent a type of random sample drawn from the distribution of the sample mean of five returns on the New Horizons Fund. Here are the first eight values.

> prnhx5.boot[1:8]

[1] -0.0003  0.02860  0.01420  0.05270 -0.15400  0.03400
[7]  0.01523  0.00447

The sample standard deviation of prnhx5.boot yields an estimate of the standard error of R̄j for this fund.

> sd(prnhx5.boot)
[1] 0.0247
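The repeated calculation can be automated with replicate; a sketch that regenerates a vector like prnhx5.boot (the random draws, and hence the resulting standard error, will differ slightly from those shown in the text):

```r
# Bootstrap standard error of the sample mean via 1000 resamples of the
# five returns in prnhx5; draws differ from those used in the text.
prnhx5 <- c(-0.0325, 0.0452, 0.0830, 0.0398, -0.0646)
set.seed(1)
prnhx5.boot <- replicate(1000, mean(prnhx5[sample(5, replace = TRUE)]))
sd(prnhx5.boot)   # bootstrap standard error, roughly 0.024
```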

Note that the bootstrap standard error is close to, but not exactly the same as, the value given by the usual formula for the standard error of the sample mean, 0.0272. There are two reasons for the difference. One is that the sample standard deviation uses a divisor of T − 1; it may be shown that the estimate of the standard deviation implicitly used by the bootstrap method is equivalent to the one with a divisor of T. The effect of these different divisors is highlighted in the example because in that case T = 5. In more realistic settings, such as the analysis of five years of monthly data, T = 60 and √(60/59) ≈ 1.0084, so that the difference is unlikely to be important.

The other reason for the difference between the bootstrap standard error and the usual value is that the bootstrap method is based on a random sample.

If the method is repeated, a different standard error will be obtained. For instance, the bootstrap method was repeated three times, with results

> sd(prnhx5.boot1)
[1] 0.0237

> sd(prnhx5.boot2)
[1] 0.0245

> sd(prnhx5.boot3)
[1] 0.0246

If a very large bootstrap sample size is used, we expect that the result will be closer to that obtained by the usual method. For example, prnhx5.boot10k contains a random sample of size 10,000 from the distribution of R̄j for the New Horizons Fund.

> sd(prnhx5.boot10k)
[1] 0.0241

Hence, note that √(5/4) ≈ 1.118, so that, after accounting for the difference in divisors, the result, 0.0241(1.118) ≈ 0.0269, is nearly identical to the 0.0272 obtained from the usual formula.

Obviously, the bootstrap method is not needed to calculate the standard error of a sample mean. However, it is extremely useful for calculating the standard error of more complicated statistics for which a simple formula is not available. Although it is possible to carry out the calculations for any statistic by following the procedure described earlier for the sample mean, fortunately, there are convenient R functions available for that purpose.
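Before turning to those functions, note that the manual procedure generalizes immediately. A sketch of a hand-rolled bootstrap standard error for an arbitrary statistic (the helper name boot.se and the simulated returns are illustrative, not from the text):

```r
# Generic bootstrap standard error: stat is any function of a data vector.
boot.se <- function(x, stat, I = 1000) {
  reps <- replicate(I, stat(x[sample(length(x), replace = TRUE)]))
  sd(reps)
}

# Example: bootstrap SE of a Sharpe ratio from simulated excess returns
set.seed(1)
x <- rnorm(60, mean = 0.01, sd = 0.05)
boot.se(x, function(r) mean(r) / sd(r))
```

The same function handles a mean, a Sharpe ratio, or any other statistic of a single return series; only the stat argument changes.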

Here we will use the function boot in the package boot (Canty and Ripley 2015). This function takes three arguments (there are other, optional arguments for which we will use the default values). The most important of these is a function calculating the statistic of interest for a given set of data.

Example 8.20 Consider estimation of the Sharpe ratio based on a sequence of excess returns; we will write a function Sharpe to compute the Sharpe ratio.

A function to be used in boot must take two arguments: the data, in the form of a vector or matrix, and the indices of the data values to be used in the computation, similar to the way the indices vector was used in the sample mean example above.

Consider the function

> Sharpe<-function(x, ind){mean(x[ind])/sd(x[ind])}

This function takes the values in x corresponding to the indices in the vector ind and uses those values to compute the Sharpe ratio. For example, consider the excess return data in the vector prnhx. To use all the data, we set ind to 1:60, the vector of integers from 1 to 60.

> Sharpe(prnhx, 1:60)
[1] 0.367

This yields the same result as computing the Sharpe ratio directly for the data in prnhx:

> mean(prnhx)/sd(prnhx)
[1] 0.367

If 1:60 is replaced by 1:5, the result is the Sharpe ratio based on the first five values; recall that these are stored in the variable prnhx5.

> Sharpe(prnhx, 1:5)
[1] 0.233

> mean(prnhx5)/sd(prnhx5)
[1] 0.233

The other arguments to boot are data, the data used in calculating the statistic of interest, and R, the number of bootstrap replications to be used.

Example 8.21 Consider estimation of the Sharpe ratio for the New Horizons Fund. To calculate the standard error of the estimated Sharpe ratio for the data in prnhx based on a bootstrap sample size of 1000, we use the command

> library(boot)
> boot(prnhx, Sharpe, 1000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original    bias    std. error
t1*    0.367 0.00582         0.138

The output gives the value of the estimate, under the heading “original”; hence, the estimated Sharpe ratio for these data is 0.367, as calculated previously. The standard error, given under the “std. error” heading, is 0.138.

The output of the boot function includes an estimate of the bias of the estimator; recall that the bias is the expected value of the estimator minus the true value of the parameter.

A bias-corrected estimate may be formed by subtracting the bias from the estimate; for example, a bias-corrected estimate of the Sharpe ratio for the New Horizons Fund is 0.367 - 0.006 = 0.361. Whenever the bias is small relative to the standard error, the impact of the bias correction is small and, hence, it may be ignored. A simple rule of thumb is that the estimated bias may be ignored when it is less than one-fourth of the standard error; of course, such a guideline will not be appropriate in all cases.
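The bias correction and the rule of thumb can be expressed in terms of the components of a boot object (for a boot result b, the original estimate is b$t0 and the replicates are b$t). A sketch with hypothetical replicate values standing in for b$t:

```r
# Bias-corrected estimate from bootstrap output: t0 is the original
# estimate, t the vector of bootstrap replicates (hypothetical values).
set.seed(1)
t0 <- 0.367
t <- t0 + rnorm(1000, mean = 0.006, sd = 0.138)   # stand-in for b$t

bias <- mean(t) - t0          # estimated bias
corrected <- t0 - bias        # bias-corrected estimate
se <- sd(t)                   # bootstrap standard error
abs(bias) < se / 4            # rule of thumb: bias ignorable when TRUE
```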

The usefulness of the bootstrap method arises from the fact that it can be applied to a wide range of statistics, by modifying the function used as the argument to boot. For instance, it may be applied when the statistic under consideration depends on the returns of more than one asset; this is illustrated in the following example.

Example 8.22 Consider calculation of the standard error of the estimated Treynor ratio for the New Horizons Fund.

The first step in using the bootstrap method is to construct a function that calculates the estimated Treynor ratio based on a set of returns, together with a vector of indices. The complication in this example is that the Treynor ratio depends on two sets of returns: the returns on the asset under consideration and the returns on the market index, used to calculate β̂p. Hence, we take the data for the estimation procedure to be a matrix, in which the first column contains the (excess) returns on the asset and the second column contains the (excess) returns on the market index, in this case, the S&P 500 index. The indices variable will then select the rows of the matrix to be included in the estimate. This approach is implemented in the following function:

> Treynor<-function(rmat, ind){

+ ret<-rmat[ind, 1]

+ mkt<-rmat[ind, 2]

+ beta<-lm(ret~mkt)$coefficients[2]

+ mean(ret)/beta}

In this function, the return data are input in the matrix rmat, which is assumed to have two columns, the first with the return data for the asset and the second with the return data for the market index. The variable ind contains the indices of the returns to be used in the estimation of the Treynor ratio. The first two lines of the function extract the relevant return data using ind and place them in two variables, ret and mkt, which contain the return data for the asset and for the market index, respectively, corresponding to ind. The third line obtains the estimate of beta for the data in ret and mkt, and the final line returns the estimate of the Treynor ratio.

For example, using the function with the first argument taken to be cbind(prnhx, sp500), the matrix formed by combining prnhx and sp500 as column vectors, and taking the second argument to be the sequence of integers 1:60, yields the estimated Treynor ratio for the New Horizons Fund

> Treynor(cbind(prnhx, sp500), 1:60)
0.01550

in agreement with what was obtained in Example 8.15.

We are now in a position to apply the boot function. Suppose we are interested in estimating the standard error of the estimated Treynor ratio for the data in the variable prnhx.

This is obtained from the command

> boot(cbind(prnhx, sp500), Treynor, 10000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original      bias    std. error
t1*   0.0155 -1.21e-05       0.00534

Therefore, the standard error for the estimated Treynor ratio for the New Horizons Fund is 0.00534; the estimated bias is very small relative to the standard error and, hence, it may be ignored.

Comparison of Portfolios

A common goal in calculating measures of portfolio performance is to compare portfolios. Hence, we may be interested in estimating the difference between measures of performance for two portfolios. Estimation of such a difference is straightforward: we may estimate the difference in performance measures by the difference of the corresponding estimates. To calculate the standard error of such a difference, we may again use the bootstrap procedure, as implemented in the boot function, by defining the inputs to boot appropriately. This is illustrated in the following example.

Example 8.23 Suppose that we are interested in comparing the Sharpe ratios of the U.S. Growth Portfolio and the New Horizons Fund. The estimated Sharpe ratios are 0.287 for the U.S. Growth Portfolio and 0.367 for the New Horizons Fund, suggesting that the Sharpe ratio for the New Horizons Fund is larger. However, these are only estimates, and it is of interest to take into account the sampling variability in evaluating the difference in the estimates.

First, consider calculation of the standard error for each of these individual estimates. The return data for the U.S. Growth Portfolio are stored in the variable vwusx and the returns for the New Horizons Fund are stored in the variable prnhx. Recall that Sharpe is a function that calculates the Sharpe ratio of an asset, which can be used in the function boot.

The standard errors for the individual Sharpe ratios are given by

> boot(vwusx, Sharpe, 10000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original    bias    std. error
t1*    0.287 0.00786         0.141

> boot(prnhx, Sharpe, 10000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original    bias    std. error
t1*    0.367 0.00751         0.137

Thus, the standard error of the estimated Sharpe ratio for the U.S. Growth Portfolio is 0.141 and the standard error of the estimated Sharpe ratio for the New Horizons Fund is 0.137.

Now consider calculation of the standard error for the difference in two estimated Sharpe ratios. Note that we cannot use a standard error based on the individual standard errors because the two estimates are likely to be correlated. Hence, we use an approach similar to that used when calculating the standard error for the difference of means for matched-pair data.

Define a function Sharpe_diff by

> Sharpe_diff<-function(rets, ind){

+ Sharpe(rets[,1], ind)-Sharpe(rets[,2], ind)
+ }

This function takes the return data in the matrix rets, with the returns for the first asset in column 1 and the returns for the second asset in column 2, and computes the difference in the Sharpe ratios corresponding to the indices in ind.

To form a matrix with columns given by the variables vwusx and prnhx, we use the cbind function. Therefore, the standard error of the difference in Sharpe ratios is given by

> boot(cbind(vwusx, prnhx), Sharpe_diff, 10000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original    bias    std. error
t1*  -0.0795 0.00208        0.0589

Note that the estimated difference, -0.0795, agrees with the difference in the estimated Sharpe ratios calculated in Example 8.15, 0.2870 - 0.3665. The standard error of the difference is 0.0589 and, hence, the difference is not statistically significant at the 5% level; that is, a 95% confidence interval for the difference includes zero. The estimated bias is small relative to the standard error and may be ignored.

It is worth noting that the standard error of the difference of the estimates based on the standard errors of the individual estimates, along with the assumption that the estimates are uncorrelated,

((0.141)² + (0.137)²)^(1/2) = 0.197,

is much larger than the estimate given previously. This is because the method used to obtain the value 0.197 ignores the fact that the returns on the two funds are correlated.
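The effect of the correlation can be seen in a small simulation (the data here are simulated, not the vwusx and prnhx returns): resampling the same rows from both series captures the correlation, while combining individual standard errors does not.

```r
# SE of a difference in means: joint bootstrap vs. naive combination.
set.seed(1)
common <- rnorm(60, sd = 0.03)               # shared (market-like) component
x <- 0.005 + common + rnorm(60, sd = 0.01)   # two positively correlated
y <- 0.007 + common + rnorm(60, sd = 0.01)   # return series

# bootstrap the difference, resampling the same rows from both series
diffs <- replicate(2000, {
  ind <- sample(60, replace = TRUE)
  mean(x[ind]) - mean(y[ind])
})
se.diff <- sd(diffs)                          # accounts for the correlation
se.naive <- sqrt(var(x) / 60 + var(y) / 60)   # assumes independence
c(se.diff, se.naive)                          # se.diff is much smaller
```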

It is important to keep in mind that the results based on the bootstrap method are based on the random numbers generated by the boot function.

Hence, if the procedure is repeated, the results will vary. It is generally a good idea to repeat the standard error calculation in order to assess this variation; if it is large enough to affect the conclusions of the analysis, then the bootstrap sample size should be increased.

Example 8.24 Consider calculation of the standard error of the difference of the Sharpe ratios for the U.S. Growth Portfolio and the New Horizons Fund.

Recall that in Example 8.23, the standard error was found to be 0.0589.

Repeating the calculation twice yields

> boot(cbind(vwusx, prnhx), Sharpe_diff, 10000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original     bias    std. error
t1*  -0.0795 0.000631        0.0596

> boot(cbind(vwusx, prnhx), Sharpe_diff, 10000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Bootstrap Statistics :
    original     bias    std. error
t1*  -0.0795 0.000612        0.0591

Clearly, the variation in the estimated standard error is fairly small, and the conclusion that the estimated difference of the Sharpe ratios is not statistically significant is not affected. Although the estimated biases obtained indicate some variation, they clearly do not affect the conclusion that the bias is negligible.

Given these additional results, it is reasonable to include them in our calculation of the standard error. For instance, to estimate the standard error, we could average the values 0.0589, 0.0596, and 0.0591, leading to a new estimate of 0.0592.
