THE ACCOUNTING REVIEW
Vol. 84, No. 5, 2009, pp. 1521–1552
American Accounting Association
DOI: 10.2308/accr.2009.84.5.1521

Big 4 Office Size and Audit Quality

Jere R. Francis, University of Missouri–Columbia
Michael D. Yu, Washington State University

ABSTRACT: Larger offices of Big 4 auditors are predicted to have higher quality audits for SEC registrants due to greater in-house experience in administering such audits. We test this prediction by examining a sample of 6,568 U.S. firm-year observations for the period 2003–2005 and audited by 285 unique Big 4 offices. Results are consistent with larger offices providing higher quality audits. Specifically, larger offices are more likely to issue going-concern audit reports, and clients in larger offices evidence less aggressive earnings management behavior. These findings are robust to extensive controls for client risk factors and to controls for other auditor characteristics. While the evidence suggests audit quality is higher on average in larger Big 4 offices, we make no claims that audit quality is unacceptably low in smaller offices.

Keywords: audit quality; Big 4 accounting firms; earnings quality; accruals; earnings benchmarks; going-concern audit reports.

Data Availability: Data used in this study are available from public sources identified in the paper.

I. INTRODUCTION

This study extends recent research analyzing the effects of client influence and auditor industry expertise in individual practice offices of Big 4 accounting firms (Reynolds and Francis 2000; Craswell et al. 2002; Ferguson et al. 2003), and investigates a fundamental question that has not been addressed in prior studies: Is Big 4 audit quality uniform across small and large practice offices? Our prediction is that audits are of higher quality in larger Big 4 offices because auditors in these offices have more collective experience in administering the audits of public companies (SEC registrants).
Acknowledgments: We thank the editors, Steve Kachelmeier and Dan Dhaliwal, and the two anonymous referees for their many constructive suggestions. We also appreciate feedback on earlier versions of the study presented at the 2007 American Accounting Association Annual Meeting, the 2007 European Audit Research Network Symposium, and workshops at University of Auckland, Bond University, University of Colorado, Indiana University, University of Missouri–Columbia, University of Melbourne, Tilburg University, Washington State University, and Yale University, and especially the comments of Paul Brockman, Inder Khurana, Elaine Mauldin, Raynolde Pereira, Kenny Reynolds, Phil Shane, Stephen Taylor, and Marlene Willekens. This study is supported by a grant from the PwC INQuires research program of PricewaterhouseCoopers.

Editor's note: Accepted by Steven Kachelmeier, with thanks to Dan Dhaliwal for serving as editor on a previous version. Submitted: August 2007. Accepted: November 2008. Published Online: September 2009.

Thus, a large office will have greater in-house expertise in detecting material problems in the financial statements of SEC clients. By implication, auditors in smaller Big 4 offices have less experience and therefore develop less skill in detecting such problems.

To test the relation between Big 4 office size and audit quality, we examine the association of office size with going-concern audit reports and client earnings properties (abnormal accruals and earnings benchmark tests). Big 4 office size is measured by the fees received from SEC registrants, and the results are robust to alternative measures using total audit fees (audit plus nonaudit), and ranks of fees.
Importantly, the models include extensive controls to assure that office size is not capturing the effects of omitted client risk factors or auditor characteristics such as tenure and industry expertise, although we cannot entirely rule this out. The models are also estimated as fixed- or random-effect models as an additional control for omitted variables.

We find that larger offices are more likely to issue going-concern reports, and that their going-concern reports are more accurate in terms of predicting next-period client bankruptcy. Clients audited by larger offices are also less likely to have aggressively managed earnings, as evidenced by smaller abnormal accruals and a lower likelihood of meeting benchmark earnings targets (small profits and small earnings increases). Overall, these results reinforce the importance of the local office unit of analysis in audit research and show there is significant variation in audit outcomes across Big 4 offices, with the evidence consistent with the premise that larger offices provide higher quality audits. As reported in Section V, however, the results are less robust when office size is based on the number of SEC clients rather than total office fees.[1]

The study is subject to the following caveats. First, our evidence does not indicate small offices fail to meet minimum standards of audit quality; however, the findings do point to systematically higher quality by larger offices relative to smaller offices of Big 4 accounting firms. Second, the analysis is based on public company (SEC) clienteles, and hence the knowledge and expertise analyzed in the study is an office's expertise in dealing with SEC registrants. The analysis of private company clienteles is beyond the scope of this study, and cannot be undertaken due to the lack of publicly available data. A third caveat is that audits are wholly attributed to the engagement office of record based on the audit report filed with the SEC.
We recognize that multiple offices of a Big 4 firm may contribute to an audit engagement, although this is not determinable with publicly available data. However, the engagement office that contracts with the client has primary responsibility for the audit, including overseeing work performed by other offices. Thus, the engagement office's audit team makes critical judgments on audits, and of course the engagement partner issues the final audit report on engagement office letterhead. Therefore, it is reasonable to attribute the audit entirely to the engagement office for the purpose of our study, even though other offices may participate in the audit (albeit with oversight by the engagement office). In a practical sense, the extent to which small offices participate on audit engagements of large offices (and vice versa) would only neutralize office size differences and, therefore, should work against the predicted office size/audit quality relation.

The next section develops the study's hypothesis and explains why an office-level analysis is important. Section III presents the research design, sample selection, and descriptive statistics. Section IV discusses the primary empirical results, and Section V reports sensitivity tests and robustness checks. Section VI concludes the study.

[1] A concurrent study by Choi et al. (2007) uses a different design and sample, but also reports a negative association between office size and absolute abnormal accruals.

II. BACKGROUND AND HYPOTHESIS DEVELOPMENT

Wallman (1996) and Francis, Stokes, and Anderson (1999) argue that local practice offices are the primary decision-making unit within Big 4 auditing firms and, therefore, an important unit of analysis in audit research. Big 4 firms have decentralized organizations and operate through a network of semi-autonomous practice offices.
Local offices contract with clients, administer audit engagements, and issue audit reports signed on the local office letterhead. Accounting professionals are typically based in specific practice offices and audit clients in the same geographic locale; hence, their expertise and knowledge is both office- and client-specific (Francis, Stokes, and Anderson 1999; Ferguson et al. 2003). This decentralized office structure reduces information asymmetry and enables Big 4 auditors to develop better knowledge of existing and potential clients in a particular location. Clients, in turn, have greater knowledge of and confidence in the expertise of locally based personnel who actually perform audits (Carcello et al. 1992). The above argument assumes that Big 4 firms are unable to fully achieve uniform audit quality across offices, and that a certain amount of overall audit expertise is office-specific (Francis et al. 2005; Vera-Muñoz et al. 2006).

Given the above discussion, our argument is that a large office has more "in-house" experience in dealing with public companies (SEC registrants) and, hence, more collective human capital in the office. Experience is an important dimension of human capital (Becker 1993), and a larger office with more engagement hours therefore provides its auditors with greater opportunities to acquire expertise in detecting material problems in the financial statements of SEC registrants. As a consequence, auditors in larger offices are more likely to detect and report material problems in the financial statements, or require clients to correct the statements before issuance.[2]

Auditors working in a large office will have more peers with whom to consult and, hence, have a better local support network. Danos et al. (1989) report that auditors are most likely to consult their peers within the same office when problems arise, rather than engage in broader consultation with colleagues in other offices or the national office.
It follows that larger offices also have the potential to produce higher quality audits because of their greater in-house networking/consultation opportunities. We acknowledge that in the post-SOX era there may be more firm-wide consultations to facilitate better quality audits and, to the extent this is the case, it would work against the predicted office-size effect.

Based on the above discussion, we believe audit quality is not uniform across Big 4 offices, and the study's hypothesis in alternative form is:

Larger offices of Big 4 accounting firms provide higher quality audits, where higher quality audits are inferred by the auditor's likelihood of issuing a going-concern audit report (and accuracy of the report in predicting client bankruptcy), and the degree to which clients evidence earnings management behavior.

[2] A secondary argument is that larger offices also have deeper reserves of personnel (slack) to mitigate the effects of high employee turnover in the public accounting industry. Satava (2003) reports that the large national accounting firms have a turnover rate of around 25 percent, or the loss of one in four employees annually. Auditor turnover results in the loss of auditor expertise and knowledge, and especially the specific knowledge between an auditor and a client. However, because a large office has a bigger pool of employees, it is better able to replace audit team members with experienced auditors. The same logic applies to the mandatory rotation of engagement partners and concurring review partners. A larger office has a deeper reserve of partner expertise to draw on when mandatory rotation occurs and new partners must be assigned to clients. Hence, in a large office, there is more likely to be continuity in the office's expertise in administering SEC audit engagements from one period to another and from one audit team to another.
The null hypothesis is that audit quality is uniform across office size. While we do not expect this to be the case (and the evidence does not support that it is), it cannot be ruled out a priori. The Big 4 firms are organized as national partnerships with national administrative offices that set firm-wide policies and provide technical support for their city-based practice offices. Under this alternative view of the audit firm, audit expertise and knowledge can be captured by the firm as a whole and distributed uniformly across offices. This view is supported by the fact that the Big 4 firms have national training programs, standardized audit programs, and firm-wide knowledge-sharing practices supported by information technology. Auditors travel, to some extent, between offices, and may also be reassigned to other offices, both of which can spread expertise across offices. However, Vera-Muñoz et al. (2006) point out that firm-wide knowledge sharing has practical limitations, and for this reason it is an open empirical question as to what extent these firm-wide mechanisms can effectively mitigate the hypothesized office-size effect on Big 4 audit quality. Recent changes implemented by Sarbanes-Oxley, such as the annual inspections undertaken by the PCAOB, have created additional incentives for accounting firms to strengthen their internal procedures to ensure uniform audit quality across practice offices. To the extent that accounting firms have restructured their operations to improve and standardize firm-wide audit quality, this would work against finding the hypothesized office-size effect.

III. RESEARCH DESIGN

Audit quality is inferred by examining client earnings properties and implied earnings management behavior with respect to abnormal accruals and earnings benchmark targets (e.g., Becker et al. 1998; Frankel et al. 2002).
Earnings management per se does not violate generally accepted accounting principles. However, firms that manage earnings are viewed as having lower quality earnings (e.g., Frankel et al. 2002), and Levitt (1998) suggests that aggressive earnings management can result in materially misleading financial reports. We test if client earnings metrics differ across office sizes of Big 4 firms. Specifically, we expect clients in larger offices will evidence less earnings management behavior (smaller abnormal accruals and less likelihood of meeting benchmark earnings targets) after controlling for client risk factors. The reason is that auditors in larger offices are expected to have more expertise in detecting and deterring aggressive earnings management behavior (Francis, Maydew, and Sparks 1999).

In addition, we test if an auditor's propensity to issue a going-concern report (and the accuracy of going-concern reports in predicting client bankruptcy) is increasing in office size. Again, the conjecture is that auditors in larger offices have more expertise in identifying the circumstances that warrant a going-concern report. The auditor's likelihood of issuing going-concern audit reports has been used in prior research to test for differential audit quality (Reynolds and Francis 2000; Craswell et al. 2002; DeFond et al. 2002).

The specific office administering an audit engagement is identified from the letterhead of the audit report filed with the SEC, as reported in Audit Analytics. We use an office's aggregate audit fees each year to measure office size, using all observations in the Audit Analytics database with fee data. Audit fees are directly related to engagement hours, and offices with higher fees will therefore have more hours of experience in the audits of SEC registrants. The log of office fees (denoted lnOFFICE) is the functional form used in the multivariate analyses due to skewness in the distribution of office-level audit fees (see Table 1).
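Measured this way, the office-size variable is simple to construct from engagement-level fee data. The following pure-Python sketch is our illustration, not the authors' code; the office names and fee amounts are hypothetical:

```python
import math
from collections import defaultdict

# Hypothetical engagement-level records: (engagement office, fiscal year, audit fees in $).
engagements = [
    ("Chicago", 2004, 2_500_000),
    ("Chicago", 2004, 1_100_000),
    ("St. Louis", 2004, 400_000),
]

# Step 1: aggregate audit fees across all clients of each office in a fiscal year.
office_fees = defaultdict(float)
for office, year, fee in engagements:
    office_fees[(office, year)] += fee

# Step 2: lnOFFICE is the natural log of the office-year fee total,
# which dampens the skewness in the fee distribution.
ln_office = {key: math.log(total) for key, total in office_fees.items()}
```

The log transform compresses the long right tail visible in Table 1, where office fees range from under $0.1 million to over $600 million.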
Results are robust to using total fees (audit and nonaudit) to measure office size as in Craswell et al. (2002), and to using log of ranks, where the 805 offices in the sample are rank-ordered from 1 to 805 based on their audit fees.

TABLE 1
Big 4 Accounting Firm Office Size Based on Pooled 2003–2005 Data in Audit Analytics [a]

Panel A: Auditor Office Size for 805 Office Years in the Study Based on Audit Fee Revenues (in $ millions) [b]

Auditor                       # of Office Years   Mean    Median   Std. Dev.   Min.   Q1     Q3       Max.     Aggregate Fees
Deloitte & Touche LLP         186                 27.69   12.325   49.85       0.17   2.89   35.14    439.93   5,149
Ernst & Young LLP             219                 25.76   13.45    36.88       0.15   5.76   29.41    270.28   5,641
KPMG LLP                      220                 20.97   7.335    36.76       0.20   3.55   21.65    293.24   4,614
PricewaterhouseCoopers LLP    180                 42.19   14.945   79.96       0.06   5.63   45.185   623.53   7,594

Panel B: Auditor Office Size for 805 Office Years in the Study Based on Number of Clients [c]

Auditor                       # of Office Years   Mean    Median   Std. Dev.   Min.   Q1   Q3     Max.   Aggregate Clients
Deloitte & Touche LLP         186                 36.79   15       82.76       1      5    42     764    6,843
Ernst & Young LLP             219                 33.60   15       58.48       1      8    30     352    7,359
KPMG LLP                      220                 24.00   11       38.93       1      5    27     321    5,280
PricewaterhouseCoopers LLP    180                 42.47   16       78.22       1      7    31.5   503    7,645

[a] This table provides office size descriptive statistics for the 285 unique U.S. Big 4 offices in the study. The 285 offices are distributed as follows: Deloitte (65), Ernst & Young (78), KPMG (77), and PricewaterhouseCoopers (65). Each office can appear up to three times over the three-year sample period (2003–2005), and there are 805 "office years" in total. The data in this table are based on 27,127 firm-year (client) observations of Big 4 auditors and represent all Audit Analytics observations located in the U.S. with audit fee data for fiscal years 2003 through 2005, using the Compustat year convention.
[b] Panel A reports summary statistics based on office-level audit fee revenues (in $ millions), per office-year.
[c] Panel B reports summary statistics based on the number of clients audited by each office, per office-year.

Up to this point, the auditor's incentives have not been explicitly considered in the analysis. Prior research argues that auditors may acquiesce and report favorably in order to retain influential clients, particularly if a client is large relative to the size of the engagement office (Reynolds and Francis 2000). The proposition that auditors report favorably to retain important clients is known as economic bonding (DeAngelo 1981). Following Craswell et al. (2002), we measure the auditor's incentives with respect to a client's influence (INFLUENCE) on the local office as the ratio of the client's fees for all services to the sum of fees for all clients of the engagement office for a given year. While economic bonding implies the impairment of auditor independence, Reynolds and Francis (2000) report evidence of the opposite, namely, that Big 4 auditors report more conservatively for larger influential clients in engagement offices. The explanation in Reynolds and Francis (2000) is that the auditor's incentive to avoid costly litigation from misreporting by important clients is stronger than the incentive to acquiesce and report favorably. We make no directional prediction in this study, but include INFLUENCE to control for the auditor's office-level incentives with respect to influential clients. It is also important to note that INFLUENCE measures a client's size relative to an office, and that it is distinctly different from both absolute client size and absolute office size. A client of a given absolute size (e.g., fees of $1 million) could be a relatively large or small client depending on the absolute size of the engagement office.
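The INFLUENCE ratio is likewise straightforward to construct. This sketch uses invented fee figures for a single hypothetical office-year, not data from the study:

```python
# Hypothetical fees for all services (audit plus nonaudit, in $) paid by each
# client of one engagement office in one fiscal year.
office_clients = {"ClientA": 1_000_000, "ClientB": 250_000, "ClientC": 750_000}

# Denominator: the sum of fees for all clients of the engagement office.
office_total = sum(office_clients.values())

# INFLUENCE: a client's fees for all services relative to the office total.
influence = {name: fee / office_total for name, fee in office_clients.items()}
```

In this $2 million office, a client paying $1 million has INFLUENCE of 0.50; the same client in a $100 million office would have INFLUENCE of 0.01, which is the sense in which the measure is relative rather than absolute.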
Two other important auditor characteristics are controlled for in all models to assure that the results for office size are not the consequence of correlated omitted auditor variables. First, we control for auditor tenure because Johnson et al. (2002) find that short auditor tenure is associated with lower client earnings quality. Following Johnson et al. (2002), we include the variable TENURE, which is coded 1 if tenure is three years or less, and 0 otherwise, to assure that office size is not in some way confounded by systematic differences in auditor tenure across practice offices. Second, we control for the auditor's industry expertise to assure that office size is not capturing an omitted variable with respect to the auditor's industry expertise. Prior studies argue that industry expertise increases audit quality (Balsam et al. 2003), and two-digit SIC codes are used to calculate industry-expertise measures at the national level (based on all clients of auditors) and the office level (based on city-specific clienteles of auditors). As in Francis et al. (2005), national industry expertise is an indicator variable that is coded 1 if the auditor is the national audit fee leader (NATIONAL-LEADER), and an office is classified as an industry expert if it is the city-specific industry fee leader (CITY-LEADER). The results are unchanged if an auditor's actual national- and city-level market shares of fees are used in lieu of indicator variables for industry leadership.

The final two control variables used in all models are the number of client operating segments (OPSEG) and the number of geographic segments (GEOSEG) as reported in Compustat. If no segment data are reported in Compustat for a given observation, then we assign a value of 1. The intuition for these two variables is that clients with multiple operating divisions or geographical segments are more likely to require the use of additional offices to assist the lead engagement office in completing the audit.
Thus, the purpose of the segment variables is to give confidence that test results for the size of the primary engagement office are robust to the potential confounding effects of other offices that may participate in an audit engagement. We make no prediction on the sign, although it is possible that the participation of multiple offices increases audit quality since there would be more offices involved in the engagement.[3]

Accruals Quality

The first dependent variable is abnormal accruals (Jones 1991). A large abnormal or discretionary component of accruals is indirect evidence of earnings management behavior and lower earnings quality. Kothari et al. (2005) argue that the discretionary accruals model might be misspecified when applied to samples of firms with extreme performance, and suggest that controlling for current firm performance will increase the power of the Jones model. We use ordinary least squares (OLS) to estimate the following performance-adjusted Jones model for the full Compustat sample by fiscal year and two-digit SIC industry code (with a minimum of ten observations required for an industry to be included in a year), controlling for concurrent firm performance with NI:

TA = α + β1ΔREV + β2PPE + β3NI + ε   (1)

where TA is total accruals; ΔREV is revenues in year t less revenues in year t−1; PPE is gross property, plant, and equipment; and NI is operating income after depreciation.[4] All variables are deflated by lagged total assets. The absolute value of the residuals from Equation (1) is used to measure discretionary accruals, since individual firms may have incentives to manage earnings either up or down depending on particular circumstances (Warfield et al. 1995). However, it has been argued that auditors are more concerned with constraining income-increasing accruals (Becker et al. 1998).
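To make the two-step accruals procedure concrete, the sketch below fits the performance-adjusted Jones model to a tiny hypothetical industry-year panel and takes absolute residuals as abnormal accruals. The data are invented and the minimal normal-equations solver merely stands in for a statistical package; this is our illustration of the method, not the study's estimation code:

```python
def ols(X, y):
    """Solve b = (X'X)^{-1} X'y via Gaussian elimination (no libraries)."""
    n, k = len(X), len(X[0])
    # Build the normal equations X'X b = X'y.
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Forward elimination with partial pivoting.
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for q in range(c, k):
                A[r][q] -= f * A[c][q]
            b[r] -= f * b[c]
    # Back substitution.
    coef = [0.0] * k
    for c in reversed(range(k)):
        coef[c] = (b[c] - sum(A[c][q] * coef[q] for q in range(c + 1, k))) / A[c][c]
    return coef

# Hypothetical industry-year panel: rows are [1, dREV, PPE, NI], the regressors
# of Equation (1), already deflated by lagged total assets; y is total accruals TA.
X = [[1, 0.10, 0.50, 0.05],
     [1, 0.20, 0.40, 0.08],
     [1, 0.05, 0.60, 0.02],
     [1, 0.15, 0.55, 0.06],
     [1, 0.25, 0.45, 0.09]]
TA = [0.04, 0.07, 0.01, 0.05, 0.08]

coef = ols(X, TA)
# Step 2: abnormal accruals are the (absolute) residuals from the fitted model.
resid = [TA[i] - sum(c * x for c, x in zip(coef, X[i])) for i in range(len(TA))]
abnormal = [abs(r) for r in resid]
```

In the paper the model is estimated separately for each fiscal year and two-digit SIC industry over the full Compustat sample, so a production version would repeat this fit once per industry-year group.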
[3] OPSEG and GEOSEG are insignificant in most of the tests. While these variables control for the effect of multiple operating and geographic segments on the study's dependent variables, they do not directly test if the office-size effects are systematically different for firms with single segments versus multiple segments. To directly analyze this question we re-code OPSEG and GEOSEG as equal to 1 if a firm has a single operating or geographic segment, respectively; otherwise, the segment variables are re-coded to 0. We then use the re-coded segment variables in the models, along with the interaction of each segment variable with the test variable lnOFFICE. Results of this new model specification are as follows. In the accruals tests and the two benchmark earnings tests, lnOFFICE is significant at the 0.05 level or less in all tests, and the interaction terms are not significant at the 0.10 level, indicating that the results for office size are consistent for firms with single operating/geographic segments and firms with multiple segments. For the going-concern tests, the results indicate that auditors in larger offices are more likely to issue going-concern reports, and this result is even stronger for clients with a single operating or geographic segment.

Therefore, as an additional analysis, "signed" accruals are also examined by partitioning observations into those with income-increasing and income-decreasing abnormal accruals. We use the following model adapted from Reynolds and Francis (2000) to test the relation between accruals and office size:

ACCRUALS = β0 + β1 lnOFFICE + γ′X + ε   (2)

OLS is used to estimate Equation (2), and we follow Newey and West (1987) to correct for heteroscedasticity and first-order autocorrelation (serial dependence). Results are robust
to alternative estimations using firm fixed-effect models (to control for omitted variables) and linear mixed models with multilevel random effects.

The dependent variable in Equation (2) is abnormal accruals (ACCRUALS) and is the residual of Equation (1) above. The test variable is office size (lnOFFICE) and is defined as the log of total office-specific audit fees of all clients per fiscal year. Since larger values of abnormal accruals imply more client discretion and lower earnings quality, we expect the coefficient on office size will be negative if auditors in larger offices allow their clients less discretion over the use of accruals to manage earnings.

X is a vector of control variables that includes INFLUENCE, TENURE, NATIONAL-LEADER, CITY-LEADER, OPSEG, and GEOSEG, which were discussed in the previous section. The remaining control variables represent an extensive set of client variables used in prior research, plus other variables to assure the effects of office size are not the consequence of omitted client risk factors.

[4] We use operating income after depreciation (Compustat data item 178) as a performance control because it excludes nonoperating income, special items, and other items that are of a more discretionary nature. Kothari et al. (2005) use income before extraordinary items (Compustat data item 18), and our results are robust to this alternative definition of income. As a further sensitivity test we also use net income (Compustat data item 172), although this measure might be noisier since it includes both extraordinary and nonoperating items. When using net income the results are comparable for absolute and negative abnormal accruals, but positive abnormal accruals are not significant at the 0.10 level.

Becker et al.
(1998) find that larger clients are more likely to have higher earnings quality, so we expect that absolute client size (SIZE), measured as log of total assets ($ millions), will be negatively correlated with accruals. Menon and Williams (2004) find that sales growth (SALESGROWTH) is positively associated with abnormal accruals, and we include the one-year growth rate in sales as a control. Based on the analysis in Hribar and Nichols (2007), we also control for the volatility of sales growth (SALESVOLATILITY), measured as the standard deviation of sales for the most recent three fiscal years. Dechow et al. (1995) show that operating cash flows (CFO) influence the magnitude of discretionary accruals, and we expect that higher operating cash flows are associated with lower discretionary accruals. In addition, Doyle et al. (2007) and Hribar and Nichols (2007) report a positive association between cash flow volatility and accruals, so we include volatility (CFOVOLATILITY), measured as the standard deviation of cash flows for the most recent three fiscal years.

Doyle et al. (2007) find an association between internal control deficiencies (reported under Sarbanes-Oxley) and the contemporaneous quality of a firm's earnings. To control for this we use the variable WEAKNESS, which is the number of material internal control weaknesses in a fiscal year as reported in the Audit Analytics database. The variable is coded 0 if an observation has no deficiencies reported in Audit Analytics.

Three variables are included in the model to control for the effects of debt and financial distress: DEBT, LOSS, and BANKRUPTCY. DeFond and Jiambalvo (1994) argue that companies with more debt (DEBT) have greater incentives to use accruals to increase earnings due to debt covenant constraints, and predict that debt level should be positively correlated with discretionary accruals. Firms with negative earnings (LOSS) are also expected to have a negative association with accruals quality.
The intuition is that firms that report losses have lower incentives to manage discretionary accruals than do firms that report positive earnings. As in Reynolds and Francis (2000), a summary measure of financial distress is also used based on the Altman bankruptcy model (BANKRUPTCY). Lower values indicate more financial distress, so a negative association is expected with accruals.[5] Following Matsumoto (2002) and Hribar and Nichols (2007), we include two market-based variables to control for market incentives: stock return volatility (VOLATILITY) and market-to-book ratio (MB), which is a proxy for risk and growth. Inclusion of these market-based variables is motivated by the fact that capital market pressure can influence earnings management behavior. Riskier firms and growth firms may have greater incentives to manage earnings in order to meet market expectations, and we expect both variables to be positively correlated with accruals.

[5] The following equation from Altman (1983) is used to calculate this measure: 0.717 × (working capital/total assets) + 0.847 × (retained earnings/total assets) + 3.107 × (earnings before interest and taxes/total assets) + 0.42 × (book value of equity/total liabilities) + 0.998 × (sales/total assets).

Benchmark Earnings Targets

Earnings distributions have been used to test earnings quality and earnings management behavior. Prior studies conclude that firms are systematically managing earnings to meet benchmark targets because there is an abnormally high proportion of firms that just "meet or beat" benchmarks and an abnormally low proportion of firms just below benchmark targets (Burgstahler and Dichev 1997; Degeorge et al. 1999). We use a probit model to test two common benchmarks: reporting small positive profits (avoiding losses), and reporting small positive earnings increases (avoiding earnings declines).
Earnings are assumed to be of higher quality (less subject to earnings management) if a firm does not systematically meet benchmark earnings targets. The prediction is that auditors in larger offices are more likely to detect and constrain aggressive earnings management, and that clients in larger offices are therefore less likely to meet benchmark targets. A probit model is estimated for the pooled sample with clustered robust standard errors to correct for heteroscedasticity and serial dependence (Rogers 1993):

PROBIT[BENCHMARK = 1] = f(β0 + β1 lnOFFICE + γ′X + ε)   (3)

where BENCHMARK is coded as 1 if a firm reports small positive earnings (or a small earnings increase), and 0 otherwise. As a sensitivity analysis we also estimate a random-effect probit model, and the results are consistent with the model in Equation (3).[6] X is a vector of control variables that is the same as those in Equation (2) for abnormal accruals.

To test the reporting of small profits, we classify a client as reporting small positive earnings if its net income deflated by lagged total assets is between 0 and 5 percent. Frankel et al. (2002) and Carey and Simnett (2006) use a cutoff value of 2 percent, and our results are robust to this smaller cutoff level, as well as to intermediate cutoffs of 3 and 4 percent. To test small earnings increases, we classify a client as reporting a small earnings increase if the change in its net income deflated by lagged total assets is between 0 and 1.3 percent. Frankel et al. (2002), Ashbaugh et al. (2003), and Carey and Simnett (2006) use a slightly larger cutoff value of 2.0 percent, and our results are robust to this larger cutoff level, as well as to cutoffs of 1 and 1.5 percent.

Going-Concern Audit Reports

A probit model adapted from prior studies tests if the propensity to issue going-concern audit reports differs across office size (e.g., Reynolds and Francis 2000; Craswell et al. 2002; DeFond et al. 2002).
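The benchmark classifications above reduce to simple indicator functions, sketched below with hypothetical figures. The boundary convention (whether an exact zero or an exact cutoff value counts as "small") is our assumption, since the paper states only that the deflated figure lies between the endpoints:

```python
def small_profit(ni, lag_assets, cutoff=0.05):
    """BENCHMARK flag for a small positive profit: net income deflated by
    lagged total assets falls between 0 and the cutoff (5 percent here)."""
    roa = ni / lag_assets
    return 0 <= roa < cutoff  # boundary handling is an assumed convention

def small_increase(ni, lag_ni, lag_assets, cutoff=0.013):
    """BENCHMARK flag for a small earnings increase: the change in net income
    deflated by lagged total assets falls between 0 and 1.3 percent."""
    change = (ni - lag_ni) / lag_assets
    return 0 <= change < cutoff  # same assumed boundary convention

# A hypothetical client: net income of 3.0 on lagged assets of 100.0,
# up from 2.5 in the prior year, flags under both benchmark definitions.
flag = small_profit(3.0, 100.0) or small_increase(3.0, 2.5, 100.0)
```

The robustness checks in the text amount to re-running the classification with the `cutoff` parameter set to the alternative values (2, 3, and 4 percent for small profits; 1, 1.5, and 2 percent for small increases).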
If larger offices have more expertise, then they should be better able to identify going-concern problems and issue more timely going-concern reports. Hence, we predict that office size is positively associated with the probability of issuing going-concern reports. The following probit model is estimated for the pooled sample with clustered robust standard errors to correct for heteroscedasticity and serial dependence (Rogers 1993):

PROBIT[GCREPORT = 1] = f(B0 + B1 lnOFFICE + X'G + e)   (4)

where GCREPORT is a dichotomous variable that takes the value of 1 if a client receives a going-concern audit report, and 0 otherwise. The test variable is lnOFFICE, and X is a vector of control variables that includes INFLUENCE, TENURE, NATIONAL-LEADER, CITY-LEADER, OPSEG, and GEOSEG, as in the accruals and earnings benchmark tests. Predicted signs on these control variables are opposite those in Equations (2) and (3) because a larger value of the dependent variable denotes higher quality audits. We also control for client risk factors that have been shown in prior research to explain going-concern opinion reporting (Reynolds and Francis 2000; DeFond et al. 2002). The additional variables, with expected signs in parentheses, are SALESVOLATILITY (+), SIZE (-), CASH (-), PRIORGC (+), REPORTLAG (+), DEBT (+), LOSS (+), LAGLOSS (+), BANKRUPTCY (-), LAGRETURN (-), VOLATILITY (+), and MB (+). SALESVOLATILITY is the standard deviation of the last three years' sales and is expected to have a positive association with going-concern reports due to higher operating risk.

Footnote 6: The standard procedure for cross-sectional panel data is to estimate a random-effects probit model that corrects for serial correlation as well as controls for omitted firm-level variables (Wooldridge 2002).
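A probit model such as Equation (4) maps the linear index into a probability through the standard normal CDF. The sketch below illustrates that mapping only; the coefficient values are invented for illustration and are not the paper's estimates.

```python
import math


def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def gc_probability(ln_office, beta0=-2.0, beta1=0.1, controls=0.0):
    """Fitted P(GCREPORT = 1) from a probit index.

    beta0, beta1, and the combined contribution of the controls are
    placeholders; a positive beta1 encodes the paper's prediction that
    larger offices issue going-concern reports more often.
    """
    index = beta0 + beta1 * ln_office + controls
    return normal_cdf(index)
```

With a positive coefficient on lnOFFICE, the fitted going-concern probability rises monotonically in office size, which is the sign prediction being tested.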
SIZE is the log of the client's total assets and is expected to be negatively correlated with the dependent variable because larger clients have more resources to stave off bankruptcy and therefore are less likely to fail. CASH is a liquidity measure, the sum of the firm's cash and investment securities scaled by total assets. A firm with more liquid assets has the resources to deal with financial difficulties, so this variable is expected to be negatively associated with the probability of a going-concern opinion. We include a dummy variable, PRIORGC, which takes the value of 1 if a company received a going-concern opinion in the previous period, because companies are more likely to receive a going-concern report if they received a prior-year going-concern qualification (Reynolds and Francis 2000; see footnote 7). REPORTLAG is a timeliness variable measuring the number of days between the fiscal year-end and the earnings announcement date; prior research finds that going-concern opinions are associated with longer reporting delays (Raghunandan and Rama 1995; Carcello et al. 1995; DeFond et al. 2002). DEBT is total liabilities deflated by total assets, and LOSS is a dummy variable that takes the value of 1 if the company has an operating loss in the current year. High-debt firms and firms reporting losses are more likely to fail and therefore more likely to receive going-concern reports. BANKRUPTCY is the Altman Z-score (Altman 1983), which measures the probability of bankruptcy. The market measures VOLATILITY and MB are expected to be positively associated with going-concern reports because riskier growth firms are more likely to fail, while firms with higher returns in the prior year (LAGRETURN) are more likely to be performing well and less likely to fail.

Sample Selection

The sample covers the three-year period 2003 through 2005, based on Compustat year definitions.
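Two of the going-concern controls defined above are mechanical to construct: REPORTLAG is a calendar-day count from fiscal year-end to the earnings announcement, and LOSS is an indicator for a current-year operating loss. A minimal sketch, assuming REPORTLAG is a simple calendar-day difference (the paper does not spell out the date arithmetic):

```python
from datetime import date


def report_lag(fiscal_year_end, announcement_date):
    """REPORTLAG: days between fiscal year-end and the earnings
    announcement date."""
    return (announcement_date - fiscal_year_end).days


def loss_dummy(operating_income):
    """LOSS = 1 if the company reports an operating loss this year."""
    return 1 if operating_income < 0 else 0
```

A December 31, 2004 fiscal year-end with a February 28, 2005 announcement, for instance, gives a REPORTLAG of 59 days.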
The Big 4 auditors are Deloitte, Ernst & Young, KPMG, and PricewaterhouseCoopers, and office size each year is based on aggregate yearly audit fees for each office in Audit Analytics. The engagement office is determined from the audit report letterhead in SEC filings as reported in Audit Analytics, and the full population of observations with audit fee data is used to calculate the fee-based measure of office size (before merging with Compustat). Auditor industry leadership is also based on the full Audit Analytics population with fee data. An audit firm (office) is denoted the national (city-specific) industry leader if it has the largest client audit fees for a specific industry (city-specific industry) in a fiscal year (see footnote 7).

Footnote 7: An alternative design is to examine first-time going-concern reports. There are 173 going-concern reports in our sample, including 78 first-time reports. If we restrict the analysis to first-time reports, the power of the test is reduced due to the small sample size, and office size is significant at the 0.11 level (one-tailed).
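The leadership definition above, the office (or firm) with the largest aggregate client audit fees in each industry-year, can be sketched with plain dictionaries. The record layout (`year`, `sic`, `office`, `audit_fee` keys) is an assumption for illustration, not Audit Analytics' actual schema.

```python
from collections import defaultdict


def industry_leaders(engagements):
    """Return {(year, sic2): office} for the office with the largest
    aggregate client audit fees in each two-digit SIC industry-year.

    `engagements` is an iterable of dicts with keys 'year', 'sic',
    'office', and 'audit_fee' (assumed layout).
    """
    # Aggregate fees by (year, two-digit SIC, office).
    fees = defaultdict(float)
    for e in engagements:
        sic2 = str(e["sic"]).zfill(4)[:2]  # two-digit SIC classification
        fees[(e["year"], sic2, e["office"])] += e["audit_fee"]

    # Keep the office with the largest total per industry-year.
    best = {}
    for (year, sic2, office), total in fees.items():
        key = (year, sic2)
        if key not in best or total > best[key][1]:
            best[key] = (office, total)
    return {k: v[0] for k, v in best.items()}
```

City-specific leadership would follow the same pattern with a city field added to the grouping key, and national firm-level leadership with offices aggregated up to the firm.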
Industry leadership is based on two-digit SIC classification, which is also reported in Audit Analytics. After merging the Audit Analytics sample with Compustat, we exclude non-Big 4 auditors and observations in Compustat with missing financial data. In addition, the financial sector (SIC codes 60-69) and regulated [...]

[TABLE 6 (continued): probit model of going-concern reporting. Independent variables: DEBT, LOSS, LAGLOSS, BANKRUPTCY, LAGRETURN, VOLATILITY, MB, and Constant, with coefficient estimates and p-values; n = 2,022; 1,145 unique firms; Pseudo R2 = 0.389.]

REFERENCES

Burgstahler, D., and I. Dichev. 1997. Earnings management to avoid earnings decreases and losses. Journal of Accounting and Economics 24: 99-126.
Carcello, J., R. Hermanson, and N. McGrath. 1992. Audit quality attributes: The perceptions of audit partners, preparers, and financial statement users. Auditing: A Journal of Practice & Theory 11: 1-15.
———, D. Hermanson, and F. Huss. 1995. Temporal changes in bankruptcy-related reporting. Auditing: A Journal of Practice & Theory 14: 133-143.
Carey, P., and R. Simnett. 2006. Audit partner tenure and audit quality. The Accounting Review 81: 653-676.
Choi, J., F. Kim, J. Kim, and Y. Zang. 2007. Audit office size, audit quality and audit pricing. Working paper, Seoul National University, City University of Hong Kong, Hong Kong Polytechnic University, and Singapore Management University.
Craswell, A. T., D. J. Stokes, and J. Laughton. 2002. Auditor independence and fee dependence. Journal of Accounting and Economics [...]
Denis, D. J., and V. T. Mihov. 2003. The choice among bank debt, non-bank private debt, and public debt: Evidence from new corporate borrowings. Journal of Financial Economics 70: 3-28.
Doyle, J., W. Ge, and S. McVay. 2007. Accruals quality and internal control over financial reporting. The Accounting Review 82: 1141-1170.
Ferguson, A., J. Francis, and D. Stokes. 2003. The effects of firm-wide and office-level industry expertise on audit pricing. The Accounting Review 78: 429-448.
Francis, J. R., E. Maydew, and H. C. Sparks. 1999. The role of Big 6 auditors in the credible reporting of accruals. Auditing: A Journal of Practice & Theory 18: 17-34.
———, D. J. Stokes, and D. J. Anderson. 1999. City markets as a unit of analysis in audit research and the re-examination of Big 6 market shares [...]