Electronic copy available at: http://ssrn.com/abstract=2151686
Compensation Incentives of Credit Rating Agencies and Predictability of Changes in Bond
Ratings and Financial Strength Ratings*
Andreas Milidonis
April 2013
Please address correspondence to:
Andreas Milidonis, Ph.D., Department of Accounting & Finance Faculty of Economics & Management University of Cyprus P.O. Box 20537 CY-1678 Nicosia, Cyprus Email: [email protected] Tel: +357 22 89 3626 Fax: +357 22 89 5030
* I would like to thank Massimo Guidolin, Irene Karamanou, Ty Leverty, Gunter Löffler, George Nishiotis, Konstantinos Stathopoulos and participants at the Annual Meeting of the American Risk and Insurance Meeting, the European Group of Risk and Insurance Economists and the workshop for Finance, Risk and Banking organized by the University of Ulm. I would also like to thank Egan-Jones Ratings for providing access to their ratings (1999-2007) William Beaver, Catherine Shakespeare and Mark Soliman for generously sending their data (1997-2002), and Maria Efthymiou for excellent research assistance. This paper has also benefited from a research grant by the University of Cyprus. All errors are mine.
Compensation Incentives of Credit Rating Agencies and Predictability of Changes in Bond
Ratings and Financial Strength Ratings
Abstract
Over the past decade there has been mixed evidence on the lead-lag relation between issuer-paid and investor-paid credit rating agencies. We investigate the lead-lag relationship for changes in bond ratings (BRs) and financial strength ratings (FSRs) in the US insurance industry, where FSRs impose market discipline. First, we find that changes in issuer-paid BRs are led by changes in investor-paid BRs, even over a period in which issuer-paid agencies have improved their timeliness. Second, information flows in both directions between changes in issuer-paid BRs and FSRs. Third, changes in issuer-paid FSRs are predictable from changes in investor-paid BRs. Fourth, the lead effect of investor-paid downgrades is economically significant: it is associated with an unconditional, post-event, thirty-day cumulative abnormal return of -4%. This return is driven by investor-paid downgrades in BRs that predict further downgrades over the following ninety days (same-period return of -11%).
Keywords: Credit Rating Agencies; Information Dissemination; Timeliness;
Predictability; Insurance.
JEL Code: G14; G22; G24.
1. Introduction
Information intermediaries such as credit rating agencies and their incentives play a valuable role
in the financial reporting environment given financial markets’ increased complexity and
interconnectedness (Beyer et al., 2010). Their role becomes even more significant in industries that are
more opaque, such as the insurance industry (Morgan, 2002), where investors, consumers, and regulators use ratings as proxies for the financial vulnerability of rated entities (e.g., Grace et al., 2004; Weiss and Chung, 2004; Pottier, 2007; Chen et al., 2008). The insurance industry is unique in several respects and should be examined in isolation from other industries, both for the types of ratings that characterize it and for the timeliness of those ratings, for the following reasons: (a) the industry is
governed by a market discipline mechanism, which has been shown to be dependent on ratings
(Epermanis and Harrington, 2006), (b) in contrast to non-financial industries, it is rated on two major
types of risks (ability to repay corporate debt, and ability to repay promises on policyholders’ contracts),
(c) it is rated by the two types of credit rating agencies (issuer-paid and investor-paid), and (d) it has a
homogeneous regulatory structure, which is quite different from other non-financial industries.
Why are the different types of credit rating agencies important? Critics of issuer-paid rating agencies (which are compensated by the companies they rate) argue that the potential conflict of interest present in their compensation structure may distort the timely release of changes in their ratings (e.g., White, 2010). In contrast, investor-paid agencies (which collect subscription fees from investors to rate third parties) strive to inform their subscribers of potential changes in the credit risk of rated firms and hence have an incentive to supply the market with more timely ratings than their issuer-
paid competitors. Over the past decade there has been mixed evidence on the relative timeliness between
the two types of rating agencies. Specifically, Johnson (2004) and Beaver et al. (2006) show that changes
in bond ratings by investor-paid agencies lead those of issuer-paid agencies. On the other hand, Cheng
and Neamtiu (2009) show that issuer-paid agencies have improved their timeliness (relative to
themselves) for bond ratings that are close to default, while Berwart et al. (2012) show that issuer-paid
agencies have improved their bond ratings’ timeliness relative to those of investor-paid agencies.
What are the two major types of ratings published for the insurance industry? Credit rating
agencies issue bond ratings (BRs) to rate the credit risk inherent in insurance corporate bond obligations.
In addition, rating agencies issue financial strength ratings (FSRs), to rate the overall ability of insurance
firms to fulfill policyholder obligations.1 FSRs have been associated with market discipline in the insurance industry (Epermanis and Harrington, 2006) and with significant market reactions to the announcement of their changes (Halek and Eckles, 2010). Insurance firms typically solicit and pay for
FSRs from issuer-paid rating agencies. Most BRs are also produced by issuer-paid agencies, but some
BRs are published by investor-paid rating agencies. The relative timeliness of issuer- vs. investor-paid
agencies in the market for BRs has implications for FSRs as the insurance firms’ default and insolvency
risks are intimately linked (Pottier and Sommer, 1999; Grace et al., 2003).
[Figure 1]
In this paper, we examine the unexplored lead-lag relationship between BRs and FSRs for the US
insurance industry, across the two types of rating agencies. Then we examine the potential economic
value for investors who follow the announcements of changes in these ratings. Our sample comprises
issuer-paid and investor-paid rating agencies’ changes in BRs and FSRs for insurance companies from
1996-2007. The largest rating agencies by market share (Fitch, Moody’s and Standard & Poor’s) are
issuer-paid agencies that publish both BRs and FSRs. Investor-paid agencies (e.g., Egan Jones), on the
other hand, only produce BRs (Figure 1). We examine the lead-lag relation between types of ratings
within the same rating agency, and also across rating agencies using an ordered probit model that takes
into account both the level and the timing of upgrades and downgrades, following Güttler and
Wahrenburg (2007) and Alsakka and Gwilym (2010). Using event studies, we then examine if there are
market reactions in the post-announcement periods of the two types of rating agencies, especially the
leader in the lead-lag relations tested.
First, we revisit the lead-lag relationship of investor- and issuer-paid BRs, but specifically for the insurance industry, over a period with conflicting evidence on the direction of the lead-lag relationship across all industries. We find that investor-paid rating agencies lead issuer-paid rating agencies in the market for insurance BRs, even after the improvement in the timeliness of issuer-paid rating agencies around the default barrier (Cheng and Neamtiu, 2009; Berwart et al., 2012).
1 In 2008, the total book value of policyholder obligations and corporate debt obligations amounted to about 81% (total policy reserves of 60.6% and separate account liabilities of 20%) and 9.9%, respectively, of the total reported insurance liabilities for publicly traded firms (www.snl.com).
Second, we test the lead-lag relationship between changes in BRs and changes in FSRs, within
the same rating agency. Our data show that a significant percentage of changes in BRs and FSRs occur on the same day. Using only the observations that do not occur on the same day, we conduct a lead-lag analysis. We find that there is information spillover between the two types of ratings; that is, information flows in both directions between changes in BRs and FSRs. This result, along with the fact that many changes in BRs and FSRs occur on the same day, indicates that the probabilities of default on corporate debt and on policyholder obligations are connected.
Our third research question follows from the previous two results. Only issuer-paid rating agencies produce FSRs, investor-paid rating agencies lead issuer-paid rating agencies in the market for BRs, and information flows between the two types of ratings. This leads us to ask: do changes in BRs by investor-paid rating agencies lead changes in FSRs by issuer-paid rating agencies?
We find evidence that they do. This predictability is important because FSRs impose market discipline
(Epermanis and Harrington, 2006), and significant cumulative average abnormal returns (CAARs) are
associated with the changes in FSRs of issuer-paid rating agencies (Halek and Eckles, 2010).
Our fourth question is: can investors benefit from the documented predictability above? Using
short-term event study analysis, we first find that there are significant CAARs surrounding the
announcements of changes in BRs by both investor-paid and issuer-paid rating agencies, with the CAARs
associated with investor-paid agencies’ announcements being larger. Second, in line with prior literature
(Halek and Eckles, 2010), we confirm the presence of significant CAARs associated with the
announcements of changes in FSRs by issuer-paid agencies. In both cases (changes in BRs and changes in
FSRs) results are larger in magnitude for downgrades than upgrades. Third, we find that investors can
benefit from announcements of downgrades in BRs by the investor-paid rating agency, but not those
(either BRs or FSRs) of the issuer-paid rating agency. Specifically, we document a negative drift in the
thirty trading days following downgrades by the investor-paid agency, which amounts to a cumulative
abnormal return of -3.92%. Using identical methodology for the post-announcement periods of issuer-
paid rating agencies, we do not find any significant cumulative abnormal returns. Then, we isolate those
investor-paid downgrades followed by other downgrades in the subsequent ninety days, to show that the
overall cumulative abnormal return of -3.92% is driven by the leading (investor-paid) downgrades
(CAAR of -10.62%) and not by the followers (no significant CAARs). The evidence therefore suggests that the post-downgrade drift is due to the lead effect of the investor-paid agency and not to an under-reaction to bond downgrades (Dichev and Piotroski, 2001).
This is the first study to show that changes in ratings published by investor-paid rating agencies
are not only predictive in the market for BRs, but they can also predict changes in FSRs, which serve as a
mechanism for market discipline in the insurance industry. Because rating changes are followed by changes in premiums (Epermanis and Harrington, 2006), which directly reflect the demand for insurance products, markets appear to react with large CAARs surrounding the announcement of changes in BRs by investor-paid rating agencies long before issuer-paid agencies confirm their competitors’ actions. Our results show that there are inefficiencies not only within
a group of information intermediaries (i.e. rating agencies) and the way they publish new information
(timeliness), but also in the way they categorize and assess seemingly different but related risks within a
firm (i.e. using different types of ratings).
The rest of this paper is organized as follows. In Section 2 we provide institutional background on
the two types of ratings and the two types of rating agencies, and we develop our hypotheses. In Section 3
we describe the empirical methodology and in Section 4 we describe the data. We present our results and
robustness checks in Section 5, and conclude in Section 6.
2. Institutional Background, Sample Selection and Hypotheses
2.1. Financial Strength Ratings (FSRs)
Insurance firms manage a large portfolio of risks, funded mainly by premiums collected from policyholders and by debt and equity raised in capital markets. Insurer financial strength ratings (FSRs) represent rating agencies’ opinion about the probability that insurance firms will not be able to service their contractual policyholder obligations. FSRs are commonly used as a marketing tool aimed at consumers, since insurance brokers usually advise purchasing insurance only from firms rated at least three notches above the “financially secure” threshold (Bradford, 2003).
FSRs impose market discipline in the insurance industry. Epermanis and Harrington (2006) document contemporaneous and one-year-ahead declines in premiums paid by risk-sensitive consumers following downgrades in FSRs. Prior research also finds a positive relation between FSRs and both insurance prices (Doherty and Tinic, 1981; Berger et al., 1992) and revisions in liability loss reserves (Petroni et al., 2000), and a negative relation between firm risk (i.e., a lower FSR) and both insurance stock prices (Fenn and Cole, 1994) and the price of insurance (Sommer, 1996; Cummins and Danzon, 1997; Phillips et al., 1998). However, while FSRs are used as proxies for the financial solvency of insurers by
regulators (Schwartz, 1994) and researchers (Adiel, 1996; Anthony and Petroni, 1997; Cummins and
Danzon, 1997; Pottier, 1998), capital requirements for insurance firms are usually not binding (e.g., Cummins et al., 1995; Grace et al., 1998), and hence FSRs remain an important indicator of financial health for non-distressed insurance firms.
Pottier (1997) studies the determinants of FSRs, differences between rating agencies active in the
market for FSRs, as well as the decision of insurance firms to be rated, and Pottier (1998) shows that
downgrades in FSRs have some explanatory power for insolvencies in the life-insurance industry. Halek
and Eckles (2010) further show that investor reactions to announcements of changes in FSRs of publicly
traded insurance firms depend in part on the type of rating agency making the announcement, and Halek
and Eckles (2011) examine the determinants of CAARs using firm-specific characteristics of insurance
firms. To date, however, no study examines the relation between FSRs and BRs for insurance firms.
2.2. Bond Ratings (BRs)
BRs represent the opinion of rating agencies about the probability that debt-issuing firms will not
service their debt obligations (debt default risk is not incorporated in FSRs).2 The Securities and
Exchange Commission (SEC) monitors the investment decisions of institutional investors (i.e., banks,
mutual funds and insurance firms) using only BRs from Nationally Recognized Statistical Rating
Organizations (NRSROs). Because the SEC considers investments to be speculative if their respective BR
falls below the “investment grade” boundary set by rating agencies, and because institutions are required
to invest almost entirely in investment grade bonds (Kisgen and Strahan, 2010; Ellul et al., 2011), debt-
issuing insurance firms need to obtain a BR from at least one NRSRO to make their debt issues appealing
to institutional investors. The NRSRO certification was first introduced in 1975 and until 2001 it was only
granted to the largest three issuer-paid rating agencies by market share (Fitch Ratings, Moody’s Ratings
and Standard & Poor’s Ratings). These agencies are compensated based on the issuer-paid model.
According to this model, when a company wants to issue debt it will solicit a rating from one (or more) of
the NRSROs.3 Meetings between representatives of the rating agency and the debt-issuing firm before the
final publication of a rating are not uncommon, and the fee paid by the debt-issuing firm usually depends
on the firm’s size and the complexity of the issuance (Kliger and Sarig, 2000). As a result, ratings of these
rating agencies have a dual role as they are used not only for contracting (Asquith et al., 2005) and
monitoring purposes by the SEC, but also for valuation purposes by investors.
Weinstein (1977) and Pinches and Singleton (1978) were the first to study the information value
of changes in BRs and find no significant market reaction related to their announcement. The opposite
results obtain in more recent studies that consistently provide evidence of investor reactions to changes in
BRs (e.g., Holthausen and Leftwich, 1986; Hand et al., 1992; Goh and Ederington, 1993, 1999;
Ederington and Goh, 1998). In addition, researchers have studied the determinants of BRs (Kaplan and
Urwitz, 1979; Ederington, 1985; Cantor and Packer, 1997), their relation to bond yields (Kliger and Sarig,
2000; Tang, 2009; Kisgen and Strahan, 2010), their relation to accounting quality (e.g., Blume et al., 1998; Sengupta, 1998), their differences across agencies (Ederington, 1986), ratings shopping (Skreta and Veldkamp, 2009), post-announcement drift (Dichev and Piotroski, 2001) and conflicts of interest associated with ratings shopping (Bolton et al., 2012).
2 We use the term BRs to refer to the commonly used term “credit ratings”. BRs reflect the creditworthiness of the firm while FSRs reflect the insolvency probability; a firm can have low creditworthiness but still be solvent.
3 In cases in which debt-issuing firms do not solicit a rating, a rating agency might still publish a rating, albeit based entirely on publicly available information.
The timeliness of changes in BRs has received a lot of attention in the literature and has been
linked to heterogeneity in rating agencies’ compensation structure (Beaver et al., 2006; Milidonis and
Wang, 2007; Strobl and Xia, 2010; Bruno et al., 2012; Berwart et al., 2012). Popular examples include
the delayed downgrades of Enron (December 2, 2001) and Worldcom (July 21, 2002), which were
considered to have investment grade debt until a few days before their collapse. The valuation effect of
untimely changes in BRs (i.e., the losses or unrealized gains from late adjustments of trading strategies by
investors) has provided opportunities for new rating agencies to enter the market.
One such new investor-paid rating agency is Egan-Jones Ratings (EJR), established in 1995. EJR entered the market for BRs quite aggressively, criticizing both the issuer-paid model of its competitors, which can create conflicts of interest for a rating agency, and the small number of NRSROs. Unlike its
major issuer-paid competitors, EJR’s compensation structure follows the investor-paid model, which
means that EJR sells subscriptions to institutional investors and rates (third-party) companies on demand.
This compensation model has normally precluded EJR from benefiting from close relations with key
managers and chief financial officers at the rated companies, which is customary (but not compulsory)
when the ratings are requested and paid for by companies issuing bonds. After several years of arguing in
favor of opening the market for BRs (SEC, 2002), EJR was granted the NRSRO designation by the SEC
on December 21, 2007.
2.3. Sample Selection
In this study we use EJR as the representative investor-paid rating agency, for the following
reasons: (a) it bases its ratings only on publicly available information; (b) it rates several industries (financial, industrial, and service sectors, and more recently sovereign debt); (c) several academic studies have chosen EJR as the representative investor-paid CRA over the past decade (Johnson, 2004; Beaver et al., 2006; Bruno et al., 2012; Berwart et al., 2012); (d) it was among the most vocal CRAs in the SEC debates on the grounds on which the NRSRO certification is granted; (e) it is one of the oldest investor-paid CRAs, operating since 1995; (f) it has received a lot of attention in the media; and (g) it was among the new CRAs that received NRSRO status in December 2007.
Turning to issuer-paid rating agencies, we focus on the largest rating agencies by market share
that are well established in the markets for both BRs and FSRs. Ratings by A.M. Best are not used, for three reasons: (a) BRs from A.M. Best are only available since 2005; (b) A.M. Best’s ratings went through several phases during the sample period (Doherty and Phillips (2002) document changes in the properties of A.M. Best ratings over 1988-1999, when S&P entered the market for FSRs); and (c) the regulatory status of A.M. Best changed in 2001, when it became an NRSRO.
We focus on changes in BRs and FSRs by Fitch and S&P. S&P entered the market for FSRs in the 1980s to compete with A.M. Best. However, until 1994 its ratings were capped at a maximum of “BBB” regardless of a firm’s financial position or any ratings by competitors (Doherty et al., 2012). Our sample period begins in (May) 1996, when S&P was rating about 50% of the insurance industry by value of assets. The second issuer-paid rating agency is Fitch, which strengthened its presence in the insurance sector after acquiring Duff & Phelps Credit Rating Co. in 2000. Our sample for Fitch thus begins in January 2000. Moody’s does not have large coverage of the market for insurance FSRs and is therefore excluded from the sample.
2.4. Hypotheses
Given the unique regulatory structure of the US insurance industry, and the importance that insurance companies place on their creditworthiness to conduct business, it is important to identify whether one of the two types of CRAs leads in updating insurance BRs. Moreover, given that insurance companies (and banks) are more opaque than firms in other industries with respect to the ratings assigned by issuer-paid CRAs (Morgan, 2002), the timeliness of rating announcements may also be affected; hence it is important to examine the insurance industry in isolation from the remaining industries.
As already mentioned, investor-paid agencies have stronger incentives than issuer-paid agencies to update their ratings quickly, given that their compensation depends heavily on institutional investor demand. At the same time, however, starting in 2002 issuer-paid rating agencies were subject to heavy
criticism and regulatory scrutiny, such as: (a) the Sarbanes-Oxley Act in July 2002 and its reference to
CRAs, (b) The SEC Review of the role of CRAs in January 2003 (SEC, 2003), (c) the Congress Hearing
on the different types and operations of CRAs in April 2003, and (d) The “Credit Rating Duopoly Act” in
September 2006. Cheng and Neamtiu (2009) use a sample of companies close to default, before 2002 and after 2003, and find that this pressure on Fitch, Moody’s and S&P resulted in improvements in the timeliness of their BRs in predicting defaults. This finding suggests that since the relative timeliness among issuer-paid rating agencies has improved close to the default boundary, the relative timeliness between investor- and issuer-paid rating agencies may also have changed since 2002 (i.e., compared with the results of Beaver et al., 2006).
In line with this argument, Berwart et al. (2012) find evidence consistent with Cheng and
Neamtiu (2009), i.e. that issuer-paid agencies have improved their timeliness, relative to the investor-paid
agencies. Therefore, our preliminary hypothesis revisits the lead-lag relationship between issuer- and investor-paid agencies (Beaver et al., 2006), specifically for insurance BRs, over a period in which the NRSRO status of the issuer-paid and investor-paid agencies did not change but their relative timeliness appears to have changed:
H0: Changes in insurance bond ratings by the investor-paid rating agency will lead (predict) changes
in insurance bond ratings by the issuer-paid rating agencies.
Our second research question focuses on the relation between changes in BRs and changes in
FSRs within the same rated firm. To our knowledge, no prior study investigates the potential link between
these two types of ratings. We know that policyholders have priority over debtholders in the case of firm
insolvency (Pottier and Sommer, 1999; Grace et al., 2003). However, there are three possible scenarios for
the relative timing of changes in FSRs and BRs. First, a change in BRs might take place before a change
in FSR. This would happen if deterioration in servicing debt obligations justifies a change in BR but is
not severe enough to justify a change in FSR, for example, if an increase in the probability of not
servicing the firm’s debt obligations does not affect the probability of not meeting the firm’s policyholder
obligations. Second, a change in FSR and a change in BR might happen on the same day. This would
occur if either deterioration in default risk (change in BR) is severe enough to cause a change in FSR or
some other adverse condition causes a change in FSR; in either case, the BR is affected as it is
subordinate to the FSR. Because such a scenario would imply simultaneous changes in FSR and BR
(regardless of causation), the two would be empirically inseparable, and hence in our models we exclude
events taking place on the same day. Third, a change in FSR might happen before a change in BR. This
could happen because issuer-paid rating agencies are conservative with changes in BRs given that the
NRSRO certification applies to the market for BRs and not to FSRs. Therefore, given the three potential relationships between changes in FSRs and changes in BRs, we do not make a prediction about the direction of the lead-lag relation between changes in BRs and changes in FSRs within the same rated firm.
The third research question we examine builds on the above lead-lag relations. If changes in the investor-paid agency’s BRs lead changes in the issuer-paid agencies’ BRs (H0), and if changes in issuer-paid BRs lead changes in issuer-paid FSRs (or changes in FSRs do not occur before changes in BRs), then we expect that changes in the investor-paid agency’s BRs will lead not only changes in issuer-paid BRs but also changes in issuer-paid FSRs. If H0 is confirmed but the relation between changes in FSRs and changes in BRs is inconclusive, or changes in FSRs lead changes in BRs, then it remains an empirical question whether investor-paid changes in BRs lead issuer-paid changes in FSRs.
Our remaining hypotheses examine investor reactions to announcements of changes in the ratings used in the previous lead-lag relations. We start by explicitly testing for CAARs associated with changes in BRs. To our knowledge, no prior study explicitly investigates stock return reactions to changes in BRs for insurance firms, mainly because studies of the insurance industry focus on changes in FSRs. In the appendix (Figure A1) we show that the insurance market index behaves differently from market indices typically used in other studies. A more accurate method to test for such abnormal returns is therefore to use an industry benchmark (industry return) (Halek and Eckles, 2010). The economic foundation for
our next hypothesis is based on the information value contained in BRs. Given that BRs capture a different type of risk than FSRs, in a market of “semi-strong” or “weak” form efficiency we expect abnormal stock returns to be associated with changes in BRs for the sample of publicly traded insurance companies.
Rating agencies are expected to process public and/or private information better than investors and hence
to receive greater value from doing so (Ramakrishnan and Thakor, 1984, Millon and Thakor, 1985, Boot
et al., 2006). Splitting our sample of changes in BRs into upgrades and downgrades, we thus hypothesize
that:
H1a: Downgrades in insurance bond ratings (BRs) are associated with negative cumulative average
abnormal returns (CAARs).
H1b: Upgrades in insurance bond ratings (BRs) are associated with positive cumulative average
abnormal returns (CAARs).
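To fix ideas on how such CAARs are measured against an industry benchmark, the following minimal sketch (not the paper's code; returns are simulated, and the event window and sample size are hypothetical) computes industry-adjusted abnormal returns, cumulates them within each event window, and averages across events:

```python
# Illustrative sketch of industry-adjusted CAARs around rating-change
# announcements. All data are simulated; -5..+5 window is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_events, window = 40, 11  # e.g., trading days -5..+5 around each event

# Daily returns for each event firm and the insurance industry benchmark.
firm_ret = rng.normal(0.0, 0.02, size=(n_events, window))
industry_ret = rng.normal(0.0, 0.01, size=(n_events, window))

# Abnormal return = firm return minus industry (benchmark) return.
ar = firm_ret - industry_ret

# Cumulate within each event window, then average across events.
car = ar.cumsum(axis=1)   # cumulative abnormal return per event
caar = car.mean(axis=0)   # cumulative AVERAGE abnormal return

# Simple cross-sectional t-statistic for the full-window CAAR.
t_stat = car[:, -1].mean() / (car[:, -1].std(ddof=1) / np.sqrt(n_events))
print(caar[-1], t_stat)
```

Under H1a (H1b), the full-window CAAR would be significantly negative (positive) for downgrade (upgrade) events rather than indistinguishable from zero as in this random-data sketch.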
There is ample evidence in the finance and accounting literature of an asymmetry in reactions to
downgrades and upgrades. Watts (1977) argues that the potential marginal loss from correcting a positive
bias is larger than the potential marginal gain from correcting a negative bias. In the case of changes in ratings, a positive (negative) bias would be corrected by a downgrade (upgrade). Consistent with this
argument, several empirical studies find larger (absolute) abnormal returns associated with downgrades
than upgrades (Holthausen and Leftwich, 1986; Hand et al., 1992; Ederington and Goh, 1998). Since
insurers depend on their reputation and creditworthiness to sell insurance, a downgrade could send the
company into a downward spiral; the “promise to reimburse future policyholder claims” is expected to
sound less trustworthy after a downgrade, and hence the subsequent non-renewal of insurance business
may adversely affect the present value of future premiums and consequently cause a decline in stock
prices. Conversely, even though an upgrade may bring additional business to the firm, we expect that the firm would need extra effort to gain as much business from an upgrade as it would lose following a downgrade. These arguments lead to the following hypothesis:
H2: Downgrades in insurance bond ratings are associated with larger (absolute) cumulative average
abnormal returns (CAARs) than upgrades.
Following Beaver et al. (2006), we expect that the CAARs associated with announcements of downgrades in BRs by the investor-paid rating agency will be larger (in absolute value) than the respective reactions to downgrades in BRs by its issuer-paid competitors. The rationale is that the investor-paid rating agency strives to publish timelier downgrades in BRs and hence to provide greater value to its subscribers, since its compensation model is more tightly aligned with subscribers’ interests than that of issuer-paid rating agencies. Our next hypotheses are thus as follows:
H3a: Downgrades in insurance bond ratings by the investor-paid agency are associated with more
negative CAARs than those of the issuer-paid agencies.
H3b: Upgrades in insurance bond ratings by the investor-paid agency are associated with more
positive CAARs than those of the issuer-paid agencies.
2.5. Is there Economic Value in the Predictability of Changes in Ratings?
We expect that investor-paid rating agencies will lead issuer-paid agencies (Beaver et al., 2006), and
we know that issuer-paid changes in FSRs are associated with statistically significant CAARs (Halek and
Eckles, 2010). Furthermore, we expect that changes in bond ratings will be associated with CAARs (H2).
Therefore, we expect that the post-announcement period of changes in ratings by investor-paid rating
agencies, which are followed by announcements of issuer-paid agencies (both FSRs and BRs), will be
associated with significant CAARs. The rationale is that issuer-paid agencies are typically more conservative with changes in ratings (partly because of their regulatory role); hence, when they confirm a prior change by the investor-paid agency, the market perceives this announcement as informative. Reaction to the follower’s changes in ratings is also consistent with Halek and Eckles’ (2010) “reinforcement hypothesis”. More formally:
H4: The post-announcement period of the leading rating agency’s changes in insurance ratings that are subsequently followed by changes in insurance ratings by the follower rating agency is associated with cumulative average abnormal returns (CAARs).
In contrast, we do not expect CAARs in the post-announcement period of changes in ratings that are not
followed by other changes in ratings (i.e. either by issuer- or investor-paid agencies).
3. Empirical Methodology
3.1. Predictive Ordered Probit for Lead-Lag Relations
We use methodology from existing literature to test the lead-lag relationship between CRAs.
Specifically we use an ordered probit model similar to Güttler and Wahrenburg (2007) and Alsakka and
Gwilym (2010). Güttler and Wahrenburg (2007) examine the lead-lag relationship of changes in ratings
by Moody's and S&P for a sample of companies that are close to default. Since their analysis focuses on
a significantly "stressed" sample of firms that eventually default, they use a binary variable that captures
the time difference between the announcement of the dependent variable and the time of default.
Alsakka and Gwilym (2010) analyze lead-lag relationships in changes of sovereign credit
ratings, which also happen less frequently than changes in US corporate bond ratings.
To test for predictability (i.e. lead-lag effects) between rating agencies A and B, we use a
predictive ordered probit model with dependent and independent variables comprising changes in ratings
(“rating code” in Table 1) and lags of those variables, respectively. Lead-lag relations are tested in both
directions by running the following two models (Alsakka and Gwilym, 2010):
$\Delta R^{A}_{i,t} = \sum_{j=1}^{6} \beta_j \Delta R^{A}_{i,t-j} + \sum_{j=1}^{6} \gamma_j \Delta R^{B}_{i,t-j} + \varepsilon_{i,t}$ , (1)
$\Delta R^{B}_{i,t} = \sum_{j=1}^{6} \delta_j \Delta R^{B}_{i,t-j} + \sum_{j=1}^{6} \theta_j \Delta R^{A}_{i,t-j} + u_{i,t}$ , (2)
where $\Delta R^{A}_{i,t}$ ($\Delta R^{B}_{i,t}$) is the change in rating by agency A (B) for insurance firm $i$ in month $t$. The dependent
variables in equations 1 and 2 take the values "-1" and "-2" for downgrades of one and more than one
notch, respectively, and "+1" and "+2" for upgrades of one and more than one notch, respectively. The
independent variables account for changes in ratings in the previous months, with lags $j \in \{1, \dots, 6\}$. The
range of possible values of the independent variables is "-2, -1, 0, +1, +2"; they are coded in the same
way as the dependent variables, the only addition being the value of zero, which represents the case of
"no change in rating". The error terms in equations 1 and 2 are distributed according to a standard normal distribution.
In equation 1 we test for the lead effect of agency B over agency A by controlling for the predictability of
the dependent variable ($\Delta R^{A}_{i,t}$) by its own lags and then searching for additional predictability in the lags
of $\Delta R^{B}$. We run equation 2 separately to test the case of reverse predictability (A leading B). To
illustrate, B would lead A if the coefficients on the lags of $\Delta R^{B}$ in equation 1 are positive and significant
but the coefficients on the lags of $\Delta R^{A}$ in equation 2 are not.4
3.2. Event Studies for Economic Value of Announcement Predictability
We define an “event” as a day on which a change in rating is announced and use event study
analysis to test for the economic value associated with such announcements (Campbell et al., 1997). Our
testing period starts 45 trading days before the event day. The estimation period starts 300 trading days
and ends 46 trading days before the event day.5 Each firm’s industry adjusted return is estimated using:
$R_{i,t} = \alpha_i + \beta_i R_{I,t} + \varepsilon_{i,t}$ , (3)
where $R_{i,t}$ is the return of security $i$ on day $t$, $t \in [-300, -46]$, $R_{I,t}$ is the industry return on day $t$, and
$\varepsilon_{i,t}$ is a zero-mean residual. For the industry return on day $t$, we use a value-weighted "insurance market
index" comprising all actively traded firms with standard industry classification codes 6311, 6321, 6324,
6331, 6351, 6361, 6371, 6399 and 6411: we aggregate the market value of equity (including dividends)
and calculate the daily return of the index. We then use the estimated coefficients $\hat{\alpha}_i$ and $\hat{\beta}_i$ to compute
security $i$'s expected returns, $E[R_{i,t}]$, over the testing period $t \in [-45, +60]$:
$E[R_{i,t}] = \hat{\alpha}_i + \hat{\beta}_i R_{I,t}$ . (4)
Abnormal returns, $AR_{i,t}$, are estimated for each security as the difference between the observed return, $R_{i,t}$,
and the expected ex-post return over the testing period, $t \in [-45, +60]$:
$AR_{i,t} = R_{i,t} - E[R_{i,t}]$ . (5)
We average abnormal returns across all downgrades every day to obtain:
4 To address the concern that some variables may simultaneously affect changes in BRs, changes in FSRs, or both, we have tried variations of our model with additional explanatory variables such as the combined ratio, short-term debt to total debt, and net-to-gross premiums written, among others. We face three problems: (a) the reporting frequency of these variables is lower than monthly, (b) the timing of the announcement of such variables is uncertain, and (c) the sample size decreases too much for the analysis to remain meaningful. Results remain unchanged.
5 Robustness checks with different estimation periods have also been conducted, with similar results.
$AAR_t = \frac{1}{N_t} \sum_{i=1}^{N_t} AR_{i,t}$ , (6)
and then calculate cumulative average abnormal returns from day $t_1$ to day $t_2$ as follows:
$CAAR_{[t_1,t_2]} = \sum_{t=t_1}^{t_2} AAR_t = \sum_{t=t_1}^{t_2} \frac{1}{N_t} \sum_{i=1}^{N_t} AR_{i,t}$ . (7)
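The event-study pipeline of equations 3-7 can be sketched as follows. This is an illustrative sketch with hypothetical inputs, not the paper's code: a plain least-squares fit of the market model over the estimation window, abnormal returns over the testing window, and a daily average cumulated into a CAAR.

```python
from statistics import mean


def market_model(sec, idx):
    """Equation 3: OLS fit of sec_t = alpha + beta * idx_t + e_t over
    the estimation window (days -300 to -46 relative to the event)."""
    mx, my = mean(idx), mean(sec)
    beta = (sum((x - mx) * (y - my) for x, y in zip(idx, sec))
            / sum((x - mx) ** 2 for x in idx))
    return my - beta * mx, beta  # (alpha, beta)


def abnormal_returns(sec, idx, alpha, beta):
    """Equations 4-5: AR_t is the observed return minus the expected
    return alpha + beta * idx_t, over the testing window (-45 to +60)."""
    return [y - (alpha + beta * x) for x, y in zip(idx, sec)]


def caar(ar_by_event, t1, t2):
    """Equations 6-7: average ARs across events day by day, then sum
    the daily averages from day index t1 to t2 (inclusive)."""
    aar = [mean(day) for day in zip(*ar_by_event)]
    return sum(aar[t1:t2 + 1])
```

In practice the security and index return series would come from CRSP and the value-weighted insurance index described above; here they are plain lists for illustration.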
4. Data & Descriptive Statistics
4.1. Issuer-paid Rating Agencies
To conduct our analysis we collect "Long-term Issuer Credit Ratings" to use as BRs and
"Insurer Financial Strength Ratings" to use as FSRs for the issuer-paid rating agencies. For S&P, data
from January 2000 through December 2007 are available through the SNL database; we also hand-collect
changes in BRs and FSRs announced from May 1996 until 2000 from newswires (Dow Jones Global
Factiva). In the case of a group company, we identify the subsidiary with the largest “Net Total Assets”
by year and use its FSR as the group rating. We follow the same procedures to collect data for Fitch from
January 2000 until December 2007.
4.1.1. Standard & Poor’s Ratings (S&P)
[Table 1; Figure 2A; 2B; 2C]
The original sample of ratings by S&P comprises initial ratings, affirmations, upgrades and
downgrades. The sample of FSRs comprises 3,011 observations, of which 680 are changes in ratings (500
downgrades and 180 upgrades). Similarly, the sample of BRs comprises 1,696 observations, with 220
downgrades and 132 upgrades (Figure 2A).
S&P's BRs and FSRs are distributed into 22 rating categories, as shown in Table 1. FSRs
range from the highest quality of "AAA" (numerical value = 1) to the lowest quality of "R" (under
regulatory supervision; numerical value = 22), where any rating above "BB+" (numerical codes 1-10) is
considered "financially secure". BRs range from a high of "AAA" to a low of "D" (in default), where any
rating above "BB+" (numerical codes 1-10) is considered "investment grade".
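For illustration, the 22-category scale and the secure/investment-grade cutoff can be encoded as below. The paper pins down only the endpoints (AAA = 1; R or D = 22) and the BB+ boundary, so the intermediate notch ordering in this sketch is an assumption based on the standard S&P scale.

```python
# Assumed 22-notch S&P-style BR scale, best (code 1) to worst (code 22);
# the intermediate ordering is an assumption, not taken from the paper.
SP_BR_SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
               "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
               "B+", "B", "B-", "CCC+", "CCC", "CCC-", "CC", "C", "D"]
BR_CODE = {rating: i + 1 for i, rating in enumerate(SP_BR_SCALE)}


def is_investment_grade(rating: str) -> bool:
    """Ratings above BB+ (numerical codes 1-10) are investment grade;
    the same boundary marks 'financially secure' for FSRs, whose scale
    replaces 'D' with 'R' (under regulatory supervision)."""
    return BR_CODE[rating] < BR_CODE["BB+"]
```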
While most changes in BRs take place above the investment grade boundary, most changes in
FSRs are concentrated close to the threshold rating recommended by insurance brokers (Bradford, 2003).
This trend is consistent with the fact that there is usually a positive notching effect of FSRs over BRs
(Best, 2004).
4.1.2. Fitch Ratings
Turning to Fitch, the sample of FSRs comprises 1,868 observations, of which 341 are changes in
ratings (upgrades and downgrades). More than 75% of changes in FSRs by Fitch are downgrades (261 vs.
80 upgrades), with most of the action taking place above the financially secure grade. The sample of BRs
comprises 1,175 observations, with 144 upgrades and 128 downgrades. Most rating actions are
concentrated above the investment grade. Fitch’s BRs and FSRs are distributed into the same rating
categories as those of S&P except that Fitch does not use the CCC+ and CCC- categories for either type
of rating (Table 1; Figure 2B).
4.2. Investor-paid Rating Agency: Egan Jones Ratings (EJR)
Our sample begins with the first rating assignment to an insurance firm (August 1997) and ends
in December 2007. Data for the period 1997-2002 and 1999-2007 were generously provided by the
authors of Beaver et al. (2006) and Egan Jones Ratings, respectively. EJR rates publicly traded firms and
thus fewer companies than Fitch and S&P, which also rate private firms. We parse through a large sample
of companies and ratings to identify insurance firms or firms with sufficiently large insurance operations.
The original sample comprises 601 observations with 271 changes in BRs. The rating scale used by EJR
is identical to the scale used by S&P for BRs (Table 1). We observe a larger number of downgrades than
upgrades in BRs (161 vs. 110) by EJR, with more rating changes (especially upgrades) taking place above
the “investment grade” (Figure 2C).
4.3. Matched Samples
Table 2 reports descriptive statistics for our sample, which includes firms rated by both investor-
and issuer-paid rating agencies. To maximize coverage we use the intersection of EJR and S&P from
August 1997 to December 2007 for the companies that have a BR from EJR, a BR from S&P and an FSR
from S&P. The sample comprises a maximum (minimum) of 460 (258) firm-year observations. The average
(median) firm-year has total asset value of $71,053 ($22,688) million, net premiums written of $7,406
($3,008) million and net-to-gross premiums written of approximately 87% (88%). Average (median)
short-term debt is $1,169 ($34) million, or 20.02% (12.20%) of the company’s total debt.
[Table 2]
5. Results
5.1. Lead-Lag (Predictability) Relations
[Table 3]
In our lead-lag analyses, we examine six pairs of ratings: (a) EJR Changes in BRs vs. S&P
Changes in BRs; (b) EJR Changes in BRs vs. Fitch Changes in BRs; (c) S&P Changes in BRs vs. S&P
Changes in FSRs; (d) Fitch Changes in BRs vs. Fitch Changes in FSRs; (e) EJR Changes in BRs vs. S&P
Changes in FSRs; and (f) EJR Changes in BRs vs. Fitch Changes in FSRs. When matching pairs for each
of the previous six combinations, we observe that issuer-paid agencies (i.e. combinations (c) and (d))
often seem to change their BRs and FSRs on the same day. Same-day changes in BRs and FSRs are not
included in our analysis.6 For the remaining four combinations of ratings across agencies (i.e. (a), (b), (e)
and (f)), a negligible number of changes in ratings happen on the same day.
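The same-day filter can be sketched as follows; this is an illustrative sketch with hypothetical event tuples, not the paper's matching code.

```python
def drop_same_day(events_a, events_b):
    """Remove rating changes announced by both series for the same firm
    on the same day, as done for the issuer-paid BR/FSR pairs (c)-(d).
    events_*: lists of (firm, date, coded_change) tuples."""
    same_day = ({(f, d) for f, d, _ in events_a}
                & {(f, d) for f, d, _ in events_b})
    kept_a = [e for e in events_a if (e[0], e[1]) not in same_day]
    kept_b = [e for e in events_b if (e[0], e[1]) not in same_day]
    return kept_a, kept_b
```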
Table 3 shows the distribution of changes in ratings for the six matched samples of the pairs listed
above (changes happening on the same day are not included). Looking across all pairs, we find that
downgrades of more than one notch constitute a significant number of the downgrades in each sample.
We also find that the matching process significantly reduces the number of upgrades in each sample. For
each of the six pairs we run equations 1 and 2 and report results in the twelve panels of Table 4.
[Table 4, 5, 6]
Results of our preliminary hypothesis (H0) are given in Panels A-D of Table 4. Panels A and B
show the results of the lead-lag relation between changes in BRs by EJR and changes in BRs by S&P.
Positive (negative) coefficients in lagged variables make changes in the same direction as the dependent
variable more (less) likely. We find some evidence of predictability from EJR’s changes in BRs to S&P’s
6 By construction, the first lag of our analysis starts one day before and ends thirty days before the announcement.
changes in BRs. In particular, no lags of S&P's changes in BRs have positive and significant coefficients
(Panel B), while the second lag of EJR's changes in BRs is positive and significant (p < 0.07).7 The evidence is stronger when
we examine the lead-lag relation between changes in BRs by EJR and changes in BRs by Fitch (Panels C
and D). The 2nd, 5th and 6th lag of EJR’s changes in BRs have positive and significant coefficients in Panel
C, and no coefficients on Fitch’s changes in BRs are positive and significant in Panel D. We therefore
find evidence in support of our preliminary hypothesis (H0): changes in BRs by the investor-paid rating
agency lead those of the issuer-paid rating agencies.
In Table 5, we test the lead-lag relation between changes in BRs and changes in FSRs of the
issuer-paid rating agencies. In the case of S&P, we find evidence of predictability from changes in FSRs
to changes in BRs: the coefficients on the 1st, 3rd and 4th lags of changes in FSRs are positive and
significant in Panel B, while similar results do not obtain in Panel A. In the case of Fitch, in contrast, the
1st and 3rd lags of changes in BRs predict changes in FSRs but the 5th lag makes them less likely (Panel C)
and the 4th lag of changes in FSRs predicts changes in BRs (Panel D). Based on these results we conclude
that information flows in both directions between BRs and FSRs of the same issuer-paid rating agency,
even though changes in FSRs appear to lead in the case of S&P. Results are similar for a longer time
period as shown in the robustness section (section 5.4.1).
Given that investor-paid rating agencies lead issuer-paid rating agencies (H0) and information
flows both ways between changes in FSRs and changes in BRs, it is natural to ask whether changes in
BRs by the investor-paid rating agency lead changes in FSRs by the issuer-paid rating agencies. We find
evidence that EJR’s changes in BRs lead S&P’s changes in FSRs (Table 6). Specifically, the 4th lag of
EJR’s changes in BRs in Panel A is positive and significant (p-value<0.03) whereas in Panel B none of
the lags of S&P’s changes in FSRs is positive and significant. The results are similar when we test the
lead-lag relation between EJR’s changes in BRs and Fitch’s changes in FSRs. In this case, the 2nd lag of
EJR's changes in BRs is positive and significant, whereas no predictability is evident in the opposite
direction (Panel D).
7 The first lag of EJR is positive with a p-value of 0.102.
5.2. Stock Reactions to Changes in BRs and FSRs and Economic Value of Predictability
Having shown evidence of lead effects of investor-paid changes in BRs over issuer-paid
changes in BRs and changes in FSRs, we next examine how the market reacts to changes in BRs, and also
test for any economic value associated with the leading agency's announcements that could potentially
be realized by investors who trade on such announcements. To be included in the sample, firms must have
daily stock return data in the CRSP database eighteen months before and three months after the change in
rating. In Table 7 we show market reactions for several event windows of changes in BRs by EJR, S&P
and Fitch, to test hypotheses 1-3. In the appendix (Table A2) we show the same results for issuer-paid
FSRs, which are consistent with earlier findings in the literature (Halek and Eckles, 2010). To test
hypothesis 4, in Table 8 we split the post-event windows for the investor-paid (leading) agency's
downgrades into those that are, and those that are not, followed by other downgrades.
[Table 7]
In Table 7 (Panel A; first six rows) we show results for downgrades in BRs by EJR, which are
matched with S&P’s downgrades in BRs to have the largest possible intersection (n=107; results do not
change when matched with Fitch). Results reported for the issuer-paid rating agencies are matched with
EJR (for S&P: n=54; for Fitch: n=56). We find extensive evidence in support of hypothesis H1a: there are
negative CAARs associated with downgrades in bond ratings. Focusing on the windows centered on the
event day, CAARs for EJR range from -4.55% to -16.77% for the [0,0] to [-30,+30] windows, respectively.
CAARs for S&P range from -2.26% to -9.79% for event windows [0,0] to [-30,+30]; Fitch's range from
-1.73% to -11.47%, respectively. Results for all rating agencies have p-values less than 0.01.
In Panel B (first six rows), we find evidence in support of hypothesis H1b for EJR and S&P: there
are positive CAARs associated with upgrades in bond ratings. CAARs for EJR range from 1.25%
(p-value < 0.01) to 1.55% (p-value < 0.05) for the [0,0] to [-10,+10] windows, respectively. For S&P there is
no reaction on the event day, and there are marginal reactions in the windows [-1,+1], [-2,+2], [-3,+3] and
[-10,+10]: 1.23% (p-value < 0.01), 0.38% (p-value < 0.10), 0.51% (p-value < 0.10) and 0.85% (p-value < 0.10),
respectively. The market does not seem to view Fitch's upgrade announcements as informative, as there
are no significant results for Fitch.
To test for the existence of asymmetry in CAARs from upgrades and downgrades within the same
rating agency (H2), we run t-tests on the absolute values of the estimated CAARs in panels A and B in
Table 7 (first six rows). For EJR, we find differences of 3.66%, 5.26%, 5.78%, 7.05%, 10.57% and
14.93% (all p-values<0.01) for windows [0,0], [-1,+1], [-2,+2], [-3,+3], [-10,+10] and [-30,+30],
respectively (untabulated results). Similar results obtain for S&P (Fitch), with mean differences of 2.26%
(2.49%), 5.09% (4.94%), 5.60% (6.48%), 6.61% (6.58%), 9.91% (9.00%) and 13.07% (11.33%), with a
maximum p-value < 0.05 (p-value < 0.01). These results provide evidence in support of H2. Results for
upgrades are of smaller statistical and economic significance, consistent with the results of Holthausen
and Leftwich (1986), Hand et al. (1992), Ederington and Goh (1998) and Halek and Eckles (2010).
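The asymmetry test above can be sketched as a two-sample comparison of absolute CARs. This is an illustrative sketch with hypothetical numbers; the exact form of the paper's t-test is not stated, so a Welch statistic is assumed here.

```python
from math import sqrt
from statistics import mean, variance


def abs_caar_t(downgrade_cars, upgrade_cars):
    """Welch two-sample t-statistic comparing the mean absolute CAR of
    downgrades with that of upgrades (the H2 asymmetry test, in spirit).
    A large positive value means downgrades move prices more."""
    a = [abs(x) for x in downgrade_cars]
    b = [abs(x) for x in upgrade_cars]
    num = mean(a) - mean(b)
    den = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return num / den
```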
Next, we test for differences in market reactions to changes in bond ratings between investor-paid
and issuer-paid rating agencies (H3a and H3b). Results are reported in Table 7 and specifically in the
columns showing mean differences (MD) of investor-paid CAR minus issuer-paid CAR for each event
window reported. We find evidence in support of both hypotheses: market reactions are larger for the
investor-paid agency than those of S&P and Fitch.
In the case of downgrades (Panel A), we find a negative mean difference between EJR's event-day
CAARs and the respective CAARs for S&P and Fitch: S&P's market reaction is 2.29% (p-value < 0.01)
less negative than EJR's, and Fitch's is 2.82% (p-value < 0.01) less negative. In the case
of upgrades (Panel B), we find positive mean differences between the two types of rating agencies, with
EJR having 0.99% (p-value < 0.05) and 1.20% (p-value < 0.01) larger (more positive) CAARs than S&P
and Fitch, respectively. Differences also exist in broader event windows centered on the event: for S&P's
downgrades there is a difference of -6.98% (p-value < 0.10) at [-30,+30], and for Fitch -0.59%
(p-value < 0.05) at [-3,+3]. For S&P's upgrades the difference is 1.37% (p-value < 0.10) at [-2,+2], and
for Fitch's upgrades the difference peaks at 2.39% (p-value < 0.01) at [-3,+3]. These results suggest that
investor-paid agencies’ downgrades and upgrades in BRs are associated with stronger market reactions
than those of issuer-paid agencies, thus providing support for hypotheses H3a and H3b respectively.8
In un-tabulated results, for each of the three rating agencies, we also test if the actual
announcement of a change in BR is associated with larger abnormal returns (in absolute terms) than
abnormal returns of each of the three previous days as shown in Table 7. For the investor-paid agency’s
downgrades, we find that abnormal returns associated with the [0,0] window are larger (more negative)
than those of the [-3,-3] window (mean difference = -4.63%; p-value < 0.01), the [-2,-2] window (mean
difference = -3.43%; p-value < 0.01) and the [-1,-1] window (mean difference = -2.96%; p-value < 0.01).
For the issuer-paid rating agencies the only significant difference is between the [0,0] window and [-1,-1]
window for Fitch, with a mean difference of -1.36% (p-value < 0.05).9 This jump in event-day returns
suggests that EJR exerts effort (at least more effort than issuer-paid agencies) to communicate its
analysis of publicly available information to the market quickly, in which case either its downgrades
cause "breaks" in abnormal returns or its announcements coincide with a publicly available event that
causes a sharp decrease in stock returns.
5.3. Economic Value of Predictability (H4)
[Table 8]
Given the evidence of predictability from investor-paid changes in BRs to issuer-paid changes in
FSRs and changes in BRs, we then test if this predictability is linked to any abnormal market returns.
Since predictability is stronger for downgrades, and also given the asymmetry between market reactions
to upgrades and downgrades, we focus on downgrades for our analysis. In Table 7 we also show event
windows before, at and after changes in BRs by both types of agencies. In all cases, we observe that the
period before downgrades in BRs is associated with large negative CAARs (all p-values < 0.01).
Specifically, EJR's downgrades in BRs (Panel A) are preceded by CAARs of -8.31%, -6.36% and -4.31%
for the [-30,-1], [-20,-1] and [-10,-1] windows, respectively. For S&P, the respective CAARs are -8.67%,
-5.76% and -5.38%, and for Fitch they are -12.35%, -10.06% and -7.59%.
8 Similar mean differences exist in stock market reactions between investor-paid changes in BRs and issuer-paid changes in FSRs, as shown in Table A2 in the appendix.
9 Paired t-tests for upgrades show a mean difference of no less than 1.10% (p-value < 0.05) for EJR for all three combinations. For issuer-paid rating agencies, only one test is significant for S&P: the difference between the [0,0] and [-2,-2] windows, with a mean difference of 0.98% (p-value < 0.01).
Evidence of the value captured by the investor-paid agency is shown in the post-event windows
(Table 7; Panel A). There are a few facts to note here. First, downgrades by the investor-paid agency are
unconditionally followed by post-event CAARs that range from -2.17% to -3.92% for the [+1,+10] to
[+1,+30] event windows (p-value < 0.01). These CAARs show that investors can earn about 4% if they
short-sell stocks immediately after an EJR bond rating downgrade and close out their position thirty
trading days later.
Second, issuer-paid post-event CAARs for the same period are not only of weaker statistical
significance but also in the opposite direction to the investor-paid agency's CAARs. Specifically, for S&P
the market has a CAAR of 0.21% (p-value < 0.10) to 2.56% (p-value < 0.01) in the event windows [+1,+10]
and [+1,+30], respectively. For Fitch the respective CAARs are 1.31% (p-value < 0.05) to 2.61%
(p-value < 0.05). These results can be explained as a market correction of the negative (over-)reaction in the
period before and up to the downgrade by the issuer-paid agency: for S&P and Fitch, CAARs are about
-11% and -14%, respectively, for the thirty days prior to and including the event day. A similar post-event
correction obtains for changes in FSRs (Table A2).
Even though the results in Panel A of Table 7 (post-event) show that EJR's downgrades are
unconditionally followed by a post-event negative drift, these results do not yet address the last
hypothesis (H4). In Table 8 we split the EJR sample of downgrades into two portfolios: EJR's
downgrades in BRs that are followed by at least one other downgrade by S&P, Fitch or EJR in the
following ninety days, and the remaining EJR downgrades in BRs that are not followed by other
downgrades. We find that EJR's downgrades followed by downgrades (n=40) are associated with
statistically and economically significant post-event CAARs ranging from -3.7% to -10.62% for the first
ten to thirty trading days after the event, respectively. The remaining observations (n=67; EJR's
downgrades not followed by downgrades) are not associated with a significant post-event negative
drift.10 Therefore, even though
10 We have also split the sample of EJR downgrades in BRs, into those followed only by downgrades of S&P and Fitch, and those that are not. Results remain qualitatively the same.
investors can unconditionally earn 4% (Table 7; EJR’s [+1,+30]), this profit will be driven by those
downgrades that are followed by more downgrades, in the following ninety days.
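The portfolio split behind Table 8 can be sketched as follows; this is an illustrative sketch with hypothetical event dates, not the paper's code.

```python
from datetime import date, timedelta


def split_by_followers(ejr_downgrades, all_downgrades, window_days=90):
    """Split EJR downgrades into those followed by at least one other
    downgrade of the same firm within window_days, and the rest.
    Events are (firm, date) tuples; a follower must be strictly later."""
    followed, alone = [], []
    for firm, day in ejr_downgrades:
        has_follower = any(
            f == firm and day < d <= day + timedelta(days=window_days)
            for f, d in all_downgrades)
        (followed if has_follower else alone).append((firm, day))
    return followed, alone
```

Post-event CAARs would then be computed separately on the two portfolios, as in Table 8.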
Our results show that the post-event negative drift shown in Table 7 is not evidence of market
inefficiency, an anomaly, or an under-reaction to the downgrade announcement (Dichev and Piotroski,
2001), but most likely a result of the predictability of future downgrades by EJR's downgrades (Tables 4
and 6), as shown in Table 8. Another reason that might contribute to the post-event negative drift is the
limited release of information by EJR to its subscribers only, which might explain the negative
CAAR in the event window [+1,+10] for the sample of EJR's downgrades not followed by other
downgrades (Table 8). A third possible reason for the post-event negative drift is that the first
investor-paid downgrade acts as a "warning" of upcoming deterioration in creditworthiness (downgrade in BR)
and solvency (downgrade in FSR), and thus validation of such deterioration by a subsequent downgrade
causes additional negative CAARs. This rationale is reasonable for the U.S. insurance industry as it is
heavily dependent on its reputational capital to conduct business (Halek and Eckles, 2010).
Results for upgrades are weaker, consistent with prior literature and with our earlier results showing an
asymmetry between upgrades and downgrades. In the post-event period, we do not observe any
statistically significant results for any rating agency.
5.4. Robustness Tests
5.4.1. Lead-Lag Relations
Our first test examines the robustness of the relation between the changes in BRs of EJR and S&P
(Table 4; Panel A), since it has a small sample size. In untabulated analyses we use three bi-monthly lags
instead of six monthly lags for each rating agency. The first lag of EJR is positive (p-value < 0.01) in the
equation with S&P changes in BRs as the dependent variable. Since all other lags are insignificant, our
results suggest that EJR’s changes in BRs continue to lead S&P’s changes in BRs. The same also applies
for EJR and Fitch, as we conduct the same analysis for the relationship between changes in BRs of EJR
and FSRs of Fitch (Table 6; Panel C); the only positive and significant coefficient is the 2nd lag of EJR (p-value < 0.01).
[Table A1]
Our second test examines the robustness of the lead-lag relations between the issuer-paid
agencies’ changes in BRs and changes in FSRs for the period after December of 2007, when EJR became
certified as an NRSRO (Table A1). Our data for S&P and Fitch end in December 2009. In Table A1 we
show a more balanced flow of information between the two types of ratings than in the period up to
December 2007 (Table 5): changes in both BRs and FSRs seem to predict each other, for both S&P
and Fitch.
5.4.2. Event Studies
[Table A2]
Our first test aligns our results with prior literature on the existence of CAARs surrounding the
announcement of changes in FSRs by issuer-paid rating agencies (Halek and Eckles, 2010). We confirm
prior literature by documenting results for downgrades and upgrades separately in Table A2. Specifically,
for S&P (Fitch), we find negative CAARs for downgrades for several pre-event windows: from -4.03% to
-8.79% (-6.18% to -11.78%) for the windows [-10,-1] to [-30,-1], respectively; event-day CAARs are
-2.01% (-2.15%). Results for upgrades are weaker, with only one significant result for S&P of +1.33% in the
the window [-3,+3]. In addition, using t-tests based on CAARs’ absolute values, we find an asymmetric
reaction to bad news, that is, larger absolute CAARs for downgrades than upgrades (untabulated).
More importantly, we document a similar trend in CAARs in the pre- and post-event period for
downgrades in FSRs. For S&P and Fitch, the negative trend reverses in the thirty days after the event with
CAARs of +1.72% (p-value<0.01) and 7.06% (p-value<0.01), respectively. This reversal is likely due to a
correction of the market’s over-reaction for the period before and up to the announcement. For upgrades,
we observe weaker positive reactions prior to the announcement and a mild negative reaction after the
announcement, which is consistent with the results for downgrades.
6. Conclusion
The heterogeneity in the compensation structure of rating agencies creates different incentives in
the timeliness of changes in ratings. Given that investor-paid rating agencies base their ratings almost
entirely on publicly available information and are paid by (institutional) investors to rate investment
targets, they have a stronger incentive to publish changes in ratings faster than issuer-paid rating agencies,
which are paid by the companies they rate. Before December 2007, all rating agencies with an NRSRO
certification had an issuer-paid compensation model, and their delay in releasing changes in ratings came
under intense scrutiny by regulators. Following this scrutiny, issuer-paid agencies have improved their
timeliness. In this paper we revisit the lead-lag relationship between changes in bond ratings published by
investor-paid and issuer-paid agencies specifically for the US insurance industry. We focus on the
insurance industry because it is more opaque than other industries, it is rated on more than one risk
dimension (bond ratings and financial strength ratings) and insurance financial strength ratings are
associated with market discipline.
We find evidence of a lead effect of investor-paid over issuer-paid agencies in the market for
insurance bond ratings, even though the relationship for the universe of US firms seems to have turned
bidirectional. Second, we examine the relationship between financial strength ratings (FSRs) and bond
ratings (BRs) within the same rating agency for the insurance industry and find that information flows in
both directions between the two types of ratings. This internal flow of information and the lead effect of
investor-paid over issuer-paid rating agencies in changes in BRs point to a link between investor-paid
changes in BRs and issuer-paid changes in FSRs. In particular, we find that the incentive of investor-paid
rating agencies to publish changes in ratings on a timely basis has predictive effects not only in the market
for changes in BRs, but also in the market for changes in FSRs by issuer-paid rating agencies. This
predictability is important because FSRs encourage market discipline in the insurance industry and are
also deemed informative by investors, as they are associated with significant cumulative abnormal
returns.
The lead-lag relations we document have economic value for investors. In particular, we find that
downgrade announcements of BRs by the (leading) investor-paid agency are followed by CAARs of
-3.92% over the thirty days after the announcement. This negative post-announcement drift is attributable
to those downgrades that are later followed by other downgrades in BRs and FSRs (-10.62%). On the
contrary, investor-paid downgrades not followed by other downgrades are not associated with significant
CAARs over the same period. Our results have important implications in current market conditions,
where interconnectedness and risk contagion play a significant role in shaping financial markets.
References
Adiel, R., 1996. Reinsurance and the management of regulatory ratios and taxes in the property-casualty
insurance industry. Journal of Accounting & Economics 22, 207-240.
Alsakka, R., ap Gwilym, O., 2010. Leads and lags in sovereign credit ratings. Journal of Banking &
Finance 34, 2614-2626.
Anthony, J.H., Petroni, K.R., 1997. Accounting Estimation Disclosures and Firm Valuation in the
Property-Casualty Insurance Industry. Journal of Accounting, Auditing & Finance 12, 257-281.
Asquith, P., Beatty, A., Weber, J., 2005. Performance pricing in bank debt contracts. Journal of
Accounting & Economics 40, 101-128.
Beaver, W.H., Shakespeare, C., Soliman, M.T., 2006. Differential properties in the ratings of certified
versus non-certified bond-rating agencies. Journal of Accounting & Economics 42, 303-334.
Berger, L.A., Cummins, J.D., Tennyson, S., 1992. Reinsurance and the Liability Insurance Crisis. Journal
of Risk & Uncertainty 5, 253-272.
Berwart, E., Guidolin, M., Milidonis, A., 2012. An Empirical Analysis of Changes in the Relative
Timeliness of Issuer-paid vs. Investor-paid Ratings. Working Paper.
Best, A.M., 2004. A.M. Best’s Ratings & the Treatment of Debt.
Beyer, A., Cohen, D.A., Lys, T.Z., Walther, B.R., 2010. The financial reporting environment: Review of
the recent literature. Journal of Accounting & Economics 50, 296-343.
Blume, M.E., Lim, F., Mackinlay, A.C., 1998. The Declining Credit Quality of U.S. Corporate Debt:
Myth or Reality? Journal of Finance 53, 1389-1413.
Bolton, P., Freixas, X., Shapiro, J., 2012. The Credit Ratings Game. Journal of Finance 67, 85-112.
Boot, A.W.A., Milbourn, T.T., Schmeits, A., 2006. Credit Ratings as Coordination Mechanisms. Review
of Financial Studies 19, 81-118.
Bradford, M., 2003. Big changes at Kemper prompt buyer concerns. Business Insurance 37, 1.
Bruno, V., Cornaggia, J., Cornaggia, K., 2012. Does Certification Affect the Information Content of
Credit Ratings? Working Paper, available at SSRN 1962840.
Campbell, J.Y., Lo, A.W., MacKinlay, A.C., 1997. The Econometrics of Financial Markets. Princeton
University Press, Princeton, NJ.
Cantor, R., Packer, F., 1997. Differences of opinion and selection bias in the credit rating industry.
Journal of Banking & Finance 21, 1395-1417.
Chen, X., Doerpinghaus, H., Lin, B.-X., Yu, T., 2008. Catastrophic Losses and Insurer Profitability:
Evidence From 9/11. Journal of Risk & Insurance 75, 39-62.
Cheng, M., Neamtiu, M., 2009. An empirical analysis of changes in credit rating properties: Timeliness,
accuracy and volatility. Journal of Accounting & Economics 47, 108-130.
Cummins, J.D., Harrington, S.E., Klein, R., 1995. Insolvency experience, risk-based capital, and prompt
corrective action in property-liability insurance. Journal of Banking & Finance 19, 511-527.
Cummins, J.D., Danzon, P.M., 1997. Price, financial quality, and capital flows in insurance markets.
Journal of Financial Intermediation 6, 3-38.
Dichev, I.D., Piotroski, J.D., 2001. The Long-Run Stock Returns Following Bond Ratings Changes.
Journal of Finance 56, 173-203.
Doherty, N.A., Kartasheva, A.V., Phillips, R.D., 2012. Information effect of entry into credit ratings
market: The case of insurers' ratings. Journal of Financial Economics 106, 308-330.
Doherty, N.A., Phillips, R.D., 2002. Keeping up with the Joneses: Changing Rating Standards and the
Buildup of Capital by U.S. Property-Liability Insurers. Journal of Financial Services Research 21, 55.
Doherty, N.A., Tinic, S.M., 1981. Reinsurance under Conditions of Capital Market Equilibrium: A Note.
Journal of Finance 36, 949-953.
Ederington, L.H., 1985. Classification Models and Bond Ratings. Financial Review 20, 237-262.
Ederington, L.H., 1986. Why Split Ratings Occur. Financial Management 15, 37-47.
Ederington, L.H., Goh, J.C., 1998. Bond Rating Agencies and Stock Analysts: Who Knows What When?
Journal of Financial & Quantitative Analysis 33, 569-585.
Ellul, A., Jotikasthira, C., Lundblad, C.T., 2011. Regulatory pressure and fire sales in the corporate bond
market. Journal of Financial Economics 101, 596-620.
Epermanis, K., Harrington, S.E., 2006. Market Discipline in Property/Casualty Insurance: Evidence from
Premium Growth Surrounding Changes in Financial Strength Ratings. Journal of Money, Credit &
Banking 38, 1515-1544.
Fenn, G.W., Cole, R.A., 1994. Announcements of asset-quality problems and contagion effects in the life
insurance industry. Journal of Financial Economics 35, 181-198.
Goh, J.C., Ederington, L.H., 1993. Is a Bond Rating Downgrade Bad News, Good News, or No News for
Stockholders? Journal of Finance 48, 2001-2008.
Goh, J.C., Ederington, L.H., 1999. Cross-Sectional Variation in the Stock Market Reaction to Bond
Rating Changes. Quarterly Review of Economics & Finance 39, 101.
Grace, M.F., Klein, R., Phillips, R.D., 2003. Insurance Company Failures: Why Do they Cost so Much?
Georgia State University Center for Risk Management and Insurance Working Paper.
Grace, M.F., Harrington, S.E., Klein, R., 1998. Risk-Based Capital and Solvency Screening in Property-
Liability Insurance: Hypotheses and Empirical Tests. Journal of Risk & Insurance 65, 213-243.
Grace, M.F., Klein, R.W., Kleindorfer, P.R., 2004. Homeowners Insurance With Bundled Catastrophe
Coverage. Journal of Risk & Insurance 71, 351-379.
Güttler, A., Wahrenburg, M., 2007. The adjustment of credit ratings in advance of defaults. Journal of
Banking & Finance 31, 751-767.
Halek, M., Eckles, D.L., 2011. Determinants of Abnormal Reactions to Insurer Stock Returns: Evidence
of Asymmetric Responses. Working Paper.
Halek, M., Eckles, D.L., 2010. Effects of Analysts' Ratings on Insurer Stock Returns: Evidence of
Asymmetric Responses. Journal of Risk & Insurance 77, 801-827.
Hand, J.R.M., Holthausen, R.W., Leftwich, R.W., 1992. The Effect of Bond Rating Agency
Announcements on Bond and Stock Prices. Journal of Finance 47, 733-752.
Holthausen, R.W., Leftwich, R.W., 1986. The effect of bond rating changes on common stock prices.
Journal of Financial Economics 17, 57-89.
Johnson, R., 2004. Rating Agency Actions Around the Investment-Grade Boundary. Journal of Fixed
Income 13, 25-37.
Kaplan, R.S., Urwitz, G., 1979. Statistical Models of Bond Ratings: A Methodological Inquiry. Journal of
Business 52, 231-261.
Kisgen, D.J., Strahan, P.E., 2010. Do Regulations Based on Credit Ratings Affect a Firm's Cost of
Capital? Review of Financial Studies 23, 4324-4347.
Kliger, D., Sarig, O., 2000. The Information Value of Bond Ratings. Journal of Finance 55, 2879-2902.
Milidonis, A., Wang, S., 2007. Estimation of distress costs associated with downgrades using regime-
switching models. North American Actuarial Journal 11, 42-60.
Millon, M.H., Thakor, A.V., 1985. Moral Hazard and Information Sharing: A Model of Financial
Information Gathering Agencies. Journal of Finance 40, 1403-1422.
Morgan, D.P., 2002. Rating banks: Risk and uncertainty in an opaque industry. American Economic
Review 92, 874-888.
Patell, J.M., 1976. Corporate Forecasts of Earnings per Share and Stock Price Behavior: Empirical Tests.
Journal of Accounting Research 14, 246-276.
Petroni, K.R., Ryan, S.G., Wahlen, J.M., 2000. Discretionary and non-discretionary revisions of loss
reserves by property-casualty insurers: Differential implications for future profitability, risk and market
value. Review of Accounting Studies 5, 95-125.
Phillips, R.D., Cummins, J.D., Allen, F., 1998. Financial Pricing of Insurance in the Multiple-Line
Insurance Company. Journal of Risk & Insurance 65, 597-636.
Pinches, G.E., Singleton, J.C., 1978. The adjustments of stock prices to bond rating changes. Journal of
Finance 33, 29-44.
Pottier, S.W., 1997. Life Insurer Risk Characteristics and the Rating Process. Journal of Insurance Issues
20, 111-130.
Pottier, S.W., 1998. Life Insurer Financial Stress, Best's Ratings and Financial Ratios. Journal of Risk &
Insurance 65, 275-288.
Pottier, S.W., 2007. The Determinants of Private Debt Holdings: Evidence From the Life Insurance
Industry. Journal of Risk & Insurance 74, 591-612.
Pottier, S.W., Sommer, D.W., 1999. Property-liability insurer financial strength ratings: differences across
rating agencies. Journal of Risk & Insurance 66, 621-642.
Ramakrishnan, R.T.S., Thakor, A.V., 1984. Information Reliability and a Theory of Financial
Intermediation. Review of Economic Studies 51, 415.
Schwartz, R., 1994. Good, Better, Best, Reactions.
SEC, 2002. Hearings on Credit Rating Agencies. U.S. Securities and Exchange Commission.
SEC, 2003. Report on the Role and Function of Credit Rating Agencies in the Operation of the Securities
Markets As Required by Section 702(b) of the Sarbanes-Oxley Act of 2002. U.S. Securities and Exchange
Commission.
Sengupta, P., 1998. Corporate disclosure quality and the cost of debt. Accounting Review 73, 459.
Skreta, V., Veldkamp, L., 2009. Ratings shopping and asset complexity: A theory of ratings inflation.
Journal of Monetary Economics 56, 678-695.
Sommer, D.W., 1996. The Impact of Firm Risk on Property-Liability Insurance Prices. Journal of Risk &
Insurance 63, 501-514.
Strobl, G., Xia, H., 2010. The issuer-pays rating model and ratings inflation: Evidence from corporate
credit ratings. Working Paper.
Tang, T.T., 2009. Information asymmetry and firms’ credit market access: Evidence from Moody's credit
rating format refinement. Journal of Financial Economics 93, 325-351.
Watts, R.L., 1977. Corporate financial statements, a product of the market and political process.
Australian Journal of Management 2, 53.
Weinstein, M.I., 1977. The effect of a rating change announcement on bond price. Journal of Financial
Economics 5, 329-350.
Weiss, M.A., Chung, J.H., 2004. U.S. Reinsurance Prices, Financial Quality, and Global Capacity. Journal
of Risk & Insurance 71, 437-467.
White, L.J., 2010. Markets: The credit rating agencies. Journal of Economic Perspectives 24, 211-226.
Tables and Figures
Figure 1: Market Structure of Bond and Financial Strength Ratings
Types of Ratings published by types of Credit Rating Agencies

                                           Bond Ratings (BRs)    Financial Strength Ratings (FSRs)
Investor-paid: Egan Jones Ratings          Yes                   No
Issuer-paid: Fitch; Standard & Poor's      Yes                   Yes

Investor-paid rating agencies are paid by institutional investors to rate third parties. Issuer-paid rating
agencies are mostly paid by the companies they rate, even though they also receive subscription-based
compensation from (institutional) investors. Egan Jones Ratings publishes BRs. Fitch and Standard &
Poor's publish both BRs and FSRs.
Figure 2: Frequency of Changes in Bond Ratings and Changes in Financial Strength Ratings.
Panel A: S&P: Changes in BRs (Left) and Changes in FSRs (Right) from 05/1996 to 12/2007.
Panel B: Fitch: Changes in BRs (Left) and Changes in FSRs (Right) from 01/2000 to 12/2007.
Panel C: Egan Jones Ratings: Changes in BRs from 08/1997 to 12/2007.
Y-axis shows the frequency of changes in ratings. X-axis shows the rating after the change took place. The mapping between the values on the x-axis and the letter scale of each rating agency is explained in Table 1.
[Histograms omitted: each panel plots the frequency of upgrades and downgrades against the post-change rating (x-axis 1-22 for the S&P and EJR panels; 1-20 for the Fitch panels).]
Egan Jones Ratings do not publish FSRs.
Table 1: Distribution of Bond and Financial Strength Ratings.
                 Bond Ratings (BR)                 Financial Strength Ratings (FSR)
Rating           S&P      Fitch     EJR            S&P      Fitch
Letter   Code    UP  DN   UP  DN    UP  DN         UP  DN   UP  DN
AAA       1       5   0    1   0     0   0         17   0    4   0
AA+       2       2   6    1   2     1   0          9  21   13  16
AA        3      18  18    7   3     5   2         33  44   11  37
AA-       4      23  22   13   6     5   7         44  68    4  35
A+        5      17  28   23   8    13   7         34  77   13  23
A         6      11  30   15  15    23  13         14  72    5  25
A-        7       9  21   17  18    27  21         11  59    7  26
BBB+      8      13  16   24   9    15  20          7  38    7  13
BBB       9       9  11   16  14     8  24          2  26    7  18
BBB-     10       5  15    6   9     8  21          0  20    5   7
BB+      11       7  10    9   6     4  15          2  24    1  10
BB       12       3   9    4   6     0  11          3  13    1  13
BB-      13       3   4    4   3     1   7          2   9    0  15
B+       14       1   3    3   3     0   1          0   9    0   2
B        15       3   6    1   2     0   4          1   0    1   5
B-       16       1   3    0   4     0   2          1   3    1   4
CCC+     17       1   4    -   -     0   0          0   0    -   -
CCC      18       1   3    0   5     0   2          0   8    0   3
CCC-     19       0   3    -   -     0   1          0   1    -   -
CC       20       0   3    0   4     0   2          0   1    0   2
C        21       0   0    0   6     0   0          0   0    0   0
D / R    22       0   5    0   5     0   1          0   7    0   7
Total           132 220  144 128   110 161        180 500   80 261
This table presents descriptive statistics of Bond Ratings (BRs) and Financial Strength Ratings (FSRs) of
the two issuer-paid rating agencies (S&P and Fitch) and of BRs of the investor-paid rating agency (Egan
Jones Ratings; EJR). UP (DN) denotes upgrades (downgrades). Ratings above “BB+” (dotted line) are
considered “investment grade” for BRs and “financially secure” for FSRs. A “D” (“R”) rating
corresponds to the lowest category of BRs (FSRs). Fitch does not use the CCC+ and CCC- categories in
its BRs and FSRs. In Fitch’s BRs, the categories RD (restricted default) and D (default) are merged. EJR
and S&P (BRs and FSRs) use the same scale. The samples for S&P, Fitch and EJR begin in 05/96, 01/00
and 08/97, respectively, and all end in 12/07. Sources for definitions (as at May 3, 2010): (1)
S&P Global Credit Portal, Ratings’ Definitions; (2) EJR’s website; (3) Fitch Ratings’ website.
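The 22-notch mapping in Table 1 can be made concrete with a short sketch. This is an illustrative implementation of the letter-to-code scale described in the table notes (AAA = 1 through D/R = 22, with BBB- and better counting as investment grade); the function names are ours, not the paper's.

```python
# Illustrative mapping of the 22-notch letter scale of Table 1 to numeric
# codes, so that rating changes can be measured in notches.
LETTER_SCALE = [
    "AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
    "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
    "B+", "B", "B-", "CCC+", "CCC", "CCC-", "CC", "C", "D/R",
]
CODE = {letter: i + 1 for i, letter in enumerate(LETTER_SCALE)}

def notch_change(old: str, new: str) -> int:
    """Positive = upgrade, negative = downgrade, measured in notches."""
    return CODE[old] - CODE[new]  # a lower code is a better rating

def is_investment_grade(letter: str) -> bool:
    """Per Table 1's note: BBB- (code 10) and better are investment grade."""
    return CODE[letter] <= 10

print(notch_change("BBB", "BBB+"))   # 1  (one-notch upgrade)
print(notch_change("A-", "BBB"))     # -2 (two-notch downgrade)
```

A change from BBB (code 9) to BBB+ (code 8) is then the "+1" category used in the ordered probit tables below, while any move of two or more notches falls into the "+2"/"-2" categories.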
Table 2: Descriptive Statistics

Label                                              N      Mean      Std Dev    Median    Lower Quartile    Upper Quartile
Total Assets ($millions) 460 71,053 127,560 22,688 7,340 79,046
Total Policy Income ($millions) 460 8,130 11,224 3,589 1,033 11,825
Net Investment Income ($millions) 460 2,098 3,557 747 258 2,268
Total Revenue ($millions) 460 11,127 15,267 5,318 1,680 14,175
Total Policy Expense ($millions) 460 7,147 9,989 2,952 895 9,275
Total Expense ($millions) 460 9,982 13,529 4,767 1,399 12,519
Net Income ($millions) 460 818 1,541 437 150 954
Net Premiums Written ($millions) 299 7,406 11,588 3,008 846 9,285
Gross Life Insurance in Force ($millions) 393 353,961 687,280 56,680 - 459,995
Long-term Debt - Current ($millions) 367 136 557 - - 91
Short-term Debt ($millions) 424 1,169 4,718 34 - 294
Short-term & Current Long-term Debt ($millions) 418 989 3,949 102 0 382
GAAP Loss Ratio (%) 280 64.85 25.28 69.63 58.56 77.05
GAAP Combined Ratio (%) 265 90.12 26.73 95.40 85.70 102.35
Net Writ Premiums/ Gross Writ Premiums (%) 258 86.53 11.16 88.30 81.45 94.07
Net Losses/ Net Premiums Earned (%) 438 75.84 29.88 74.99 64.91 86.80
Short-term Debt/ Debt (%) 411 20.02 24.58 12.20 0.14 29.18
Long-term Debt/ Debt (%) 412 79.96 24.55 87.74 70.85 99.86
Subordinated Debt/ Debt (%) 446 6.33 13.82 - - 6.25
Commercial Paper/ Debt (%) 409 6.38 13.31 - - 7.55
Descriptive statistics for the sample of firm-years rated by both EJR (BRs) and S&P (BRs and FSRs) from August 1997 until December 2007. N
is the number of firm-year observations. GAAP stands for generally accepted accounting principles. In our analysis we also use Fitch’s ratings,
but since Fitch’s sample period is shorter than that of S&P, we choose to show the larger sample. Source: SNL Financial (2011):
www.snl.com.
Table 3: Distribution of Changes in Ratings

               1-notch    ≥2-notch    Total       1-notch      ≥2-notch     Total         All
               upgrade    upgrade     upgrades    downgrade    downgrade    downgrades    changes
Standard & Poor's (BR) and Egan Jones (BR): 60 companies
S&P_BR          24         6           30          39           12           51            81
EJR_BR          62         6           68          90           19          109           177
Fitch (BR) and Egan Jones (BR): 58 companies
Fitch_BR        54         5           59          45           12           57           116
EJR_BR          56         5           61          79           19           98           159
Standard & Poor's (BR and FSR): 269 companies
S&P_BR          75        13           88          95           45          140           228
S&P_FSR        112        46          158         240           96          336           494
Fitch (BR and FSR): 91 companies
Fitch_BR        72        10           82          41           16           57           139
Fitch_FSR       30        14           44          64           28           92           136
Standard & Poor's (FSR) and Egan Jones (BR): 47 companies
S&P_FSR         25        14           39          59           26           85           124
EJR_BR          52         8           60          74           18           92           152
Fitch (FSR) and Egan Jones (BR): 46 companies
Fitch_FSR       12        11           23          39           15           54            77
EJR_BR          45         7           52          59           17           76           128
The distribution of rating changes in Bond Ratings (BRs) and Financial Strength Ratings (FSRs) by
Standard & Poor’s Ratings (S&P), Fitch’s Ratings and Egan Jones Ratings (EJR). The matched sample of
S&P and Fitch covers the period January 2000 to December 2007. The matched sample of S&P and EJR
covers the period August 1997 to December 2007. The matched sample of Fitch and EJR covers the
period January 2000 to December 2007. The matched sample for BRs and FSRs of S&P (Fitch) covers
the period May 1996 (January 2000) to December 2007. Changes in ratings taking place on the same day
for each of the six combinations of matched ratings are not included in the numbers above.
Table 4: Lead-Lag Relations Between Issuer-paid and Investor-paid Bond Ratings.
Panel A: SPR BR follower, EJR BR leader
Independent Var.    Coeff.   Z-val.   SS
EJR ΔBR t-1          0.30     1.64
EJR ΔBR t-2          0.44     1.81    *
EJR ΔBR t-3          0.30     0.90
EJR ΔBR t-4          0.19     0.64
EJR ΔBR t-5         -0.11    -0.41
EJR ΔBR t-6          0.16     0.37
SPR ΔBR t-1          0.23     0.32
SPR ΔBR t-2         -0.61    -1.35
SPR ΔBR t-3         -0.22    -0.45
SPR ΔBR t-4          1.11     2.03    **
SPR ΔBR t-5          0.69     1.44
SPR ΔBR t-6          (empty set)
Pseudo R2: 4.00%; No. of obs.: 81

Panel B: EJR BR follower, SPR BR leader
EJR ΔBR t-1          0.03     0.12
EJR ΔBR t-2          0.39     2.56    ***
EJR ΔBR t-3          0.06     0.13
EJR ΔBR t-4          0.25     0.70
EJR ΔBR t-5          0.38     1.46
EJR ΔBR t-6          0.28     1.93    *
SPR ΔBR t-1          0.14     0.39
SPR ΔBR t-2         -0.26    -0.67
SPR ΔBR t-3         -0.88    -0.82
SPR ΔBR t-4          0.27     0.74
SPR ΔBR t-5         -0.08    -0.27
SPR ΔBR t-6         -0.23    -0.82
Pseudo R2: 3.44%; No. of obs.: 177

Panel C: FR BR follower, EJR BR leader
EJR ΔBR t-1          0.52     1.33
EJR ΔBR t-2          0.44     2.05    **
EJR ΔBR t-3          0.57     1.12
EJR ΔBR t-4         -0.17    -0.36
EJR ΔBR t-5          0.47     2.99    ***
EJR ΔBR t-6          0.82     2.21    **
FR ΔBR t-1           1.79     3.37    ***
FR ΔBR t-2          -0.09    -0.18
FR ΔBR t-3          -0.12    -0.42
FR ΔBR t-4           0.23     0.14
FR ΔBR t-5           0.74     1.91    *
FR ΔBR t-6           0.25     0.57
Pseudo R2: 11.89%; No. of obs.: 116

Panel D: EJR BR follower, FR BR leader
EJR ΔBR t-1          0.11     0.60
EJR ΔBR t-2          0.46     2.57    ***
EJR ΔBR t-3          0.08     0.16
EJR ΔBR t-4          0.61     2.06    **
EJR ΔBR t-5          0.43     1.53
EJR ΔBR t-6          0.32     1.58
FR ΔBR t-1           0.17     1.23
FR ΔBR t-2           0.39     1.37
FR ΔBR t-3          -0.47    -1.24
FR ΔBR t-4           0.23     0.26
FR ΔBR t-5          -2.57    -2.48    **
FR ΔBR t-6           0.70     0.71
Pseudo R2: 6.58%; No. of obs.: 159
Predictive ordered probit models are used to test lead-lag relations. ΔBR is a change in BR. Dependent
and independent variables take one of the following values: -1 and -2 for downgrades of one notch and of
more than one notch, respectively; +1 and +2 for upgrades of one notch and of more than one notch,
respectively. The independent variables are monthly lags up to 6 months (t-6). The remaining acronyms
are defined in Table 3. SPR ΔBR (t-6) is dropped in Panel A because it is an empty set. “Coeff.”, “Z-val.”
and “SS” stand for coefficient, z-value and statistical significance, respectively. Huber-White robust
standard errors are used, with significance at 1% (***), 5% (**) and 10% (*).
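The mechanics of the ordered probit behind these tables can be sketched without any estimation machinery: given a linear index from the lagged rating changes and a set of ordered cutpoints, the model assigns probabilities to the four outcome categories (-2 < -1 < +1 < +2). The cutpoints and coefficient below are illustrative, not estimates from the paper.

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb: float, cuts: list) -> list:
    """Category probabilities for an ordered probit with len(cuts)+1 ordered
    outcomes (here: -2 < -1 < +1 < +2), given the linear index xb."""
    bounds = [norm_cdf(c - xb) for c in cuts]
    probs = [bounds[0]]
    probs += [bounds[k] - bounds[k - 1] for k in range(1, len(cuts))]
    probs.append(1.0 - bounds[-1])
    return probs

# Illustrative only: a lagged EJR one-notch downgrade (coded -1) with an
# assumed coefficient of 0.4 lowers the index and shifts probability mass
# toward a follower downgrade.
xb = 0.4 * (-1)                       # single regressor: EJR ΔBR at t-1
p = ordered_probit_probs(xb, [-1.0, 0.0, 1.0])
print([round(q, 3) for q in p])       # probabilities over (-2, -1, +1, +2)
```

Maximum likelihood then chooses the coefficients and cutpoints that make the observed follower changes most probable; the pseudo R2 reported in the tables compares that likelihood with an intercept-only model.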
Table 5: Lead-Lag Relations Between Issuer-paid Bond Ratings and Financial Strength Ratings.
Panel A: SPR FSR follower, SPR BR leader
Independent Var.    Coeff.   Z-val.   SS
SPR ΔBR t-1          0.02     0.07
SPR ΔBR t-2          0.46     1.09
SPR ΔBR t-3          0.29     0.95
SPR ΔBR t-4         -0.58    -0.84
SPR ΔBR t-5         -0.18    -0.52
SPR ΔBR t-6         -0.68    -0.74
SPR ΔFSR t-1        -0.09    -0.47
SPR ΔFSR t-2        -0.09    -0.31
SPR ΔFSR t-3         0.05     0.28
SPR ΔFSR t-4         0.08     0.32
SPR ΔFSR t-5        -0.30    -1.09
SPR ΔFSR t-6        -0.27    -1.49
Pseudo R2: 1.10%; No. of obs.: 494

Panel B: SPR BR follower, SPR FSR leader
SPR ΔBR t-1         -0.28    -0.68
SPR ΔBR t-2         -0.29    -0.58
SPR ΔBR t-3         -0.06    -0.14
SPR ΔBR t-4         -1.62    -2.32    **
SPR ΔBR t-5          0.02     0.05
SPR ΔBR t-6         -2.02    -1.35
SPR ΔFSR t-1         0.72     1.88    *
SPR ΔFSR t-2         0.35     0.92
SPR ΔFSR t-3         0.86     2.07    **
SPR ΔFSR t-4         0.33     4.04    ***
SPR ΔFSR t-5        -0.22    -0.80
SPR ΔFSR t-6         0.60     0.89
Pseudo R2: 2.72%; No. of obs.: 228

Panel C: FR FSR follower, FR BR leader
FR ΔBR t-1           0.83     1.94    *
FR ΔBR t-2          -0.05    -0.09
FR ΔBR t-3           1.04     3.09    ***
FR ΔBR t-4           0.67     1.21
FR ΔBR t-5          -1.75    -3.21    ***
FR ΔBR t-6           0.51     1.04
FR ΔFSR t-1          0.16     0.27
FR ΔFSR t-2         -0.06    -0.15
FR ΔFSR t-3         -0.78    -2.50    **
FR ΔFSR t-4         -0.41    -1.11
FR ΔFSR t-5          1.48     3.06    ***
FR ΔFSR t-6          0.31     0.77
Pseudo R2: 8.84%; No. of obs.: 136

Panel D: FR BR follower, FR FSR leader
FR ΔBR t-1           0.26     0.67
FR ΔBR t-2           0.50     1.33
FR ΔBR t-3           0.96     2.91    ***
FR ΔBR t-4          -1.46    -1.65    *
FR ΔBR t-5           0.56     0.95
FR ΔBR t-6          -0.06    -0.10
FR ΔFSR t-1          0.82     1.62
FR ΔFSR t-2          0.25     0.57
FR ΔFSR t-3         -0.08    -0.27
FR ΔFSR t-4          1.73     2.58    ***
FR ΔFSR t-5          0.43     1.41
FR ΔFSR t-6          0.02     0.06
Pseudo R2: 7.82%; No. of obs.: 139
Predictive ordered probit models are used to test lead-lag relations. ΔBR (ΔFSR) is a change in BR
(FSR). Dependent and independent variables take one of the following values: -1 and -2 for downgrades
of one notch and of more than one notch, respectively; +1 and +2 for upgrades of one notch and of more
than one notch, respectively. The independent variables are monthly lags up to 6 months (t-6). The
remaining acronyms are defined in Table 3. “Coeff.”, “Z-val.” and “SS” stand for coefficient, z-value and
statistical significance, respectively. Huber-White robust standard errors are used, with significance at
1% (***), 5% (**) and 10% (*).
Table 6: Lead-Lag Relations Between Issuer-paid Financial Strength Ratings and Investor-paid Bond Ratings.
Panel A: SPR FSR follower, EJR BR leader
Independent Var.    Coeff.   Z-val.   SS
EJR ΔBR t-1         -0.14    -1.08
EJR ΔBR t-2          0.36     1.57
EJR ΔBR t-3          0.10     0.71
EJR ΔBR t-4          0.33     2.12    **
EJR ΔBR t-5          0.27     0.82
EJR ΔBR t-6          0.19     0.31
SPR ΔFSR t-1         0.17     0.32
SPR ΔFSR t-2        -0.73    -1.89    *
SPR ΔFSR t-3        -0.49    -3.92    ***
SPR ΔFSR t-4         0.10     0.15
SPR ΔFSR t-5&6      -0.73    -2.64    *
Pseudo R2: 4.13%; No. of obs.: 124

Panel B: EJR BR follower, SPR FSR leader
EJR ΔBR t-1          0.10     0.39
EJR ΔBR t-2          0.33     2.14    **
EJR ΔBR t-3          0.12     0.30
EJR ΔBR t-4          0.41     0.94
EJR ΔBR t-5          0.39     1.54
EJR ΔBR t-6          0.51     2.25    **
SPR ΔFSR t-1        -0.16    -0.75
SPR ΔFSR t-2        -0.70    -3.27    ***
SPR ΔFSR t-3        -0.64    -0.73
SPR ΔFSR t-4        -0.80    -1.48
SPR ΔFSR t-5         0.41     1.48
SPR ΔFSR t-6         0.07     0.17
Pseudo R2: 4.99%; No. of obs.: 152

Panel C: FR FSR follower, EJR BR leader
EJR ΔBR t-1         -0.25    -0.62
EJR ΔBR t-2          0.36     2.35    **
EJR ΔBR t-3          0.19     1.29
EJR ΔBR t-4          0.47     1.14
EJR ΔBR t-5&6        0.32     1.32
FR ΔFSR t-1         -1.11    -5.47    ***
FR ΔFSR t-2         -1.17    -4.15    ***
FR ΔFSR t-3         -0.77    -2.70    ***
FR ΔFSR t-4         -0.73    -1.57
FR ΔFSR t-5          0.48     0.85
FR ΔFSR t-6         -0.14    -0.21
Pseudo R2: 13.60%; No. of obs.: 77

Panel D: EJR BR follower, FR FSR leader
EJR ΔBR t-1          0.18     0.68
EJR ΔBR t-2          0.34     1.45
EJR ΔBR t-3          0.21     0.32
EJR ΔBR t-4          0.80     1.76    *
EJR ΔBR t-5          0.63     1.79    *
EJR ΔBR t-6          0.50     1.59
FR ΔFSR t-1         -0.41    -1.07
FR ΔFSR t-2          0.50     1.18
FR ΔFSR t-3         -0.71    -1.51
FR ΔFSR t-4          0.87     0.80
FR ΔFSR t-5         -2.36    -2.40    **
FR ΔFSR t-6         -0.35    -0.57
Pseudo R2: 7.66%; No. of obs.: 128
Predictive ordered probit models are used to test lead-lag relations. ΔBR (ΔFSR) is a change in BR
(FSR). Dependent and independent variables take one of the following values: -1 and -2 for downgrades
of one notch and of more than one notch, respectively; +1 and +2 for upgrades of one notch and of more
than one notch, respectively. The independent variables are monthly lags up to 6 months (t-6). SPR ΔFSR
(t-5) and (t-6) in Panel A, and EJR ΔBR (t-5) and (t-6) in Panel C, are merged because of the small
number of observations. The remaining acronyms are defined in Table 3. “Coeff.”, “Z-val.” and “SS”
stand for coefficient, z-value and statistical significance, respectively. Huber-White robust standard
errors are used, with significance at 1% (***), 5% (**) and 10% (*).
Table 7A: Event Studies to Downgrades in Bond Ratings by Investor-paid and Issuer-paid Rating Agencies.
Columns per agency block: CAAR, PW CAAR, PSS, GSSS and, for S&P and Fitch, MD and SS.
Significance marks follow the statistic they qualify; blank cells are omitted. Blocks, left to right:
Egan Jones (08/97-12/07) | S&P (05/96-12/07) | Fitch Ratings (01/00-12/07).

Days
(0,0)       -4.55%  -3.92%  *** ***  |  -2.26%  -1.36%  *** *  -2.29% **  |  -1.73%  -1.48%  ***  -2.82% ***
(-1,+1)     -6.50%  -5.76%  *** ***  |  -3.55%  -1.98%  ***  -2.95%      |  -2.52%  -2.13%  ***  -3.98%
(-2,+2)     -7.44%  -6.61%  *** ***  |  -5.17%  -3.01%  ***  -2.27%      |  -5.04%  -4.75%  ***  -2.40%
(-3,+3)     -7.00%  -6.21%  *** ***  |  -6.01%  -3.45%  *** *  -0.99%    |  -6.41%  -5.88%  *** **  -0.59% **
(-10,+10)  -11.03% -10.35%  *** ***  |  -7.43%  -5.15%  ***  -3.60%      |  -8.01%  -7.65%  *** **  -3.02%
(-30,+30)  -16.77% -17.21%  *** ***  |  -9.79%  -5.94%  ***  -6.98% *    | -11.47% -11.23%  *** ***  -5.30%
Pre-Event
(-30,-1)    -8.31%  -8.75%  *** ***  |  -8.67%  -6.68%  *** *  0.36%     | -12.35% -11.61%  *** ***  4.04%
(-20,-1)    -6.36%  -6.29%  *** ***  |  -5.76%  -4.49%  ***  -0.60%      | -10.06%  -9.44%  *** ***  3.70%
(-10,-1)    -4.31%  -4.26%  *** ***  |  -5.38%  -4.69%  *** *  1.07%     |  -7.59%  -7.50%  *** ***  3.28%
(-3,-3)      0.09%   0.06%           |  -0.92%  -0.87%  ***  1.01%       |  -1.45%  -1.13%  *** *  1.54% **
(-2,-2)     -1.12%  -1.04%  *** ***  |  -1.18%  -0.77%  ***  0.06%       |  -2.57%  -2.79%  ***  1.45%
(-1,-1)     -1.52%  -1.45%  *** ***  |  -0.74%  -0.49%  ***  -0.78%      |  -0.37%  -0.45%  **  -1.15% **
Post-Event
(+1,+10)    -2.17%  -2.16%  ***      |   0.21%   0.90%  * **  -2.38%     |   1.31%   1.33%  **  -3.48% **
(+1,+20)    -2.77%  -3.10%  ***      |  -1.06%   0.59%  **  -1.71%       |   2.34%   2.12%  **  -5.10% **
(+1,+30)    -3.92%  -4.54%  ***      |   1.13%   2.09%  ** **  -5.05% *  |   2.61%   1.87%  ** **  -6.53% **
This table shows the results of event studies around announcements of downgrades in BRs by the investor-paid agency (EJR) and the issuer-
paid agencies (S&P and Fitch). The sample size is 107 for Egan Jones Ratings, 54 for S&P and 56 for Fitch Ratings. Event windows are given in
the column “Days”. “CAAR” is the Cumulative Average Abnormal Return. “PW CAAR” is the precision-weighted cumulative average abnormal
return. “PSS” is the statistical significance of the Patell (1976) test. “GSSS” is the statistical significance of the Generalized Sign test. “MD”
is the mean difference from the respective event window in the EJR sample. The EJR sample shown has a BR by S&P (results are qualitatively
similar for the EJR sample matched with Fitch). The S&P and Fitch samples also have a bond rating (BR) from EJR. The level of significance
is denoted by the following symbols: * implies 0.10, ** implies 0.05 and *** implies 0.01.
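The event-study mechanics behind these tables can be sketched in a few lines: abnormal returns are firm returns minus a market-model prediction fitted over a pre-event estimation window, and the CAAR cumulates them over the event window and averages across firms. This is a minimal sketch under assumed data shapes; it does not reproduce the paper's exact estimation windows or the Patell and Generalized Sign test statistics.

```python
# Minimal market-model event-study sketch (illustrative, not the paper's
# exact estimation): AR_t = R_t - (alpha + beta * Rm_t), with alpha, beta
# fitted by OLS over an estimation window; CAAR averages cumulative
# abnormal returns across firms.

def ols_market_model(r, rm):
    """OLS of firm returns on market returns; returns (alpha, beta)."""
    n = len(r)
    mr, mm = sum(r) / n, sum(rm) / n
    beta = (sum((a - mr) * (b - mm) for a, b in zip(r, rm))
            / sum((b - mm) ** 2 for b in rm))
    return mr - beta * mm, beta

def caar(event_r, event_rm, params):
    """Cumulative Average Abnormal Return over the event window, averaged
    across firms; params[i] = (alpha_i, beta_i) for firm i."""
    cars = [sum(rt - (a + b * rmt) for rt, rmt in zip(r, rm))
            for r, rm, (a, b) in zip(event_r, event_rm, params)]
    return sum(cars) / len(cars)

# Toy example: one firm whose estimation-window returns follow the market
# model exactly, then underperform by 2% per day over a two-day window.
rm_est = [0.01, -0.02, 0.005, 0.03, -0.01]
r_est = [0.001 + 1.2 * x for x in rm_est]
params = [ols_market_model(r_est, rm_est)]
print(round(caar([[-0.019, -0.007]], [[0.0, 0.01]], params), 4))   # -0.04
```

The precision-weighted CAAR reported alongside weights each firm's abnormal return by the inverse of its estimation-window forecast variance rather than averaging equally.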
Table 7B: Event Studies to Upgrades in Bond Ratings by Investor-paid and Issuer-paid Rating Agencies.
Columns per agency block: CAAR, PW CAAR, PSS, GSSS and, for S&P and Fitch, MD and SS.
Significance marks follow the statistic they qualify; blank cells are omitted. Blocks, left to right:
Egan Jones (08/97-12/07) | S&P (05/96-12/07) | Fitch Ratings (01/00-12/07).

Days
(0,0)        1.25%   1.21%  *** **   |   0.25%   0.31%  *  0.99% **       |   0.05%   0.15%  1.20% ***
(-1,+1)      1.28%   1.27%  *** ***  |   1.23%   1.31%  *** ***  0.05%    |  -0.33%  -0.10%  1.61% ***
(-2,+2)      1.75%   1.76%  *** ***  |   0.38%   0.75%  * *  1.37% *      |  -0.58%  -0.31%  2.33% ***
(-3,+3)      1.36%   1.55%  *** **   |   0.51%   0.88%  *  0.85%          |  -1.03%  -0.71%  **  2.39% ***
(-10,+10)    1.55%   1.56%  **       |   0.85%   1.49%  * **  0.70%       |  -0.51%   0.13%  2.06% **
(-30,+30)    1.57%   1.53%           |  -1.31%  -0.19%  2.88%             |   0.35%   0.40%  1.22%
Pre-Event
(-30,-1)     1.44%   1.30%  *        |   0.03%   0.78%  1.41%             |   0.94%   0.79%  0.50%
(-20,-1)     0.20%   0.31%           |   0.23%   0.88%  -0.03%            |  -0.42%   0.07%  0.62%
(-10,-1)     0.25%   0.23%           |   0.46%   0.93%  -0.22%            |  -0.33%   0.06%  0.57%
(-3,-3)     -0.25%  -0.14%  **       |   0.01%  -0.05%  -0.26%            |  -0.21%  -0.22%  * *  -0.04%
(-2,-2)      0.15%   0.18%  **       |  -0.72%  -0.52%  ** *  0.87% ***   |  -0.04%   0.03%  0.19%
(-1,-1)      0.10%   0.11%           |   0.36%   0.46%  ** **  -0.26%     |  -0.49%  -0.34%  ** **  0.60% *
Post-Event
(+1,+10)     0.06%   0.12%           |   0.14%   0.25%  -0.08%            |  -0.23%  -0.08%  0.29%
(+1,+20)    -0.67%  -0.79%  **       |  -0.37%  -0.38%  -0.30%            |  -0.63%  -0.53%  *  -0.04%
(+1,+30)    -1.12%  -0.97%  **       |  -1.59%  -1.28%  0.48%             |  -0.64%  -0.53%  -0.48%
This table shows the results of event studies around announcements of upgrades in BRs by the investor-paid agency (EJR) and the issuer-
paid agencies (S&P and Fitch). The sample size is 68 for Egan Jones Ratings, 29 for S&P and 55 for Fitch Ratings. Event windows are given in
the column “Days”. “CAAR” is the Cumulative Average Abnormal Return. “PW CAAR” is the precision-weighted cumulative average abnormal
return. “PSS” is the statistical significance of the Patell (1976) test. “GSSS” is the statistical significance of the Generalized Sign test. “MD”
is the mean difference from the respective event window in the EJR sample. The EJR sample shown has a BR by S&P (results are qualitatively
similar for the EJR sample matched with Fitch). The S&P and Fitch samples also have a bond rating (BR) from EJR. The level of significance
is denoted by the following symbols: * implies 0.10, ** implies 0.05 and *** implies 0.01.
Table 8: Post-Announcement Drift of Investor-paid Agency’s Downgrades

Columns per block: CAAR, PW CAAR, PSS, GSSS; “MD, SS” is the mean difference between the two samples and its significance.
Significance marks follow the statistic they qualify; blank cells are omitted.

Days        Egan Jones Followed         |  Egan Jones not Followed  |  MD, SS
(+1,+10)    -3.70%   -3.86%  ***        |  -1.26%  -1.12%  **       |   -2.44%
(+1,+20)    -8.37%   -8.27%  *** *      |   0.57%   0.05%           |   -8.94% **
(+1,+30)   -10.62%  -11.65%  *** **     |   0.08%  -0.19%           |  -10.70% **
(+1,+40)    -8.76%  -10.26%  *** *      |  -0.61%  -1.03%           |   -8.15% *
(+1,+50)    -7.44%   -8.69%  *** *      |  -0.80%  -1.48%           |   -6.64%
(+1,+60)    -4.72%   -7.67%  ***        |   0.66%   0.17%  *        |   -5.39%
This table shows the results of event studies on the sample of downgrades by the investor-paid rating agency. “Egan Jones Followed” comprises
all downgrades by Egan Jones that are followed by other downgrades from S&P, Fitch or Egan Jones in the next ninety days (n=40). “Egan Jones
not Followed” comprises all downgrades by Egan Jones minus “Egan Jones Followed” (n=67). Event windows are given in the column “Days”.
“CAAR” is the Cumulative Average Abnormal Return. “PW CAAR” is the precision-weighted cumulative average abnormal return. “PSS” is the
statistical significance of the Patell (1976) test. “GSSS” is the statistical significance of the Generalized Sign test. “MD” is the Mean Difference
of “Egan Jones Followed” minus “Egan Jones not Followed”. “SS” is the one-sided statistical significance of MD. The level of significance is
denoted by the following symbols: * implies 0.10, ** implies 0.05 and *** implies 0.01.
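The "Followed / not Followed" split described above can be sketched directly. This is an illustrative implementation of the ninety-day rule in the note; the firm names and data structures are hypothetical, not from the paper's data.

```python
from datetime import date, timedelta

# Illustrative sketch of Table 8's sample split: an Egan Jones downgrade is
# "followed" if any other downgrade of the same insurer (by S&P, Fitch, or
# Egan Jones itself) occurs within the next ninety days.
WINDOW = timedelta(days=90)

def split_followed(ejr_downgrades, other_downgrades):
    """Both inputs are lists of (firm, date) events.
    Returns (followed, not_followed) lists of EJR downgrade events."""
    followed, not_followed = [], []
    for firm, d0 in ejr_downgrades:
        hit = any(f == firm and d0 < d <= d0 + WINDOW
                  for f, d in other_downgrades)
        (followed if hit else not_followed).append((firm, d0))
    return followed, not_followed

# Hypothetical firms: InsCo A sees an S&P downgrade 45 days later; InsCo B does not.
ejr = [("InsCo A", date(2003, 3, 1)), ("InsCo B", date(2003, 3, 1))]
others = [("InsCo A", date(2003, 4, 15))]
fol, not_fol = split_followed(ejr, others)
print(len(fol), len(not_fol))   # 1 1
```

The post-event CAARs in Table 8 are then computed separately on each of the two resulting samples.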
Appendix Figure A1: Difference in Stock Price Changes between the U.S. Insurance Industry and U.S. Market Indices (1/1/1997 - 21/12/2007)
Source: SNL Financial website (2011): www.snl.com
[Line chart omitted: index levels from Jan-96 to Jan-08 for SNL U.S. Insurance, DJIA and S&P 500.]
Table A1: Lead-lag Between Issuer-paid Changes in Financial Strength Ratings and Bond Ratings (up to 12/2009)

Panel A: S&P FSR follower, S&P BR leader
Independent Var.     Coeff.   Z-val.   SS
S&P ΔBR t-1          -0.03    -0.12
S&P ΔBR t-2           0.56     1.74    *
S&P ΔBR t-3           0.63     2.77    ***
S&P ΔBR t-4           0.57     1.84    *
S&P ΔBR t-5           0.51     1.61
S&P ΔBR t-6           0.54     1.28
S&P ΔFSR t-1         -0.12    -0.65
S&P ΔFSR t-2         -0.09    -0.32
S&P ΔFSR t-3          0.02     0.13
S&P ΔFSR t-4          0.02     0.10
S&P ΔFSR t-5         -0.25    -0.97
S&P ΔFSR t-6         -0.31    -1.70    *
Pseudo R2: 2.62%; No. of obs.: 662

Panel B: S&P BR follower, S&P FSR leader
S&P ΔBR t-1          -0.12    -0.41
S&P ΔBR t-2          -0.18    -0.48
S&P ΔBR t-3          -0.11    -0.40
S&P ΔBR t-4           0.92     2.81    ***
S&P ΔBR t-5           0.49     1.30
S&P ΔBR t-6          -0.11    -0.32
S&P ΔFSR t-1          0.13     0.32
S&P ΔFSR t-2          0.27     0.81
S&P ΔFSR t-3          0.91     2.91    ***
S&P ΔFSR t-4         -0.56    -1.63
S&P ΔFSR t-5         -0.18    -0.54
S&P ΔFSR t-6          0.32     0.87
Pseudo R2: 3.20%; No. of obs.: 424

Panel C: Fitch FSR follower, Fitch BR leader
Fitch ΔBR t-1         1.07     2.73    ***
Fitch ΔBR t-2        -0.35    -0.81
Fitch ΔBR t-3         1.35     2.88    ***
Fitch ΔBR t-4         0.63     1.52
Fitch ΔBR t-5        -0.57    -0.79
Fitch ΔBR t-6         0.74     2.05    **
Fitch ΔFSR t-1        0.10     0.20
Fitch ΔFSR t-2       -0.05    -0.13
Fitch ΔFSR t-3       -0.73    -2.30    **
Fitch ΔFSR t-4       -0.24    -0.68
Fitch ΔFSR t-5        1.39     2.53    **
Fitch ΔFSR t-6        0.30     1.11
Pseudo R2: 9.44%; No. of obs.: 215

Panel D: Fitch BR follower, Fitch FSR leader
Fitch ΔFSR t-1        1.19     4.14    ***
Fitch ΔFSR t-2        0.30     0.95
Fitch ΔFSR t-3       -0.03    -0.22
Fitch ΔFSR t-4        1.37     3.99    ***
Fitch ΔFSR t-5        0.13     0.46
Fitch ΔFSR t-6        0.14     0.50
Fitch ΔBR t-1         0.41     1.43
Fitch ΔBR t-2         0.20     0.84
Fitch ΔBR t-3         1.14     3.05    ***
Fitch ΔBR t-4        -0.89    -1.86    *
Fitch ΔBR t-5         0.63     1.84    *
Fitch ΔBR t-6         0.45     1.23
Pseudo R2: 7.95%; No. of obs.: 217
Predictive ordered probit models are used to test lead-lag relations. ΔBR (ΔFSR) is a change in BR
(FSR). Dependent and independent variables take one of the following values: -1 and -2 for downgrades
of one notch and of more than one notch, respectively; +1 and +2 for upgrades of one notch and of more
than one notch, respectively. The independent variables are monthly lags up to 6 months (t-6). The
remaining acronyms are defined in Table 3. “Coeff.”, “Z-val.” and “SS” stand for coefficient, z-value and
statistical significance, respectively. Huber-White robust standard errors are used, with significance at
1% (***), 5% (**) and 10% (*). Because of the small number of observations, Fitch ΔFSR t-1 and t-2 are
merged, as are Fitch ΔFSR t-5 and t-6. S&P’s (Fitch’s) sample extends from May 1996 (January 2000) to
December 2009.
Table A2: Market Reactions to Changes in Financial Strength Ratings by Issuer-paid Rating Agencies.

Columns per agency block: CAAR, PW CAAR, PSS, GSSS, MD, SS. Significance marks follow the
statistic they qualify; blank cells are omitted.

Panel A: DFSRs by SPR (05/96-12/07) | DFSRs by FR (01/00-12/07)
Days
(0,0)       -2.01%  -1.14%  *** **  -2.53% **   |  -2.15%  -1.27%  *** **  -2.39%
(-1,+1)     -1.38%  -0.39%  -5.12% ***          |  -3.02%  -2.01%  ***  -3.48% **
(-2,+2)     -2.70%  -1.56%  ***  -4.73% ***     |  -4.48%  -2.94%  *** *  -2.96% *
(-3,+3)     -3.67%  -2.07%  ***  -3.33% **      |  -5.08%  -3.36%  *** **  -1.92%
(-10,+10)   -5.92%  -4.09%  *** *  -5.11% **    |  -4.06%  -3.13%  ***  -6.97% **
(-30,+30)   -9.06%  -5.43%  *** **  -7.71% **   |  -6.87%  -7.03%  *** **  -9.90% **
Pre-Event
(-30,-1)    -8.79%  -6.83%  *** **  0.48%       | -11.78% -12.26%  *** **  3.47%
(-20,-1)    -6.15%  -4.86%  ***  -0.21%         |  -7.82%  -7.28%  *** *  1.46%
(-10,-1)    -4.03%  -3.40%  *** ***  -0.28%     |  -6.18%  -5.71%  ***  1.87%
(-3,-3)     -1.07%  -0.83%  *** **  1.16% **    |  -0.71%  -0.81%  ***  0.80%
(-2,-2)     -1.32%  -1.38%  ***  0.21%          |  -2.31%  -2.11%  *** **  1.20%
(-1,-1)      0.87%   1.10%  ***  -2.40% ***     |  -2.53%  -2.44%  ***  1.01%
Post-Event
(+1,+10)     0.10%   0.45%  -2.27% *            |   4.28%   3.85%  ***  -6.45% ***
(+1,+20)     0.14%   1.04%  *  -2.91%           |   6.78%   7.29%  *** **  -9.54% ***
(+1,+30)     1.72%   2.54%  ***  -5.63% **      |   7.06%   6.50%  *** *  -10.98% ***
(+1,+40)     1.74%   2.53%  ** **  -5.40% **    |   5.23%   3.85%  ***  -8.89% **

Panel B: UFSRs by SPR (05/96-12/07) | UFSRs by FR (01/00-12/07)
Days
(0,0)        0.03%   0.17%  1.21% ***           |  -0.39%   0.07%  1.64% **
(-1,+1)      0.26%   0.40%  1.01% *             |   0.34%   0.36%  0.94% *
(-2,+2)      0.54%   0.49%  1.21%               |  -0.16%   0.03%  1.91% **
(-3,+3)      0.96%   1.33%  ** **  0.39%        |  -0.78%  -0.87%  2.14% **
(-10,+10)   -0.39%   0.56%  1.94%               |   3.18%   1.62%  **  -1.63%
(-30,+30)   -3.70%  -1.48%  5.27% *             |  -0.10%  -2.40%  1.66%
Pre-Event
(-30,-1)    -0.55%   0.68%  1.99%               |   2.09%   0.18%  *  -0.65%
(-20,-1)    -0.29%   0.95%  0.49%               |   3.20%   1.05%  -3.00% **
(-10,-1)     0.54%   0.97%  -0.30%              |   3.00%   1.12%  -2.75% *
(-3,-3)      0.38%   0.62%  *** *  -0.63% **    |  -0.39%  -0.62%  ** ***  0.14%
(-2,-2)     -0.03%  -0.25%  0.18%               |  -0.24%  -0.23%  0.38%
(-1,-1)      0.15%   0.10%  -0.05%              |  -0.17%  -0.13%  0.27%
Post-Event
(+1,+10)    -0.97%  -0.59%  1.03%               |   0.57%   0.43%  -0.51%
(+1,+20)    -1.70%  -1.02%  *  1.02%            |  -0.13%  -0.82%  -0.54%
(+1,+30)    -3.18%  -2.34%  **  2.07%           |  -1.79%  -2.65%  *  0.68%
(+1,+40)    -2.39%  -1.70%  0.66%               |  -1.32%  -2.65%  *  -0.41%
This table shows event studies around announcements of changes in FSRs by the issuer-paid rating agencies (S&P and Fitch). Event windows are
given in the column “Days”. DFSRs (UFSRs) are downgrades (upgrades) in FSRs. Sample sizes are 87, 48, 33 and 22 for S&P’s DFSRs, Fitch’s
DFSRs, S&P’s UFSRs and Fitch’s UFSRs, respectively. “CAAR” is the Cumulative Average Abnormal Return. “PW CAAR” is the precision-weighted
cumulative average abnormal return. “PSS” is the statistical significance of the Patell (1976) test. “GSSS” is the statistical significance of the
Generalized Sign test (negative for downgrades; positive for upgrades). “MD” is the mean difference from the respective event window in the
sample of EJR (Table 5). The EJR sample shown has a BR by S&P (results are qualitatively similar for the EJR sample matched with Fitch).
The S&P and Fitch samples also have a bond rating (BR) from EJR. The level of significance is denoted by the following symbols: * implies 0.10,
** implies 0.05 and *** implies 0.01, using a one-tail test. The S&P and Fitch samples are chosen to have both a BR and an FSR by the same
credit rating agency. There are no qualitative differences in event studies conducted with other sample sizes.