RUNNING HEAD: Content analysis in public relations 1
“Insuring” public relations success: Content analysis as a
benchmark for evaluating media outreach
Melissa Cibelli
University at Albany
Abstract
Public relations is seen as the business-oriented field within the realm
of communication. As a result, it is a field often subjected to the same
standards as other business sectors, particularly standards of performance
measurement.
However, because the field's results are open to highly subjective
interpretation, the problem of assessment becomes obvious. How can it be
determined which organizational goals public relations efforts are meeting?
How can one measure intangible outcomes, like reputation?
Existing methods for evaluating success in the field are varied and are
a common topic of controversy in communication journals. Though there is
much disagreement on how exactly measurement should be done, there is
general agreement that developing metrics for assessment will give the
profession greater validity and help evaluate the effectiveness of specific
strategies.
In the following study, the media relations program of the trade
association Professional Insurance Agents of New York State Inc. was
reviewed and a relevant content-analysis tool was developed for the
organization’s use. From this instrument, an initial, benchmark analysis was
conducted that will serve as the historical reference point for future media
evaluation. This paper details the rationale for the creation of this
measurement tool, as well as the results from the initial analysis of public
relations outcomes.
“Insuring” public relations success: Content analysis as a
benchmark for evaluating outreach campaigns
Public relations is a type of applied communication; thus, it makes
sense to frame its study from the perspective of a business enterprise.
(Botan & Taylor, 2004) It is a field of function and pragmatic practices, a
tool to reach organizational ends. (Botan & Taylor, 2004) Common goals of
public relations include: image improvement, development of a higher
profile, changing of public attitudes, enhancing community relations,
increasing market share, influencing public policies, solidifying investor
relations and driving industrial relationships. (Bland, Theaker, & Wragg,
2005)
Because public relations is not an obviously quantitative field, like
accounting or even sales, there is no clearly evident report or set of figures
that can be easily produced in order to determine the value of an organization’s
communications program. Methods for evaluating success in the field are
varied and are a common topic of discussion and controversy in
communication journals. In fact, in the next few years, public relations is
poised to become one of the most researched areas of communication.
(Botan & Taylor, 2004)
With sales, one can measure end-of-year figures against a set of
objectives at the beginning of the year. But public relations is much more
nebulous; how can you measure the goal of increased visibility and
awareness? How can you track all the instances that a consumer recalled
your brand as a result of some positive press he or she read in the local
newspaper? Such black-and-white, A-to-B type measurement is not just
impractical, but also, nearly impossible to conduct in this field.
Though there is much disagreement on how to measure, there is one
issue on which almost all academics and practitioners seem to agree: it
needs to be measured. Historically, through clip collecting and media
monitoring, tracking volume was the ultimate determinant of success in the
field—quantity was seen as better than quality. (Bland, Theaker, & Wragg,
2005) However, practitioners have realized what Baikaltseva and
Leinemann (2004) have asserted—a mere collection of clips is a
vain and, ultimately, ineffective process. Instead, today's professionals are
searching for something more, seeking out metrics to analyze public
relations outcomes to not only give validity to the field in the business
world, but also to help practitioners assess the effectiveness of their efforts.
With the pressure for accountability mounting, public relations
professionals must demonstrate that their work activities actually help
achieve meaningful goals for the organization, or their clients. (Hon, 1998)
The research that follows provides an example of how practitioners in
the field can develop a measurement and evaluation scheme that best suits
the media relations program of one organization, using a specific
company as an example. This study will establish what is known as a
benchmark analysis of coverage, from which future assessments of
effectiveness and impact can be based.
Why measure public relations?
Before explaining the best practices for measuring and evaluating
public relations outcomes, it is important to understand why it is even
necessary to undertake such an enterprise. There exists a chorus of expert
commentary on this very subject. The majority of opinion is in line with the
thoughts of Hon (1998), that through evaluation, practitioners can actually
demonstrate how public relations achieves organizational goals, directly or
indirectly. As Baikaltseva and Leinemann (2004) noted, “You can only
manage what you measure.”
Unfortunately, as mentioned earlier, public relations is not a field that
lends itself easily to measurement. Many see the field as similar to figure
skating—something beautiful, and easy to see when it is good or bad. But,
ultimately, it is an endeavor that is entirely subjective, based on individual
judges' opinions. (Baikaltseva & Leinemann, 2004) That does not mean that
measurement and evaluation are fruitless enterprises—it seems that
management knows what public relations is worth on some deeper level.
(Hon, 1998) Yet still, it remains intangible. For this reason, it is up to the
professionals to translate its value for management. A practitioner's
intuitive “sixth sense” is not enough. (Burton, 1966)
Explaining the necessity of a public relations function can be very
practical. After all, it provides justification for the very existence of such a
department. Marketing communication is often the first area to get cut from
the budget, simply because there are no immediately observable effects on
sales or profit. (Baikaltseva & Leinemann, 2004) Geduldig (as cited in Hon,
1998) puts it this way:

     A hard-nosed manager would have a tough job evaluating a function
     that cannot be defined and can do well when it does nothing … Don’t
     expect others to buy public relations on faith. If public relations
     doesn’t set standards of measurement that are both objective and
     meaningful, management will apply its own, and the value of public
     relations will ultimately be measured against the bottom line. (p. 6)
Proving your worth is no longer as simple as showing evidence of volume or
claiming public relations and reputation evaluation to be intangible and not
subject to measure—managers are demanding quantifiable results of
practitioners. (Bland, Theaker, & Wragg, 2005)
Measurement and evaluation actually have a greater, nobler
purpose than job security for public relations. They help determine whether
professionals are meeting the objectives set for communicative efforts.
Through assessment, information is collected to better complete work-
related tasks, benchmark data on audience views, monitor important or new
programs, evaluate effectiveness and ultimately plan and refine public
relations campaigns. (Lindenmann, 2006) The evaluation helps determine if
awareness is being created and is particularly valuable when sought in
comparison to a competitor, to see who is getting the most benefit from
media relations. (Baikaltseva & Leinemann, 2004) Consider that research
has found that a strong relationship exists between media coverage and
business results. (Jeffrey, Michaelson, & Stacks, 2006)
Further, measurement can facilitate the perpetual cycle of planning
and refining—after all, what good is assessment if you do not learn from
your mistakes? (Baikaltseva & Leinemann, 2004) This approach to research
is known as scientific management and is supported by a number of studies.
(Broom & Dozier, 1990; Bland, Theaker, & Wragg, 2005) This method helps
track a campaign before and after it is implemented, from start to finish,
and in a cyclical manner. First, research defines the problem; second, the
implementation is monitored while appropriate adjustments are made; and
finally, the impact is measured against the objectives of the campaign.
(Broom & Dozier, 1990)
Measurement tactics in public relations have been around for more
than 60 years. (Lindenmann, 2005) Unfortunately, such measurement has
traditionally been restricted to counting and ranking media coverage; really, it is changes in
behavior and knowledge that are the most valuable measures of
effectiveness. (Austin & Pinkleton, 2006) In fact, 84 percent of public
relations practitioners cited clip counting as their main method for
assessing results. (Jeffrey, Michaelson, & Stacks, 2006) Though more
frequent mentions cannot be directly tied to business outcomes, there is no
doubt that the more often an organization is mentioned, the more likely it
will be noticed. (Carney, 1972) For this reason, clip counting and tracking
has remained a key method of assessment. But in order to properly assess
the ultimate outgrowths of media relations campaigns using clip
monitoring, a starting point must be established.
Benchmarking: The beginning of effective measurement
Benchmarking is a measurement technique that requires an
organization to examine its own practices and those of its
competitors, then compare the two in order to make future
assessments of its work. (Stacks, 2006) The process involves creating a base—
or benchmark—that is used to improve future efforts and media coverage,
taking into account the aspects of media coverage that are most important
to the company. (Austin & Pinkleton, 2006) When conducting such an
analysis in the field of public relations as a whole, it is often called a
communication audit, specifically referring to a systematic review of how an
organization communicates with its stakeholders and audiences. (Stacks,
2006)
So why benchmark? Because it is the best starting point for
quantifying media placements in a valuable way. It is the first step towards
a more systematic type of measurement and ongoing evaluation of public
relations efforts. (Hon, 1998) After all, just because you earn publicity in
the media, it does not mean that you have really effectively shared a
message—a benchmark analysis is the initial mark from which future
evaluation can be conducted.
Furthermore, the typical, one-dimensional measurement of public
relations practices tells us nothing meaningful—you need to compare
yourself to a competitor or to a benchmark for the measurement to have value.
(Fraser, 2002) You need something to measure your efforts against, which is
why it is so important to consider the work of key competitors. (Baikaltseva
& Leinemann, 2004; Jeffrey, Michaelson, & Stacks, 2006) By conducting a
benchmark analysis and comparing your organization's public relations
outcomes to those of a competitor, you embrace the truly cyclical nature of
the field and can more effectively set objectives for future campaigns.
(Baikaltseva & Leinemann, 2004)
Figure 1. The public relations process. From Baikaltseva & Leinemann, 2004, p. 7.
Situation analysis: Professional Insurance Agents of New York
The Professional Insurance Agents of New York State Inc. (PIANY) is a
trade association representing professional, independent property/casualty
insurance agencies, brokerages and their employees throughout the state.
Its goal is to provide member agents with all the benefits they need to
better run their businesses, including: a focused legislative voice through
lobby efforts, information on the latest developments in the industry, timely
education and certifications, networking opportunities and more. (“About
PIA,” 2010)
PIANY's media relations strategy is heavily focused on trade
publications, specifically weekly and monthly magazines dedicated to
coverage of the property/casualty insurance industry. Effective publication
targeting is key to successful public relations efforts and the association
clearly understands this; as a result, it works with a small, focused group
of reporters and editors. (Bland, Theaker, & Wragg, 2005)
Currently, the communications staff at PIANY engages in virtually no
public relations evaluation and instead relies on clip counting to determine
the association's reach. Though this may be an effective tracking method, it
does not provide the complete picture of media reach that other types of
analysis might offer.
In addition, PIANY faces another challenge—a direct competitor
association exists, the Independent Insurance Agents and Brokers of New
York (IIABNY). However, the existence of this organization provides a
unique opportunity for comparative competitive analysis between the two
groups, to help determine which association is more effective at obtaining
media coverage in the trade press.
The goal of this study is to conduct a benchmark analysis of media
coverage in specific trade publications for PIANY. This will be accomplished
by (1) assessing the trade media landscape for the target publications of the
association and (2) developing a relevant coding system to analyze the
content of these publications. In tandem, PIANY's coverage will be
compared to that of the IIABNY, specifically looking at the type and tone of
coverage (to be elaborated on in the Methods section). The resulting data
and coding system will provide a base for future use in media relations
tracking to help assess the output of PIANY's public relations efforts.
Methods
For this project, a content analysis is the most suitable method for
establishing a benchmark point from which to conduct further evaluative
research in the future. This is because content analysis goes beyond basic,
informal clip gathering, which measures only outputs and not outcomes.
(Austin & Pinkleton, 2006) Content analysis
makes communication content into something more quantitative and
numerical by transforming it from anecdotal, subjective information into
data that is systematic and countable. This is the case because rather than
being based on informal observations of a researcher, the data is seen
through the lens of a pre-developed coding scheme, with specific numerical
classifications that can be subject to statistical analysis. (Austin &
Pinkleton, 2006)
The reason for the move to a more quantitative analysis is clear: it
makes public relations more tangible, more quantifiable. As Lindenmann
(2006) noted, it “help[s] provide better analysis of communication and
marketing efforts, as you have reliable numbers to substantiate any changes
that have taken place.” (p. 12)
Though content analysis by itself can only describe communication,
not evaluate it, by using defined criteria and objectives the results of one
study can be compared to another to determine whether goals have been met.
(Berelson, 1971) Regular content analysis can be a form of public relations
effectiveness evaluation, providing the historical record from which future
decision-making can be guided. (McLellan & Porter, n.d.) Without looking at
past performance, a public relations practitioner would be unable to
determine what strategies have been the most and least effective in getting
through to a targeted publication or audience. Dr. Walter Lindenmann
developed the theoretical yardstick that serves as the rationale for content
analysis as an appropriate measurement model for public relations. He
advocated for an analysis of outputs (the impressions or actual placements,
e.g., total number of stories), outgrowths (assessments of audience
understanding of shared content) and, ultimately, outcomes (the changed
behaviors of the target audience). (Baikaltseva & Leinemann, 2004) Though
this project will not address the outgrowths and outcomes of public
relations efforts by the studied associations, it will take a look at outputs,
providing a starting point for future evaluation.
Sample
A good sample must be reflective of the overall content it attempts
to represent. The sample must be comprehensive and logical for the
intended goal of the study while, at the same time, remaining manageable for
research purposes. (Austin & Pinkleton, 2006) The content to be assessed
includes one year's worth of publications from three prominent weekly and
biweekly insurance industry trade publications (to be described in further
detail). The amount of material was enormous, so quota sampling
was employed to narrow down the selection. As such, the sample was
picked to meet a specific “quota” and selection ended when that quota was
met, attempting to be representative of the general distribution of all
considered content. (Stacks, 2006) When considering weekly publications,
the cycle of content is not as impactful as when looking at daily newspapers;
you do not need to worry about one issue suddenly dominating a news cycle
as much. (Fico, Lacy, & Riffe, 1998) The strategy used was a modification of
Fico, Lacy, and Riffe's (1998) recommended “representative year” method:
take one issue from each month to analyze in order to create an accurate
representation of a news year. However, because the sample included both
weekly and biweekly publications, this method was impractical. Instead, I
opted to select three issues, one of each publication, from a given month at
random, from January 2009 to December 2009, resulting in a total of 36
magazines in the sample.
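The month-by-month selection described above can be sketched in code. This is an illustrative sketch only: the issues-per-month counts and the random seed are my own assumptions, not details from the study.

```python
import random

# Hypothetical issues-per-month counts; the titles match the studied
# magazines, but the counts and seed are assumptions for illustration.
ISSUES_PER_MONTH = {
    "National Underwriter Property and Casualty": 4,  # weekly
    "Insurance Journal (East edition)": 2,            # biweekly
    "Insurance Advocate": 2,                          # biweekly
}

def draw_sample(year=2009, seed=1):
    """Pick one issue per publication per month, at random."""
    rng = random.Random(seed)
    sample = []
    for month in range(1, 13):
        for pub, n_issues in ISSUES_PER_MONTH.items():
            issue = rng.randint(1, n_issues)  # which issue within the month
            sample.append((pub, year, month, issue))
    return sample

sample = draw_sample()
print(len(sample))  # 3 publications x 12 months = 36 issues
```

Seeding the generator makes the draw reproducible, which mirrors the methodological goal of a documented, repeatable sampling procedure.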
The unit of analysis for this study was the article, which comprised a
range of formats, including columns, features, Q&As, etc., to be assessed
within the framework of the coding scheme. (Austin & Pinkleton, 2006) A
pilot study was conducted, as recommended by Broom and Dozier (1990), in
order to develop an appropriate sample and coding scheme. As a result, the
definition of the unit of analysis “article” became specific to the following:
Feature/cover story: A story either featured on the cover of
the magazine, or part of the focal topic of the issue.
Column: A regular or semi-regular piece, often on industry
trends and events, written by an expert.
Information/announcements: Usually straight reporting, but
on minor announcements, such as personnel, awards, etc.
News report: Straight reporting on a current event, with
little-to-no editorializing.
Editorial/opinion: Typically written by the publication’s editor,
at the beginning of an issue.
Letter to the editor: Correspondence submitted to the editor
regarding the most recent issue of the magazine.
For this analysis, determining what length constituted an “article” was
straightforward, as each publication had defined beginnings and endings for
a given piece; I used the individual publication’s standards as a guide.
Before coding, each issue in the sample was pre-screened to tally the
number and types of articles present and to identify pieces that mentioned
either PIANY or IIABNY. Out of a total
of 615 articles, 46 were included in the sample, based on a mention of
either association. Mentions of parent associations (i.e., PIA National,
Independent Insurance Agents and Brokers of America) and sister
associations (i.e., Massachusetts Independent Agents Association,
Professional Insurance Agents of Connecticut) were not considered in the
study.
The three publications included in the sample were National
Underwriter Property and Casualty, Insurance Journal (East edition), and
Insurance Advocate. They were selected for their importance to PIANY as
media targeted to the association’s members and also for their varying
geographic and circulation reach.
Large-scale, national publication: National Underwriter
Property and Casualty is the most popular trade magazine
serving the property/casualty insurance industry in the United
States. With a circulation of more than 74,000 and a pass-along
readership of 116,300 insurance agents and brokers, it is the
leading weekly publication in the industry and a foremost
authority on commercial and personal lines news. (Summit Media,
2010)
Mid-market, regional magazine: Insurance Journal is the most
widely read trade publication of independent insurance agents.
Though it has a national, biweekly circulation of 42,021, for the
purposes of this study, I chose to focus on the East Coast edition,
which covers news from Maine to Virginia. The East edition
circulation stands at 9,545. The magazine provides regional,
national, and international news to industry professionals.
(Insurance Journal, 2010)
Local, state-specific coverage: Insurance Advocate is a
leading trade publication for insurance professionals in New
York state, New Jersey, and Connecticut. It is a biweekly
magazine with a circulation of 5,300 insurance agents, company
executives, and other professionals in the industry. Topics
covered include new and niche markets, legislative issues, and
industry developments. (Insurance Advocate, 2010)
Coding system
The importance of creating a specific and well-defined coding system
cannot be overstated. Coding rules reduce the interpretation needed
by individual coders and are essential for creating an acceptable level of
reliability in any content analysis research. (Clegg Smith et al., 2002) On a
very basic level, a coding scheme must include descriptive variables of the
piece to be analyzed—a study-specific identification number for reference,
date of publication, and source. The remaining categories are dependent on
the content of the actual database, but have three things in common
regardless of the particulars of the study—they must be mutually exclusive,
exhaustive, and reliable among multiple coders. (Austin & Pinkleton, 2006;
Holsti, 1969) With this in mind, I performed a pilot test, as recommended by
Broom and Dozier (1990), whereby I began examining a handful of content
in the sample to determine the best categories to consider for the final
coding scheme.
First, I opted to include a variety of descriptive information in the
scheme. Included were an identification number assigned for the study (simply
numbering the publications in order of analysis) and several particulars
about the publication in which the content was found: title and date of publication.
These were adopted from the recommendations of Austin and Pinkleton
(2006) as well as for reasons of practicality.
Next, the actual subject and context of the piece was dissected:
general article subject, any secondary topics of importance, mentioned
associations (PIANY, IIABNY, both), and type of piece (feature, news report,
editorial, letter, column, information, etc.). Again, Austin and Pinkleton's
(2006) suggestions provided the framework for this piece of the scheme.
The pilot test resulted in the following topic categories: personal lines,
commercial lines, legislation, regulation, event-specific coverage, education,
technology, carrier relations, and personnel. Several secondary topics were
found to be of interest, including: producer compensation disclosure,
automobile insurance, health care, the economy, lobbying activities, and
excess/surplus lines.
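A minimal sketch of how one coding record might be represented follows. The field names paraphrase the categories described above; the labels in the study's actual codebook may differ, and the validation rule simply enforces the mutually exclusive, exhaustive requirement.

```python
from dataclasses import dataclass, field

# Category values paraphrased from the pilot-test scheme in the text;
# exact codebook labels are assumptions for illustration.
ARTICLE_TYPES = {"feature/cover story", "column", "information/announcement",
                 "news report", "editorial/opinion", "letter to the editor"}
TOPICS = {"personal lines", "commercial lines", "legislation", "regulation",
          "event-specific coverage", "education", "technology",
          "carrier relations", "personnel"}

@dataclass
class CodedArticle:
    study_id: int
    publication: str
    pub_date: str          # e.g., "2009-03"
    article_type: str
    topic: str
    secondary_topics: list = field(default_factory=list)
    mentions_piany: bool = False
    mentions_iiabny: bool = False

    def __post_init__(self):
        # Mutually exclusive, exhaustive categories: reject unknown values.
        if self.article_type not in ARTICLE_TYPES:
            raise ValueError(f"unknown article type: {self.article_type!r}")
        if self.topic not in TOPICS:
            raise ValueError(f"unknown topic: {self.topic!r}")
```

Rejecting out-of-vocabulary values at data entry is one concrete way to implement the "definitive rules must be established prior to coding" requirement discussed below.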
The more difficult part of developing this coding scheme came about
when considering variables to assess. Some tend to be quite subjective; to
avoid this issue, definitive rules must be established prior to coding, giving
clear guidelines to the participating coders in order to achieve as much
objectivity as possible. (Holsti, 1969)
One variable considered was prominence, referring to the placement
of a message in an article, or simply the placement of a piece within a
publication. (Holsti, 1969; Williams, 2009) The more prominent a piece, the
more likely it is to be read by the news consumer—we tend to read articles
that grab our attention right away. Prominence can refer to many things, including
the size of an article, inclusion of an image with a piece, mention of
association in a headline, etc. For this study, prominence was tallied in a
number of ways, including article location (those towards the front/center are
more likely to be read than those in the back), inclusion of an image (to
grab audience attention), inclusion of an expert quote (and how many),
headline mention, cover mention, order of association mention (first
mention getting a greater weight), and finally, article length in pages
(longer articles being more valuable than shorter blurbs).
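One hypothetical way to combine the prominence indicators above into a single tally is an additive score. The weights below are assumptions for illustration only; the study tallied these indicators but did not publish a scoring formula.

```python
def prominence_score(article):
    """Sum weighted prominence indicators for one coded article (a dict).

    Weights are illustrative assumptions, not values from the study.
    """
    score = 0.0
    score += 2.0 if article.get("front_half") else 0.0       # front/center placement
    score += 1.0 if article.get("has_image") else 0.0        # image grabs attention
    score += 0.5 * article.get("expert_quotes", 0)           # each expert quote
    score += 3.0 if article.get("headline_mention") else 0.0
    score += 3.0 if article.get("cover_mention") else 0.0
    score += 1.0 if article.get("mentioned_first") else 0.0  # association named first
    score += float(article.get("length_pages", 0))           # longer pieces weigh more
    return score

example = {"front_half": True, "has_image": True, "expert_quotes": 2,
           "headline_mention": True, "length_pages": 2}
print(prominence_score(example))  # 2 + 1 + 1 + 3 + 0 + 0 + 2 = 9.0
```

In practice, the relative weights would themselves need to be agreed upon before coding so that prominence remains comparable across coders and across study waves.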
Even more difficult to code for is article tone; that is, is the content
favorable, unfavorable or neutral to a given party? (Michaelson, 2005) Tone
is extremely important to measure in looking at public relations outcomes
because it can help assess how a target audience feels about a given client
or product; a favorable tone can provide significant long-term benefits for
an organization. (Carroll, 2009; Williams, 2009) Tone, often called bias, can
be viewed as the overall attention given to an organization, the value
judgment an author has towards a company or even the general approach to
a specific subject matter. (Carney, 1972) Over time, tone can be assessed
longitudinally in terms of media favorability, or the overall view of a
company resulting from a stream of stories, the ultimate goal of conducting
a benchmark analysis. (Carroll, 2009) This favorability, or lack thereof, can help a
company guide its future media relations activities, which is why tone is so
important to consider in any evaluation.
Tone can be examined through a variety of lenses and coding
systems. Brunken (2006) suggests observing tone on a six-interval scale:
good/bad, positive/negative, wise/foolish, valuable/worthless,
favorable/unfavorable and acceptable/unacceptable. Baikaltseva and
Leinemann (2004) are advocates of the weighted slugging average, a scale
that measures tone from one to 100 and overall sentiment from negative
five to positive five. However, even these researchers acknowledge that this
can leave tone open to a highly subjective and unreliable assessment.
To increase reliability, tone, like all aspects of the coding scheme,
must have very clearly defined ground rules in order to have any credibility.
(Austin & Pinkleton, 2006) But since tone is inherently subjective, this
proves substantially difficult. The more extreme viewpoints,
such as extremely negative or absolutely neutral, are fairly recognizable; it
is the in-between measures that are the most difficult to filter. (Carney,
1972) Therefore, I adopted a set of standards for coders to assess
tone from Williams (2009), who posits four key ways that the tone of an
article can be determined:
“Determine words or phrases that should be present for a clip to
be considered positive or negative;
Decide subject matter areas that should always be either
positive or negative;
Determine whether quotations from certain people quoted
would make a clip either positive or negative; and
Answer the question, 'Does the clip make it more or less likely
that the reader will do business with our organization?'” (p. 6)
Though not perfect, these standards provide a good framework for
determining article tone. In this vein, tone provides a basis from which to
further assess the articles in question, and can be deemed reliable so long
as the scale of evaluation remains consistent. (Holsti, 1969)
Fairness and Accuracy in Reporting (2001), the national media
watchdog group, suggests a number of criteria for assessing tone similar to
those of Williams and Holsti, but takes it a step further by posing the following:
What type of language is used? The terms selected by the
media are not coincidence, but often buzzwords selected to
shape public opinion. Are negative or positive words associated
with a company’s actions?
What context is the news presented in, if any? Particularly
with negative information, is the story presented with the
appropriate context explained, or is the issue left to audience
interpretation?
Considering the suggestions of FAIR and the other mentioned researchers,
coders determined tone on a one-to-five scale of intensity (1=extremely
positive, 2=somewhat positive, 3=neutral, 4=somewhat negative,
5=extremely negative) based on the described criteria to answer the
question, “What is the ultimate impact on the audience of this article?”
Further, coders assessed the frame within which the news was presented
(determining the journalist’s inherent bias on the reported issue as a whole, if
any). Issues were labeled as primarily positive or negative, assessed
through the author’s language use.
In the same vein as tone, frame was also considered, looking at the
question of how the author presents the issue. Is it an issue that is
automatically considered positive or negative? What preconceived notions
are being brought in by the author?
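The 1-to-5 tone scale and the word-list criterion could be sketched as a crude first-pass suggestion tool for coders. The cue words below are hypothetical stand-ins, not lists from the study; a real codebook would define them before coding begins, and the human coder makes the final call.

```python
TONE_LABELS = {1: "extremely positive", 2: "somewhat positive",
               3: "neutral", 4: "somewhat negative", 5: "extremely negative"}

# Illustrative word lists standing in for a pre-agreed codebook;
# these examples are assumptions, not the study's actual cues.
POSITIVE_CUES = {"praised", "award", "victory", "growth"}
NEGATIVE_CUES = {"criticized", "lawsuit", "decline", "penalty"}

def suggest_tone(text):
    """Suggest a tone code (1-5) for a coder to confirm or override."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE_CUES)
    neg = len(words & NEGATIVE_CUES)
    if pos > neg:
        return 2  # somewhat positive; the coder decides if it is extreme
    if neg > pos:
        return 4  # somewhat negative, pending coder review
    return 3      # neutral by default

print(TONE_LABELS[suggest_tone("the agency praised the growth")])
```

Such a tool only narrows the coder's starting point; the in-between judgments the text describes as hardest still require the written criteria from Williams (2009) and FAIR (2001).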
In content analysis, to have any credibility as a truly quantitative and
objective measure, standards of reliability and validity must be
implemented. (Holsti, 1969) Reliability helps to remove human bias in
coding, by confirming that categories are consistent across multiple coders.
(Fico, Lacy, & Riffe, 1998) Because of the scope and time constraints of this
study, a single researcher served as the sole coder for the
content analysis. As this exposes the study to potential criticism for
a lack of reliability, a random sample of 10 percent of the content database
was selected for testing by a volunteer coder, as recommended by Fico,
Lacy, and Riffe (1998). Simple percentage agreement, the most widely used
standard for content analysis reliability, was chosen for the assessment of
the subjective categories of the coding scheme. (Lombard, Snyder-Duch, &
Bracken, 2002) The resulting reliability measurements are as follows:
PIANY positive or negative impact=0.80; IIABNY positive/negative
impact=0.80; positive/negative issue frame=0.60; expert positioning of either
association=1.00. These numbers indicate substantial agreement between the
coders on most categories, with the issue-frame variable showing only
moderate agreement.
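Expressed as code, simple percentage agreement is just the share of items on which the two coders assigned the same code. The sketch below is illustrative; the function name and example ratings are my own, not data from the study.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of items")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical example: two coders rate five articles on the 1-5 tone
# scale and agree on four of them.
print(percent_agreement([1, 3, 2, 2, 5], [1, 3, 2, 4, 5]))  # 0.8
```

Note that percentage agreement does not correct for chance agreement, which is one reason some methodologists prefer chance-corrected indices for subjective categories.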
In terms of validity, a simple test of content (also known as face)
validity is effective in this circumstance; that is, a measurement
determining whether the coding scheme accurately reflects what the
study seeks to measure. (Stacks, 2006) This is the most commonly used
form of validity evaluation and is typically very straightforward. (Holsti,
1969) The assessment of content validity found that the coding scheme is, in
fact, appropriate for the content being examined. It accurately measures
the article type, prominence, and tone.
Results
The findings of the competitive analysis show both strengths and
weaknesses in the coverage obtained by PIANY. As shown in Table 1, PIANY
had a higher incidence of article mentions than IIABNY in the sample
overall, with 20 mentions compared to just 12; this means PIANY was
represented independently in 43 percent of the sampled articles, as opposed
to IIABNY's 26 percent. Furthermore, PIANY had more mentions than its
competitor across the board, when looking at each publication individually
(Insurance Journal: 3 vs. 2; National Underwriter: 3 vs. 1; Insurance
Advocate: 14 vs. 9). Thirty percent of the examined articles mentioned both
associations.
Table 1. Association mentions in examined publications.

              Insurance Journal   National Underwriter   Insurance Advocate   TOTALS
PIANY only            3                    3                     14             20
IIABNY only           2                    1                      9             12
Both                  2                    2                     10             14
TOTALS                7                    6                     33             46
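The shares of voice reported above follow directly from the Table 1 totals. A quick sketch of the arithmetic (counts taken from the TOTALS column of Table 1):

```python
# Article counts from the TOTALS column of Table 1
mentions = {"PIANY only": 20, "IIABNY only": 12, "Both": 14}
total_articles = sum(mentions.values())  # 46 articles in the sample

# Each category's share of the sampled articles
for category, count in mentions.items():
    print(f"{category}: {count / total_articles:.0%}")
# PIANY only: 43%
# IIABNY only: 26%
# Both: 30%
```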
Topics covered
In terms of topics covered in the reviewed articles, PIANY had a wider
representation across a variety of subjects. The association was mentioned
independently of the IIABNY in articles on the following subjects:
legislation, regulation, carrier relations, personal lines, commercial lines
and personnel. The IIABNY had independent mentions in the categories of
events and education.
However, it is important to consider in which categories articles
mentioning both the associations fell. Legislation was the most popular, with
both associations named in eight of the articles, followed by events (four
mentions), then technology and education, with one mention each.
Also of note are the categories that were covered most often overall,
regardless of association mentioned. Legislative topics counted for 18
articles in the sample, followed by events, with 12 articles. Personnel came
in a distant third place, with five articles covering the subject.
Types of articles
In considering the types of articles in the sample, feature stories were the
most likely overall to mention either of the associations (12 articles), while
news reports and columns accounted for 11 articles each. PIANY had higher
numbers in columns (7 vs. 0), features (4 vs. 3) and announcements (3 vs.
2). IIABNY was more successful in editorials (2 vs. 1),
while the associations tied for independent mentions in both letters to the
editor and news reports.
The type of article most likely to mention both associations was the
feature (five articles), followed by columns (four articles), then news reports
(three articles).
Prominence
Looking at article length, PIANY had the largest share of articles
measuring at more than two pages—25 percent of articles mentioning just
PIANY were this long, compared to just 8 percent mentioning IIABNY alone.
Articles mentioning both associations that were more than two pages in
length accounted for 79 percent of such articles.
PIANY far overshadowed IIABNY in the instances of expert quotes. In
articles mentioning one association, PIANY had nine quotes compared to
the IIABNY's three. In articles mentioning both associations, PIANY still had
the upper hand, with 10 quotes compared to six.
The associations had the same number of headline mentions in articles
unique to each organization (five each), but IIABNY was the only one of the
two granted a headline mention in a combined article. PIANY appeared on
the cover of an examined publication in connection with four of the articles;
IIABNY did so twice.
PIANY was mentioned first in eight stories about both associations; IIABNY
was mentioned first six times.
Looking at the prevalence of images, the associations had an equal number
in articles mentioning only one of them (24 each), but IIABNY was better
represented in articles naming both groups, with five compared to PIANY's
three.
Finally, concerning placement in the publication, PIANY's mentions
occurred in the first-half or center material in 15 cases; this was seen in
eight instances for IIABNY and nine instances when mentioning both
associations.
Tone
In terms of tone, PIANY was positioned as an industry expert in 15
pieces mentioning one association; IIABNY was positioned similarly in 11
cases. Looking at articles naming both organizations, PIANY was seen as an
expert in 13 instances, while IIABNY was seen as an expert in 11 of those.
Seventeen articles mentioning solely PIANY had a perceived positive
impact for the association; 12 such articles emerged for IIABNY. For articles
concerning both groups, PIANY had 13 positive instances, while
IIABNY had 12. And, looking at issue frame, PIANY articles concerned
positively framed issues 65 percent of the time and negatively framed
ones 30 percent of the time. For IIABNY, the articles were on issues that
were positively framed 75 percent of the time, and negative just 8 percent.
Tone was broken down further along the lines of publication, as seen
in Table 2 and Table 3. While PIANY had a larger number of positive-impact
mentions in the National Underwriter and the Insurance Advocate, IIABNY
had the upper hand in the Insurance Journal. Also of note, while PIANY had
only positive and neutral mentions across the board, IIABNY had one
negative-impact article, in the National Underwriter.
Table 2. Tone in articles mentioning PIANY.

              Insurance Journal   National Underwriter   Insurance Advocate   TOTALS
Positive              3                    4                     22             29
Negative              0                    0                      0              0

Table 3. Tone in articles mentioning IIABNY.

              Insurance Journal   National Underwriter   Insurance Advocate   TOTALS
Positive              4                    2                     18             24
Negative              0                    1                      0              1
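The positive/negative tallies in Tables 2 and 3 depend on collapsing the coding sheet's five-point impression scale (Appendix 1) into tone categories. A hypothetical recoding, assuming scores of 1-2 count as positive, 3 as neutral, and 4-5 as negative (the study does not state its exact cut points):

```python
from collections import Counter

def collapse_tone(score):
    """Map a 1-5 impression score to a tone category.
    Assumed cut points: 1-2 positive, 3 neutral, 4-5 negative."""
    if score in (1, 2):
        return "positive"
    if score == 3:
        return "neutral"
    if score in (4, 5):
        return "negative"
    raise ValueError(f"score must be 1-5, got {score}")

# Hypothetical impression scores for six coded articles
scores = [1, 2, 3, 3, 2, 5]
print(Counter(collapse_tone(s) for s in scores))
# Counter({'positive': 3, 'neutral': 2, 'negative': 1})
```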
Discussion
The goal of this study was two-fold: first, to create a relevant coding
system with which to conduct an effective content analysis of publications of
consequence to PIANY; and second, to use this scheme to conduct a
benchmark analysis of association-specific coverage in these media, serving
as a base for future media relations tracking and helping to assess the
output of PIANY's public relations efforts.
Impressions of the coding scheme
The content analysis and coding scheme developed through the
course of this project provided an effective tool for analyzing and assessing
the type of and extent of media coverage in trade publications of both
PIANY and its competitor association, IIABNY. Researchers were able to
easily categorize each article in the sample, while also taking a look at
particular features of the articles that are directly related to effectiveness of
public relations. That being said, the scheme was not without significant
flaws. Like all other content analysis methods, one of its limitations is the
prevalence of subjectivity. Due to the nature and time constraints associated
with this specific study, only one other coder was used to test reliability—
future iterations of this method would need significantly more in order to
truly determine whether or not the scheme was reliable, particularly in
considering issues of tone.
Furthermore, the results of the analysis do not tell us anything about
the actual effectiveness of media coverage—what outcomes does PIANY
hope to obtain by having a professional read a news story about them? What
action would be most desirable? Even if these “action goals” were
determined, how could a method like content analysis determine the
ultimate effectiveness of press coverage in achieving these goals? The
benchmark analysis used in this study provides a quantitative assessment—
number of occurrences, prominence figures, and even to some extent, an
evaluation of article tone. But this information does not tell PIANY the
effects of their media coverage—it simply outlines the results of their public
relations efforts, in terms of actual print. The researchers have run into an
issue that public relations professionals and academics have struggled with
ad nauseam: How do you measure public relations effectiveness?
Benchmark results
However, the study was not for naught—it does provide a starting
place for PIANY to assess the effectiveness of their outreach to the trade
publications, especially in terms of actual publication. This important factor
is not to be diminished; after all, being in print will inherently cause readers
to know that an organization exists and inform them about its positions on
certain issues.
PIANY had the most coverage overall, and in each individual
publication. However, this does not necessarily mean more attention will be
paid to this organization over its competitor—many of the articles
discussing PIANY were either mentions of members or some type of listing
of officers. These types of “articles” are not the best gauge of effectiveness,
as they do not provide any opinion, nor do they position the organization as
an expert in any way. But, they do provide some benefit to the association,
as they recognize its volunteer directors, helping to promote membership.
Of greater note is that PIANY received a higher number of
independent article mentions; that is, the organization received more
coverage solely about itself, without mention of the IIABNY. This helps
maintain the separation between the two in the public’s mind. However,
one-third of the articles did mention the two organizations together, making
it worth considering joint publicity efforts on some occasions, particularly
on issues where the associations have the same point of view. Since
legislative issues were the ones picked up most often by the trade press,
perhaps this would be a good place to start.
Another interesting point to take a look at is the fact that IIABNY
received more headline mentions than PIANY. This suggests that, on some
level, the opinion of IIABNY might be more highly valued than that of
PIANY. However, it is important to note that the IIABNY also was the only
one of the two associations that received coverage that was interpreted by
coders to be negative. Upon further reflection, it may not be that IIABNY
gets more respect, per se, but that the association tends to garner more
attention, as they are the more vocal group, quick to make statements,
whether they are justified or not. PIANY has a much more conservative and
reactive approach, opting to consider all options prior to “banging the
drum” on a given issue. While IIABNY may be more prominent in this
respect, it does not mean that this equates with positive opinions from
readers.
Recommendations for future research
PIANY should conduct this type of analysis on a regular basis, in order to
compare its level of coverage to the IIABNY's over time. Furthermore, in
future iterations, practitioners should consider a larger, more
comprehensive sample, perhaps even a full census of publications over a
shorter time frame; additionally, researchers may want to filter or weight
units of analysis by type (e.g., giving features and personnel mentions
different weights).
Future versions of this coverage analysis also need to consider
methods for assessing the effectiveness of media relations efforts. A survey
of members who read a given publication may be of interest as a way of
assessing opinions of press coverage. (O’Neill, 1984, as cited in Hon, 1998) Or,
perhaps a pre- and post-test can be conducted after exposure to an article
on a specific issue, to see if opinions are changed based on a news story.
The key is not to get into the never-ending trap of simply measuring news
output, such as placement, rather than determining if desired effects are
achieved from coverage. (Hon, 1998)
Public relations professionals and researchers alike also need to be
cognizant of budgetary constraints, and effective arguments for
surmounting them. After all, without money there can be no sound
measurement, and the work will never be properly assessed. Leinemann and
Baikaltseva (2004) suggest that 10 percent of a public relations budget be
dedicated to evaluation, a recommendation to which I subscribe. How else
can you know what works and where mistakes have been made? Evaluation
should be non-negotiable, as it ensures that future efforts are more
effective and efficient.
References

About PIA. (2010). Retrieved from Professional Insurance Agents Association: http://www.pia.org/aboutpia.php

Austin, E. W. and Pinkleton, B. E. (2006). Strategic public relations management. Lawrence Erlbaum Associates, Publishers: Mahwah, N.J.

Baikaltseva, E. and Leinemann, R. (2004). Media relations measurement. Gower Publishing Company: Burlington, Vt.

Berelson, B. (1971). Content analysis in communication research. Hafner Publishing Company: New York.

Bland, M., Theaker, A., and Wragg, D. (2005). Effective media relations. Chartered Institute of Public Relations: Sterling, Va.

Botan, C. H. and Taylor, M. (2004). Public relations: State of the field. Journal of Communication, 54(4), 645-661.

Burton, P. (1966). Corporate public relations. Reinhold Publishing Corporation: New York.

Carney, T. F. (1972). Content analysis: A technique for systematic inference from communications. University of Manitoba Press: Winnipeg, MB.

Carroll, C. E. (2009). The relationship between firms' media favorability and public esteem. Public Relations Journal, 3(4).

Clegg Smith, K., Wakefield, M., Siebel, C., Szczypka, G., Slater, S., Terry-McElrath, Y., Emery, S., and Chaloupka, F. J. (2002). Coding the news: The development of a methodology for coding and analyzing newspaper coverage of tobacco issues. Retrieved from ImpactTEEN: http://www.impacteen.org/generalarea_PDFs/Newsmethodspaper_smithMAY2002.pdf

Fairness and Accuracy in Reporting. (2001). How to detect bias in news media. The Media and You, 12.

Fico, F. G., Lacy, S., and Riffe, D. (1998). Analyzing media messages. Lawrence Erlbaum Associates, Publishers: Mahwah, N.J.

Holsti, O. R. (1969). Content analysis for the social sciences and humanities. Addison-Wesley Publishing Company: Reading, Ma.

Hon, L. C. (1998). Demonstrating effectiveness in public relations: Goals, objectives and evaluation. Journal of Public Relations Research, 10(2), 103-135.

Insurance Advocate. (2010). Insurance Advocate 2010 media kit. Mt. Vernon, N.Y.

Insurance Journal. (2010). Insurance Journal 2010 media kit. Boston, Ma.

Jeffrey, A., Michaelson, D., and Stacks, D. (2006). Exploring the link between media coverage and business outcomes. Retrieved from Institute for Public Relations: http://www.instituteforpr.org/file/upload/Media_Coverage_Business06.pdf

Likely, F. (2000). Communication and PR: Made to measure — How to manage the measurement of communication performance. Strategic Communication Management, 4, 22-27.

Lindenmann, W. K. (2005). Putting PR measurement and evaluation into historical perspective. Retrieved from the Institute for Public Relations: http://www.instituteforpr.org/files/uploads/PR_History2005.pdf

Lindenmann, W. K. (2006). Public relations research for planning and evaluation. Retrieved from the Institute for Public Relations: http://www.instituteforpr.org/files/uploads/2006_Planning_Eval.pdf

Lombard, M., Snyder-Duch, J., and Bracken, C. C. (2002). Content analysis in mass communication. Human Communication Research, 28(4), 587-604.

Lynch, S. and Peer, L. (2002). Analyzing newspaper content: A how-to guide. Retrieved from Readership Institute, Media Management Center at Northwestern University: http://www.readership.org/content/content_analysis/data/How-to.pdf

McLellan, M. and Porter, T. (n.d.). Content analysis guide. Retrieved from News Improved: http://www.newsimproved.org/document/GuideContentAnalysis.pdf

Michaelson, D. and Griffin, T. L. (2005). A new model for media content analysis. Retrieved from the Institute for Public Relations: http://www.instituteforpr.org/files/uploads/MediaContentAnalysis.pdf

Stacks, D. W. (2006). Dictionary of public relations measurement and research. Retrieved from the Institute for Public Relations: http://www.instituteforpr.org/files/uploads/PRMR_Dictionary_1.pdf

Summit Media. (2010). National Underwriter 2010 media kit. Hoboken, N.J.

Williams, S. D. (2009). Measuring Company A: A case study and critique of a news media content analysis program. Retrieved from Institute for Public Relations: http://www.instituteforpr.org/files/uploads/Measuring_Company_A.pdf
Appendix 1
Coding sheet.

Coding sheet for PR content analysis
GENERAL INFORMATION
ID #: ____________________________________________
Title: ____________________________________________
Date: ____________________________________________
Publication: □Insurance Advocate □National Underwriter □Insurance Journal
Primary topic (circle one):          Secondary topics, if any (circle one):
Personal lines                       Automobile
Commercial lines                     Producer compensation disclosure
Legislation                          Health care
Regulation                           Economy
Event coverage                       Lobbying
Education                            Excess/surplus
Technology                           Other (describe):
Carrier relations
Personnel
Other (describe):
Associations mentioned: □PIANY □IIABNY □Both
Type of piece: □Feature story □Column □Information/announcements □News report □Editorial □Letter to the editor
Appendix 1 (contd)
PROMINENCE
Page number: ___ out of ___
Cover story? □Yes □No
Association mentioned in headline? □PIANY □IIABNY
Which association mentioned first? □PIANY □IIABNY
Is there an image? □Yes □No
Is it association specific? □No □PIANY □IIABNY
Quote from PIANY? □Yes □No   How many? _____
Quote from IIABNY? □Yes □No   How many? _____
FRAME & TONE
Does this article give a positive or negative impression to readers about PIANY? (circle one)
1————2————3————4————5
(1 = Very positive; 2 = Somewhat positive; 3 = Neutral; 4 = Somewhat negative; 5 = Very negative)

Does this article give a positive or negative impression to readers about IIABNY? (circle one)
1————2————3————4————5
(1 = Very positive; 2 = Somewhat positive; 3 = Neutral; 4 = Somewhat negative; 5 = Very negative)

Is PIANY positioned as an industry expert? □Yes □No
Is IIABNY positioned as an industry expert? □Yes □No

Is the underlying issue framed as positive or negative by the journalist? (circle one)
1————2————3————4————5
(1 = Very positive; 2 = Somewhat positive; 3 = Neutral; 4 = Somewhat negative; 5 = Very negative)
Appendix 2
Results from coding of magazine articles.