
This item was submitted to Loughborough's Research Repository by the author. Items in Figshare are protected by copyright, with all rights reserved, unless otherwise indicated.

"His tweets speak for themselves": an analysis of Donald Trump's Twitter"His tweets speak for themselves": an analysis of Donald Trump's Twitterbehaviourbehaviour

PLEASE CITE THE PUBLISHED VERSION

https://doi.org/10.18848/2327-0071/CGP/v15i01/11-35

PUBLISHER

Common Ground Research Networks

VERSION

VoR (Version of Record)

PUBLISHER STATEMENT

This is an Open Access Article. It is published by Common Ground Research Networks under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Licence (CC BY-NC-ND 4.0). Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/. © Common Ground Research Networks, Suzanne Elayan, Martin Sykora, Tom Jackson, Some Rights Reserved, (CC BY-NC-ND 4.0). Permissions: cgscholar.com/cg_support.

LICENCE

CC BY-NC-ND 4.0

REPOSITORY RECORD

Elayan, Suzanne, Martin Sykora, and Tom Jackson. 2020. “"His Tweets Speak for Themselves": An Analysis of Donald Trump's Twitter Behaviour”. Loughborough University. https://hdl.handle.net/2134/12674342.v1.


THESOCIALSCIENCES.COM

VOLUME 15 ISSUE 1

The International Journal of

Interdisciplinary Civic and Political Studies

_________________________________________________________________________

“His Tweets Speak for Themselves”: An Analysis of Donald Trump’s Twitter Behavior

SUZANNE ELAYAN, MARTIN SYKORA, AND TOM JACKSON


THE INTERNATIONAL JOURNAL OF INTERDISCIPLINARY CIVIC AND POLITICAL STUDIES https://thesocialsciences.com ISSN: 2327-0071 (Print) ISSN: 2327-2481 (Online) http://doi.org/10.18848/2327-0071/CGP (Journal)

First published by Common Ground Research Networks in 2019 University of Illinois Research Park 2001 South First Street, Suite 202 Champaign, IL 61820 USA Ph: +1-217-328-0405 https://cgnetworks.org

The International Journal of Interdisciplinary Civic and Political Studies is a peer-reviewed, scholarly journal.

COPYRIGHT © 2019 (individual papers), the author(s) © 2019 (selection and editorial matter), Common Ground Research Networks

Some Rights Reserved. Public Licensed Material: Available under the terms and conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License (CC BY-NC-ND 4.0). The use of this material is permitted for non-commercial use provided the creator(s) and publisher receive attribution. No derivatives of this version are permitted. Official terms of this public license apply as indicated here: https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode

Common Ground Research Networks, a member of Crossref

EDITOR Marcin Galent, Jagiellonian University, Poland

ACTING DIRECTOR OF PUBLISHING Jeremy Boehme, Common Ground Research Networks, USA

MANAGING EDITOR Megan Donnan, Common Ground Research Networks, USA

ADVISORY BOARD The Advisory Board of the Interdisciplinary Social Sciences Research Network recognizes the contribution of many in the evolution of the Research Network. The principal role of the Advisory Board has been, and is, to drive the overall intellectual direction of the Research Network. A full list of members can be found at https://thesocialsciences.com/about/advisory-board.

PEER REVIEW Articles published in The International Journal of Interdisciplinary Civic and Political Studies are peer reviewed using a two-way anonymous peer review model. Reviewers are active participants of the Interdisciplinary Social Sciences Research Network or a thematically related Research Network. The publisher, editors, reviewers, and authors all agree upon the following standards of expected ethical behavior, which are based on the Committee on Publication Ethics (COPE) Core Practices. More information can be found at https://cgnetworks.org/journals/publication-ethics.

ARTICLE SUBMISSION The International Journal of Interdisciplinary Civic and Political Studies publishes biannually (June, December). To find out more about the submission process, please visit https://thesocialsciences.com/journals/call-for-papers.

ABSTRACTING AND INDEXING For a full list of databases in which this journal is indexed, please visit https://thesocialsciences.com/journals/collection.

RESEARCH NETWORK MEMBERSHIP Authors in The International Journal of Interdisciplinary Civic and Political Studies are members of the Interdisciplinary Social Sciences Research Network or a thematically related Research Network. Members receive access to journal content. To find out more, visit https://thesocialsciences.com/about/become-a-member.

SUBSCRIPTIONS The International Journal of Interdisciplinary Civic and Political Studies is available in electronic and print formats. Subscribe to gain access to content from the current year and the entire backlist. Contact us at cgscholar.com/cg_support.

ORDERING Single articles and issues are available from the journal bookstore at https://cgscholar.com/bookstore.

HYBRID OPEN ACCESS The International Journal of Interdisciplinary Civic and Political Studies is Hybrid Open Access, meaning authors can choose to make their articles open access. This allows their work to reach an even wider audience, broadening the dissemination of their research. To find out more, please visit https://cgnetworks.org/journals/hybrid-open-access.

DISCLAIMER The authors, editors, and publisher will not accept any legal responsibility for any errors or omissions that may have been made in this publication. The publisher makes no warranty, express or implied, with respect to the material contained herein.


The International Journal of Interdisciplinary Civic and Political Studies, Volume 15, Issue 1, 2020, https://thesocialsciences.com © Common Ground Research Networks, Suzanne Elayan, Martin Sykora, Tom Jackson, Some Rights Reserved (CC BY-NC-ND 4.0). Permissions: cgscholar.com/cg_support. ISSN: 2327-0071 (Print), ISSN: 2327-2481 (Online). https://doi.org/10.18848/2327-0071/CGP/v15i01/11-35 (Article)

“His Tweets Speak for Themselves”: An Analysis of Donald Trump’s Twitter Behavior

Suzanne Elayan,1 Loughborough University, UK; Martin Sykora, Loughborough University, UK; Tom Jackson, Loughborough University, UK

Abstract: We used computational tools to explore President Donald Trump’s tweeting habits and some of the effects they have on his Twitter followers, as well as on a small sample of mainstream media. To gain a comprehensive picture of Trump’s tweeting habits as President of the United States, we undertook a study of Trump’s personal @realDonaldTrump Twitter account, focusing on his campaign, the transition period before his presidency, and his first two hundred days in office. We employed three state-of-the-art computational tools to analyze sentiment, emotions, and psycholinguistic features in Trump’s tweets, as well as a manual semantic discourse analysis to decipher what it is about his communication methods that generates the highest responses and retweets. We found that during the first two hundred days of the presidency, an accusative tone of discourse was most frequently used by @realDonaldTrump, and among a number of significant emotional patterns, we observed an intriguing correlation: the more negative the overall message of a tweet, the more likely it is to be retweeted and favorited. We also found individual tweets to be discussed with high frequency in mainstream media. Using data from the PTDC corpus (Political Twitter Discourse Corpus), consisting of 205,303 original tweets of all current US state governors, members of the US Senate, and members of Congress, we found that Trump’s tweeting style differs significantly along a central dimension of language on Twitter. Our findings suggest that the general public on Twitter responds more actively to negative language, and in turn the language employed by Trump on Twitter is highly emotional, with more emotion-bearing expressions than expected.

Keywords: Social Media, Politics, Sentiment Analysis, Presidency, Natural Language Processing, Text Analytics

Introduction

The 2008 US presidential campaign was a game changer, as it was the first US presidential campaign to harness the full power of social media. Former US president Barack Obama utilized social media platforms to engage with voters (Learmonth 2008); now, just over a decade later, campaigning through social media platforms has become commonplace. In fact, it has now become standard practice for politicians, country leaders, and religious leaders to use social media to engage with their followers, even when not campaigning. For example, Pope Francis and former Pope Benedict XVI both tweet actively. The Dalai Lama also tweets regularly, generating between 47 thousand and over 8 million “likes”2 per tweet, with up to two million retweets. The Chechen leader, Ramzan Kadyrov, regularly posts updates on Instagram, once using this platform to search for his missing cat. Former Governor Arnold Schwarzenegger posts workout ideas on Snapchat and Instagram. Other political figures, such as India’s Prime Minister Narendra Modi, former Prime Minister Theresa May, Slovakia’s President Zuzana Caputova, and Queen Rania of Jordan, use Twitter frequently.3 This paper uses primarily computational methods to explore President Donald Trump’s tweeting habits and the reactions they evoke online from the general public on Twitter. We also provide a few examples of mainstream media outlets that report on his tweets, thus amplifying their reach.

1 Corresponding Author: Suzanne Elayan, Brockington Building, Centre for Information Management, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK. email: [email protected]
2 What would usually be known as the thumbs up or “Like” on Facebook is a “Like”/“Favorite” on Twitter.
3 Besides the prominent users mentioned, an up-to-date list of most followed accounts can be found at https://www.trackalytics.com.


@realDonaldTrump is a very busy Twitter account indeed. Tweets from this account have made headlines numerous times, generating thousands of replies, retweets, and discussions worldwide. The curious case of Donald Trump’s Twitter popularity has already piqued the interest of academics. In the years he has been in office, academics have been analyzing Trump’s Twitter use, especially since his behavior on social media is creating a new public narrative about issues such as misogyny, racism, and xenophobia (Giroux 2017). For example, Ahmadian, Azarshahi, and Paulhus (2017) examined several aspects of Trump’s communication style, comparing him with other Republican candidates in order to understand the reasons behind his success in spite of his political inexperience. They found that out of all the other Republican candidates he scored highest in grandiosity ratings, narcissism, and Twitter use. Ahmadian, Azarshahi, and Paulhus (2017) claimed that Trump’s use of low-complexity words and his informality on Twitter directly contributed to his success, since previous research by Thoemmes and Conway (2007) theorized that voters tend to prefer candidates who use simple vocabulary over those who use sophisticated and formal vocabulary. Building upon hypotheses from other researchers such as Chung and Pennebaker (2007), Ahmadian, Azarshahi, and Paulhus (2017) concluded that Trump’s self-promotional style and his excessive use of first-person pronouns reflect his arrogance as well as an insecure personality.

Ott (2017) analyzed Trump’s tweets, finding that Trump’s rhetoric on Twitter was mostly negative, containing many insults and monosyllabic words such as “sad” and “bad.” Gross and Johnson’s (2016) analysis of Trump’s tweets reached a similar conclusion: his tweets were predominantly negative, as were those of other presidential candidates. When comparing Trump’s tweets with those of other candidates, they observed a pattern: although candidates’ negative tweets targeted those who were ahead of them in the polls, Trump attacked all his opponents, even those who did badly in the polls. It is also worth noting that Gross and Johnson (2016) found that although Trump posted mostly negative tweets, he himself was a target of negative tweets. Despite the above-mentioned studies that explored Trump’s Twitter content, use, and style (e.g., Ott 2017; Ahmadian, Azarshahi, and Paulhus 2017), we have not yet come across studies that have applied established, fine-grained advanced sentiment analysis tools to assess the diversity of emotional language use in Trump’s tweets and how this differs across the campaign, the transition period, and his actual presidency.

Academic research studying Trump’s tweeting practices has taken further innovative angles; Wang, Li, and Luo (2016) examined which of Trump’s tweets produced the most “likes” and identified that negative tweets directed at Democrats were the most popular among Trump’s Twitter followers. Wang et al. (2016) studied Trump’s Twitter supporters and found that they tend to be either seventeen years old or younger or forty-one years old or older, which may explain why Trump’s Twitter supporters are more susceptible to “clickbait” stories (Lawrence and Boydstun 2017). Kollanyi, Howard, and Woolley’s (2016) research on Trump and Twitter took another direction, finding that a third of pro-Trump tweets were generated by bots. Wells et al. (2016) studied Trump’s tweeting patterns and their relation to traditional media coverage and found that it was his followers’ reactions to his tweets that provoked media attention. Wells et al. (2016) suggested that Trump purposely releases a “Tweetstorm” when his media coverage is low. As has been hypothesized and now empirically shown (e.g., Ott 2017; Ahmadian, Azarshahi, and Paulhus 2017), Trump tends to use simple and informal language. Jordan and Pennebaker (2017, 313) argued that Trump stands out in the dimension of “analytic-narrative thinking style” compared to his political peers. An “analytic thinking style” is characterized as “careful, effortful deliberation based on reason and logic,” as opposed to the “narrative thinking style,” which is described as “quick gut reactions, grounded in intuition and personal experience.” Using computerized text analysis (LIWC—Linguistic Inquiry and Word Count; see Method section), Jordan and Pennebaker (2017) found that Trump’s word-use in speeches, debates, and various documents ranks low in language use associated with analytic thinking when compared with past presidents.4 Despite these findings, Jordan and Pennebaker (2017) highlight a broad long-term trend of less analytic language use by presidents over recent decades. These results have not been extended to Twitter (or other social media platforms), and what is more, no empirical evidence has yet compared Trump’s analytic thinking and other core psychometric markers (see Tausczik and Pennebaker 2010) to those of his political peers on Twitter.

Therefore, in this work we analyzed Trump’s language, style, communicated sentiment, and explicit use of emotional language on Twitter (RQ1 and RQ2, below). We further compared his Twitter-language across a corpus of Twitter-language of his Republican and Democratic political peers along several core language dimensions (RQ3). Finally, to demonstrate the impact of individual tweets in news (RQ4), we employed a Webometrics technique to attempt to quantify specific tweet mentions in a small selection of mainstream media.

Research Question 1 (RQ1)

RQ1: How does Trump’s tweeting activity differ across the campaign months (until election victory), the transition, and the first 200 days of presidency; specifically, which sentiment- and emotion-bearing language, as detected with validated computational tools, dominates Trump’s tweets?

The benefits and potential limitations of automated methods are discussed in the “Analysis Methods” section, and although the validity of these tools in measuring expressed sentiment has been previously established, we manually reviewed a sample of the dataset to help qualify tool performance on Trump’s tweet data. In line with previous work (Rosenberg et al. 1990; Zamith and Lewis 2015), we argue that using these computational tools can give us a fairly unbiased overview of emotional language use, with high reliability and good coverage. A substantial weakness of automated methods is nevertheless their inability to represent broader world-knowledge (Marcus and Davis 2019), which matters where specific context and such world-knowledge are necessary to establish what (a) topics and (b) functional style of discourse are being employed and expressed. Hence, beyond the automated tools, an in-depth semantic discourse analysis was undertaken on a sample of tweets from Trump’s presidency to provide the much-needed qualitative understanding so often omitted from previous studies relying on computational approaches (Zamith and Lewis 2015).

Research Question 2 (RQ2)

RQ2: During Trump’s presidency, are there noteworthy, observable qualitative characteristics of Trump’s functional Twitter language use, such as accusative tone and other styles of expressions, and what references to topics does Trump tend to make with these functional expressions?

The variety of functional styles of expression during Trump’s actual presidency has been mostly overlooked in the previously cited studies, and hence RQ2 will help answer questions around the heterogeneity of topics and functional styles of expression in Trump’s tweets.

Research Question 3 (RQ3)

RQ3: Previous findings using LIWC have shown that Trump’s use of language scores significantly low on the analytical thinking dimension. Does this extend to social media (i.e., Twitter), and how do the central dimensions of his language use compare with those of his political peers on Twitter, as well as other leaders?

4 Specifically, there are differences in use of nouns, articles, pronouns, prepositions, auxiliary verbs, and adverbs.


Finally, given Trump’s extensive use of Twitter and the potential of his tweets playing a substantial part in news reporting, we have included a fourth research objective, which is a first attempt at using a webometrics method to quantify and substantiate news reporting around individual tweets.

Research Question 4 (RQ4)

RQ4: Do Trump’s individual tweets become the subject of news articles that we can quantify employing a standard Webometrics method?

Given the success of Trump’s campaign in becoming president, and the significance of his Twitter account in ridiculing and targeting mainstream media, including accusations of mis- and disinformation (Ross and Rivers 2018a), we believe it highly relevant to investigate Trump’s language use within his tweets. We argue that studying and understanding a political leader’s language can provide insights into the future of liberal democracy, and since “his tweets speak for themselves” as his former White House press secretary Sean Spicer has often said, we argue that Twitter language is also worthy of study.

Method

In this section, we first provide an overview of what social media datasets were used and how they were retrieved, followed by details of methods employed in the analysis of the Twitter data.

Sample

The data within this research study was retrieved using the public Twitter REST / Search API (Twitter 2017), with the data collection undertaken in line with Twitter’s terms of service and their official developer guidelines. Data from the Twitter account @realDonaldTrump (4,218 tweets in total), covering all tweets sent between 17th March 2016 and 7th August 2017, were retrieved. This time frame covers over seven months of the campaign period, up until the 200th day of Trump’s presidency, which makes both time periods comparable in terms of the number of days and includes the important transition period between the election and the commencement of the presidency.
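For readers who wish to see how such a collection could be assembled, the following is a minimal sketch of timeline retrieval; the paper only names the public REST/Search API, so the tweepy client, the placeholder credentials, and the selected fields are illustrative assumptions rather than the authors' code:

```python
# Illustrative sketch only: the paper specifies the public Twitter REST/Search API,
# not a particular client library; tweepy and the placeholder credentials below
# are assumptions made for this example.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

tweets = []
# Cursor pages through the user timeline (this REST endpoint only returns the
# most recent ~3,200 tweets per account).
for status in tweepy.Cursor(api.user_timeline,
                            screen_name="realDonaldTrump",
                            count=200,
                            tweet_mode="extended").items():
    tweets.append({
        "id": status.id,
        "created_at": status.created_at,
        "text": status.full_text,
        "source": status.source,                 # e.g. "Twitter for iPhone"
        "retweet_count": status.retweet_count,
        "favorite_count": status.favorite_count,
    })
```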

To facilitate the in-depth semantic analysis, a non-probability sample of @realDonaldTrump tweets was selected for manual analysis. This sample contained three hundred tweets and covered the time period between and inclusive of 29th May and 10th July 2017. This was a contiguous sample during Trump’s presidency, allowing a qualitative insight into the topics and style/tone of expression specifically during his presidency. The sample was contiguous to allow checking for any potential correlations over time (none were found) and was motivated by the work of Kim et al. (2018), who have shown empirically that such a sampling approach on Twitter tends to lead to topic and expression style saturation fairly quickly. Work by Le et al. (2019) found that three very different sampling approaches (most retweeted, random, and latent Dirichlet allocation topic modelling samples) of only one hundred tweets each resulted in comparable coverage of themes within their Twitter data, further lending support to the important work by Kim et al. (2018).

Finally, to facilitate the relative comparison of Trump’s Twitter language use with his political peers (other Democrats and Republicans on Twitter), we employed the Political Twitter Discourse Corpus (PTDC), as presented in Ross and Rivers (2018a). The relative comparison was facilitated by the LIWC text analysis tool (see next section). PTDC consists of 205,303 individual tweets and 4,659,381 words, from the most recent tweets (up to a maximum of 3,000 per user) of all serving US state governors, members of Congress, and senators, as of October 2017. This corpus is available and can be accessed online at Ross and Rivers (2018b).


Analysis Methods

Overall, our methodological approach followed the “hybrid” method outlined in Zamith and Lewis (2015), that is, combining several state-of-the-art computational tools with a more nuanced qualitative discourse analysis to aid understanding of communicated textual content, stemming from several complementary methods. In order to provide a nuanced insight into Trump’s tweeting behavior and the style of communicated messages, a manual semantic discourse analysis (Van Dijk 1985) of Trump’s tweets was conducted over the 300-tweet sample mentioned above. Van Dijk (1985) combined discourse analysis and semantic analysis with the ultimate aim of interpreting discourse by assigning meaning and reference, while taking context into consideration. Functionality is a chief component of semantic discourse analysis; hence the meaning of expressions is a function. Each of Trump’s tweets in the 300-tweet sample was analyzed to determine its function as well as the topic to which Trump was referring. This was performed by a research associate with training in linguistics and discourse analysis. The reliability of this analysis was validated by a sample inter-rater annotation, with confirmed high agreement. The analysis provides a more accurate and nuanced understanding of the tweets’ contents than automated computational toolkits would generally allow, since the nature of the content annotation is highly context dependent and beyond the ability of current tools, needing to rely on semantic models of general world-knowledge (Marcus and Davis 2019). See, for instance, Tausczik and Pennebaker (2010) for a discussion of how the meaning and intent of language tends to be miscoded by automated computational methods. Nevertheless, due to their scalability and the impracticality of manually annotating larger datasets consistently, three separate state-of-the-art computational analysis toolkits were employed for the analysis of (a) sentiment, (b) emotions, and (c) psycholinguistic features, addressing RQ1 and RQ3. Specifically, the Vader system (Hutto and Gilbert 2014) was employed to estimate sentiment polarities (negative and positive sentiment) in tweets. Where specified in our analysis, we also employed a heuristic of only selecting the clearly positive or negative messages, i.e., compound Vader score > 0.8 or < -0.8 (the possible scores range from -1 to 1; hence these are at the extremes of positive and negative language). A recent, comprehensive systematic review across eighteen labelled datasets of twenty-four popular sentiment analysis tools (e.g., SentiStrength-2, Vader, Semantria, SO-CAL, Stanford RNTN Deep Model) by Ribeiro et al. (2016) consistently ranked the Vader tool highest across all datasets in terms of accuracy and coverage of positive and negative sentiment expressions. In addition, we employed the EMOTIVE advanced sentiment detection system (Sykora et al. 2013), which detects and measures eight basic emotions: anger, confusion, disgust, fear, happiness, sadness, shame, and surprise. This system is based on an extensive and validated semantic model (a map of words/expressions represented in a formal ontology for efficient and comprehensive information retrieval, designed for sentiment analysis in big data) associated with emotions, employing statistical word sense disambiguation to improve the precision of the emotion scores.
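As a brief illustration of the Vader scoring and the compound-score heuristic described above, the sketch below uses the open-source vaderSentiment package; the 0.8 threshold comes from the paper, while the small wrapper function itself is only an illustrative convenience:

```python
# Sketch of Vader polarity scoring with the paper's compound-score heuristic:
# only texts with compound > 0.8 or < -0.8 are treated as clearly polar.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def classify_polarity(text, threshold=0.8):
    """Return 'positive', 'negative', or None when the text is not clearly polar."""
    compound = analyzer.polarity_scores(text)["compound"]  # ranges from -1 to 1
    if compound > threshold:
        return "positive"
    if compound < -threshold:
        return "negative"
    return None

print(classify_polarity("Law enforcement & military did a spectacular job in Hamburg."))
```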

Finally, for psycholinguistic features, the LIWC (Tausczik and Pennebaker 2010) psycholinguistic tool, which has been used extensively in psycholinguistics for studies of psychological and related personality traits, was employed. LIWC is a text analysis tool that measures how frequently a piece of text uses words in different categories, such as “cognitive processes,” “function words,” “social processes,” “analytic thinking,” or different types of grammar, which have been correlated with personal and psychological attributes in the past (see Tausczik and Pennebaker [2010] for a systematic overview). We use the LIWC tool in a similar manner to the corpus analysis method (e.g., Ross and Rivers 2018a), where the psychometric word-use is compared with typical Twitter-use by politicians, employing a reference corpus (the PTDC [Ross and Rivers 2018b]).

Due to the enforced brevity of tweets (we conducted our data collection when Twitter only allowed 140 characters), textual content commonly encountered on social media contains extensive use of slang, shorthand syntax, incorrect spelling and grammar, repeated letters and words, inconsistent punctuation, emoticons/emojis, and overall a high proportion of OOV (Out-of-Vocabulary) terms. Tools such as Vader and EMOTIVE have been specifically customized to process such natural language accurately. Overall, our motivation behind the use of computational tools was driven by the following:

1. The tools tend to have virtually perfect reliability and hence there is no need to assess inter-coder reliability.
2. They tend to be consistent, avoiding coder fatigue.
3. They are exponentially faster.
4. Finally, it is also worth pointing out that computational tools remove the risk of data-entry errors (Zamith and Lewis 2015).

For a useful general discussion of computational methods and their suitability for automating content analysis, the reader is referred to Riffe et al. (2014, 162–76). Especially with regard to the second point on consistency, it is worth pointing out that human coders can introduce subjectivity, particularly when annotating emotional language, and increasing coder fatigue can systematically degrade the coding of the content (Riffe et al. 2014). Nevertheless, there are justified concerns about validity (Riffe et al. 2014) in relation to what the specific computational tool is intended to measure; therefore, clearly presenting what the tool is actually measuring is important. Also, as evidenced by Puschmann and Powell (2018), framing and public expectations around the analytical capabilities of sentiment analysis have often been misinterpreted and overly optimistic. To address these types of concern, it is important to briefly clarify what the EMOTIVE and Vader tools consider to be within scope and what they actually measure in terms of emotional expressions.

The EMOTIVE tool only measures the explicit expression of an emotion in a piece of text, and it has been shown to do this accurately (Sykora et al. 2013), including negation cases (e.g., “not pleased, nor looking forward to this”) as well as conjunctions (e.g., “was horrifying but am happy now”) and scaling intensifiers (e.g., “extremely annoyed” vs. “slightly annoyed”). The terms and expressions within EMOTIVE are explicit in the sense that they are directly linked to an emotional state (e.g., “terrified,” “saddened”) rather than based on word co-occurrence terms (e.g., “dignitaries,” “dirt,” “profit”). The latter carry more semantic ambiguity, which can be problematic, but overall such terms have indeed been shown to associate with broadly negative or positive sentiments as rated by human raters (Ribeiro et al. 2016). With co-occurrence terms, lexicons tend to have higher recall while precision suffers (Ribeiro et al. 2016). Although Vader handles negations and intensifiers, it relies on implicit rather than more explicit sentiment-bearing words and expressions.5 Metaphors and rhetorical expressions are not distinguished by the automated tools. Neither tool, in its current form, determines the object of the sentiment/emotion that the author intended to address with the expressed affect, but the aim of the two tools is to assess the overall use of affective language by the author of the text. Finally, the tools employed in our study have all been previously validated on some general “gold standard” datasets.
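EMOTIVE itself is not publicly distributed, so the toy sketch below is not the authors' system; it only illustrates, under simplified assumptions and with an invented mini-lexicon, how explicit emotion terms, negation, and scaling intensifiers of the kind described above can be combined when scoring a short text:

```python
# Toy illustration (not the EMOTIVE system) of negation- and intensifier-aware
# scoring over a small, invented lexicon of explicit emotion terms.
EMOTION_LEXICON = {
    "pleased": ("happiness", 0.6),
    "happy": ("happiness", 0.7),
    "horrifying": ("fear", 0.9),
    "annoyed": ("anger", 0.5),
}
INTENSIFIERS = {"extremely": 1.5, "slightly": 0.5}
NEGATIONS = {"not", "never", "nor"}

def score_emotions(text):
    tokens = text.lower().replace(",", " ").split()
    scores = {}
    for i, token in enumerate(tokens):
        if token not in EMOTION_LEXICON:
            continue
        emotion, strength = EMOTION_LEXICON[token]
        window = tokens[max(0, i - 2):i]          # look back two tokens
        if any(w in NEGATIONS for w in window):
            continue                               # a negated emotion term is dropped
        for w in window:
            strength *= INTENSIFIERS.get(w, 1.0)   # scale by any intensifier
        scores[emotion] = scores.get(emotion, 0.0) + strength
    return scores

print(score_emotions("not pleased, nor looking forward to this"))  # {} -- negated
print(score_emotions("extremely annoyed"))                          # {'anger': 0.75}
```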

Trump’s actual use of affect-bearing language in general may be more nuanced than the tools are able to detect. Due to possibly high sensitivity across contexts/different datasets (Ribeiro et al. 2016), we selected several samples from Trump’s tweets to manually assess whether the automatically detected EMOTIVE emotions and Vader sentiments were in agreement with a human-based manual assessment of the expressed language. Out of 600 (see next section) EMOTIVE-judged emotional tweets, we manually reviewed a sample of 161 tweets and found the following: seventy-nine out of eighty-two tweets were correctly identified as expressing some happiness (during presidency), forty-two out of forty-seven as expressing some sadness (during presidency), and thirty-two out of thirty-two as expressing some disgust (during the campaign). Sadness was misclassified 10.6 percent of the time, followed by happiness at 3.7 percent of the time. As for Vader, out of a sample of 161 tweets judged by Vader as containing positive or negative sentiment (during presidency), fifty-two out of sixty-four tweets were correctly identified as expressing some negative sentiment, and seventy-four out of ninety-seven as expressing some positive sentiment. Positive sentiment was misclassified 23.7 percent of the time, while negative sentiment was misclassified 18.8 percent of the time. Hence, with computational tools, one must bear these error rates in mind when interpreting results.

5 This lexicon is public and available at https://github.com/cjhutto/vaderSentiment/blob/master/vaderSentiment/vader_lexicon.txt.
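The misclassification rates quoted above follow directly from the raw review counts; the short snippet below simply reproduces that arithmetic:

```python
# Reproduce the reported misclassification rates from the raw review counts.
checks = {
    "EMOTIVE happiness": (79, 82),   # (correctly identified, reviewed)
    "EMOTIVE sadness":   (42, 47),
    "EMOTIVE disgust":   (32, 32),
    "Vader negative":    (52, 64),
    "Vader positive":    (74, 97),
}
for label, (correct, total) in checks.items():
    error_rate = 100 * (total - correct) / total
    print(f"{label}: {error_rate:.1f}% misclassified")
# e.g. sadness -> 10.6%, happiness -> 3.7%, negative -> 18.8%, positive -> 23.7%
```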

Trump’s 200 Days in Office via Twitter: An Analysis

To gain a more complete picture of Trump’s tweeting habits as President of the United States, we have undertaken a study of Trump’s personal @realDonaldTrump Twitter account, focusing on his first 200 days in office.6 Because of the quantity of the data available, our analysis reached a number of findings, some more significant than others. For example, we observed that Trump tweeted messages with accusations more frequently on a Monday and Tuesday than on other days of the week, which could possibly have been influenced by the news cycle, or, as Wells et al. (2016) have suggested, Trump tends to tweet more controversially when news coverage of him is low. We also found that on average his negative tweets are more likely to be retweeted and that his positive tweets often contain negative connotations.

As presented in the methods section, three state-of-the-art computational tools were employed to analyze sentiment, emotions, and psycholinguistic features. An expert discourse analysis research associate conducted a further manual semantic discourse analysis over a small sample of Trump’s tweets (300 tweets in total). The computational tools were used to analyze 4,218 tweets from @realDonaldTrump posted between March 17, 2016 and August 7, 2017. To gain an understanding of how Trump’s tweeting might have changed since he became president, we have broken down his Twitter timeline into three main time periods of activity:

His campaign (237 days, March 17, 2016⁷–Nov 8, 2016), transition period (72 days, Nov 9, 2016–Jan 19, 2017), and his first 200 days of presidency (200 days, Jan 20–Aug 7, 2017).

The sample of 300 tweets covers the period between May 29 and July 10, 2017.

Over the entire three time periods of 509 days of tweeting, there were only six days on which no tweets were sent from Trump’s account. His highest Twitter activity was throughout his campaign (eleven tweets per day on average), with eighty-three tweets in a single day. Zarella (2016) found that the average Twitter user tweets four times a day, which indicates that Trump has a higher average than most Twitter users. However, Zarella (2016) also found that those with a high number of Twitter followers tended to tweet more than those with a low number of followers, which could have contributed to Trump’s frequency in tweeting. His average tweet rate per day decreased throughout the first 200 days of his presidency and is now just under six a day. During Trump’s first 200 days in office, the overall attention given to his tweets on Twitter significantly increased in terms of other users “liking” (93 percent increase) and retweeting (18 percent increase) his tweets.

6 Trump’s 200th day in office was August 7, 2017; his inauguration took place on January 20, 2017.
7 This is where our dataset begins. Although it would be possible to go further back in his timeline, we judged this to be a sufficiently large dataset.


Table 1: Overview of the Tweets Analyzed, Broken down by Campaign, Transition Period, and First 200 Days of Trump’s Presidency

                                              Campaign        Transition    Presidency
Days analyzed                                 237             72            200
No. of tweets / avg. tweets per day           2,684 / 11.32   370 / 5.14    1,164 / 5.82
No. of days without a tweet                   2               2             2
Highest no. of tweets on a day                83              11            16
Emotionality (tweets with explicit emotions)  378 (14.10%)    75 (20.27%)   147 (12.63%)
Retweets (RTs)                                144 (5.37%)     17 (4.60%)    102 (8.7%)
Avg. RTs of Trump’s tweets                    96,432.49       99,447.24     114,241.45
Avg. favoriting of Trump’s tweets             247,148.78      385,434.55    477,355.49

Source: Elayan, Sykora, and Jackson 2020

Overall, out of the 4,218 tweets sent between March 17, 2016 and August 7, 2017, the EMOTIVE system detected 600 (14.2 percent) tweets containing explicit emotionally charged language. This is higher than the expected average emotionality, which Sykora et al. (2014) found to be around 12 percent when analyzing over 1.5 million tweets across a wider range of twenty-five national and global events and topics. An overview of the emotions detected with the EMOTIVE tool is provided in Figure 1, which highlights the dominant emotions. Most anger was expressed during the transition period (7.10%), disgust during the campaign (9.08%, nearly three times higher than during the presidency), fear during the transition (5.34%), happiness and sadness during the presidency (52.78% and 31.24%, respectively), shame during the transition (3.56%), and surprise during the campaign (18.45%).

Figure 1: Overview of Basic Emotions Expressed over the Campaign, Transition Period, and First 200 Days of Trump’s Presidency
Source: Elayan, Sykora, and Jackson 2020


Topics, Types of Messages, Sentiment, and Tweet Popularity

Based on the nuanced semantic discourse analysis of Trump’s tweets (N = 300), we found a number of indicative and interesting correlations (see Table 2 for an overview). There is a significant correlation between the number of types of expression used in a tweet (e.g., accusation, threat, justification, observation, etc.) and the number of retweets such tweets receive (ρ = .284, p < .01). This suggests that the more combinations of functions per tweet, the more likely it is to be retweeted. Interestingly, there is no significant connection between the number of topics mentioned in a single tweet and a higher count of retweets. Seemingly, topics do not matter as much as the function of the message itself. Nevertheless, our analysis has shown that users “like” both, as the number of topics and the number of functions each tend to co-occur with a higher number of users favoriting those tweets (ρ = .130 and ρ = .137, p < .05, respectively). Even more noteworthy is the indication that the more negative the overall message of a tweet, the more likely it is to be retweeted and favorited (ρ = -.275 and ρ = -.198, p < .01, respectively).8 In fact, negative messages overall have more retweets in the analyzed dataset (N = 300), 37.50 percent in total more than positive ones. On average a positive message gets retweeted 18,789 times, whereas a negative one gets retweeted 27,952 times.9 Our analysis confirmed that negative tweets generate more retweets and favorites than positive or neutral tweets.
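The rank correlations reported here (and laid out in Table 2 below) can be computed with standard tooling; a minimal sketch follows, assuming the manually coded sample is held in a pandas DataFrame whose file and column names ('n_topics', 'n_expression_types', 'retweets', 'favorites', 'sentiment') are illustrative rather than taken from the paper:

```python
# Sketch of the Spearman rank correlations behind Table 2; the CSV file and
# column names are illustrative assumptions, not artifacts released with the paper.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("trump_sample_300.csv")  # hypothetical file with the 300 coded tweets

rho, p = spearmanr(df["n_expression_types"], df["retweets"])
print(f"types of expression vs. retweets: rho = {rho:.3f}, p = {p:.4f}")

# Full pairwise matrix, as laid out in Table 2:
cols = ["n_topics", "n_expression_types", "retweets", "favorites", "sentiment"]
print(df[cols].corr(method="spearman"))
```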

Table 2: Correlation Matrix of the Topic, Types of Expressions, Sentiment, Retweets, and Favorites (Manually Analyzed Tweet Sample, N = 300)

Correlations (Spearman’s rho)
                                                No. of    Types of     Re-tweet   Favorite
                                                topics    expression   count      count     Sentiment
No. of topics        Correlation coefficient    1.000
                     Sig. (2-tailed)            .
Types of expression  Correlation coefficient    .003      1.000
                     Sig. (2-tailed)            .952      .
Re-tweet count       Correlation coefficient    -.064     .284**       1.000
                     Sig. (2-tailed)            .270      .000         .
Favorite count       Correlation coefficient    .130*     .137*        .910**     1.000
                     Sig. (2-tailed)            .039      .029         .000       .
Sentiment            Correlation coefficient    -.135*    -.043        -.275**    -.198**   1.000
                     Sig. (2-tailed)            .019      .461         .000       .001      .

*. Correlation is significant at the 0.05 level (2-tailed). **. Correlation is significant at the 0.01 level (2-tailed).
Source: Elayan, Sykora, and Jackson 2020

8 This correlation, although weak, also holds on the larger dataset of Trump’s tweets (N = 4,218), with ρ = -.119 and ρ = -.037 (significant at p < .01 and p < .05, two-tailed, respectively) for the period from March 17, 2016 to August 7, 2017, where sentiment was judged automatically with the Vader system (Hutto and Gilbert 2014) rather than by the manual discourse analysis, but this still provides an indicative estimate.
9 This is confirmed over the larger dataset of Trump’s tweets (N = 4,218), using Vader for sentiment polarity, where on average a negative message is retweeted 15,230.03 times, as opposed to 12,172.62 times when positive (similarly for the average number of times a tweet is favorited: 53,192.40 vs. 50,562.92).


Seemingly Positive yet Negative Messages

Trump seems to be sending out slightly more positive messages than negative ones, although this difference is not significant for the small dataset (128 positive vs. 119 negative, N = 300). Over a longer time period (N = 4,218), however, there seems to be an overall tendency for Trump to use more positive language and also to express happiness more frequently relative to negative language and sadness (527 positive vs. 216 negative, 250 happiness vs. 192 sadness, N = 4,218).10 The tweets that the tool did not highlight were either neutral or contained none of the eight basic emotions available in EMOTIVE.

Through the manual semantic discourse analysis (N = 300), we found that even when a tweet is judged by EMOTIVE and Vader as “positive,” it can still contain some negativity, highlighting here an issue and limitation of automated approaches. Below are some example tweets showing the common pattern of a seemingly positive tweet that ends with negative undertones:

Thank you for all of the nice statements on the Press Conference yesterday. Rush Limbaugh said one of greatest ever. Fake media not happy! (Trump 2017)
Proud to welcome our great Cabinet this afternoon for our first meeting. Unfortunately 4 seats were empty because Senate Dems are delaying! (Trump 2017)
I just finished a great meeting with the Republican Senators concerning HealthCare. They really want to get it right, unlike OCare! (Trump 2017)
Dow hit a new intraday all-time high! I wonder whether or not the Fake News Media will so report? (Trump 2017)
Law enforcement & military did a spectacular job in Hamburg. Everybody felt totally safe despite the anarchists. @PolizeiHamburg #G20Summit (Trump 2017)

In the first example, while the computational tools detected positive sentiment for the first part of the tweet, EMOTIVE did not score “not happy” as “unhappy/sad.” It simply scored it as “not” “happy.”11 Another tweet that computational methods could not accurately analyze was posted on July 3, 2017, when Trump tweeted “Dow hit a new intraday all-time high! I wonder whether or not the Fake News Media will so report?” This particular tweet provoked immense attention, with several news outlets discussing the tweet within a wider economic context. The first part of the tweet is positive, clearly indicating that the economic situation was in good shape. A passive-aggressive rhetorical question follows, calling news outlets “Fake News” and suggesting with an accusative undertone that mainstream media would not report on these positive economic developments. We investigated the particular tweet, which had 24,959 retweets and 113,903 “likes.” We found dozens of articles, blogs, and videos discussing it, with many experts contradicting what the President of the United States had claimed (Bryan 2017; Paletta and Swanson 2017; Struyk 2017). For example, Bryan (2017) reported that “[t]he Dow Jones industrial average did hit an intraday high on Monday [July 3, 2017] but it closed below its record set June 19.” Paletta and Swanson (2017) reported that although unemployment had dropped from 4.8 percent to 4.3 percent since Trump became president, automobile sales dropped 5 percent in June, US factory output had dropped, and the construction of new homes dropped to an eight-month low. Struyk (2017) explained that the stock market had hit an all-time high in thirty of the previous fifty-four months, which indicates that the stock market had been doing well since recovering from the recession. Trump’s “positive” tweet thus contained an insult to mainstream media while boasting about a stock market that had been improving since before his presidency.

10 The sentiment on the smaller dataset (N = 300) was judged in the manual discourse analysis, whereas on the larger dataset (N = 4,218) it was judged automatically with the Vader system (Hutto and Gilbert 2014), where sentiment polarities of only the clearly positive or negative messages were considered, i.e., compound Vader > 0.8 or < -0.8. Expressions of sadness and happiness were judged automatically using the EMOTIVE system (Sykora et al. 2013).
11 Using the word “not” before a word does not imply that the opposite meaning is intended (e.g., saying not sad does not imply that one is happy).

Emphasis in Trump’s Tweets

From our semantic discourse analysis it is evident that Trump tweets on a large variety of topics and uses tweets for different functions. In the 300-tweet sample, we found over 150 distinct topics (e.g., law enforcement, gas prices, etc.) and observed over seventy different types of expressions (e.g., self-praise, passive aggressiveness) and functions (some functions suggested by Van Dijk [1985] are questions, orders, promises, congratulations, and accusations).12 An overview of the ten most frequent topics and functions is highlighted in Table 3.

Table 3: Topics and Types of Expression in Trump’s Tweets (N = 300)

Topic             Frequency    Function of Expression     Frequency
Fake news         28           Accusation                 61
Obamacare         11           Evaluation                 32
Russia            11           Intention                  22
Obama             10           Expectation                17
Clinton           9            Denying accusation         14
PR                9            Thanks                     14
Democrats         8            Factual statement          14
Paris agreement   8            Blame                      11
James Comey       8            Stating his reliability    9
North Korea       7            Optimism                   8

Source: Elayan, Sykora, and Jackson 2020

It is noteworthy that as much as twenty percent of all his tweets (in the analyzed sample) are messages with an accusative tone. Moreover, it would seem these types of messages occurred significantly more frequently on certain days of the week (χ2(6) = 13.37, p < .05), in particular on Mondays and Tuesdays (see Table 4). This might be sensitive to idiosyncrasies of the specific six-week time period in the sample. It is not immediately evident how this observation may be explained, although there may be some connection to the 24/7 news cycle, or rather the “news cyclone” as put forward by Klinenberg (2005), where the time cycle for news making in the age of digital production can at times be radically different, erratic, and ongoing. Trump may well have tried to somehow game the traditional news cycle, and there is some evidence that such constant news exposure was of benefit to his profile (Wells et al. 2016; Bitecofer 2018; Roozenbeek and Palau 2017).

12 Due to space constraints, given the large number of topics and expressions, we make the full list of these available online at https://servername_author_reference_redacted.ac.uk/trump_topics_and_expression_style/tables.html.


Table 4: Day of Week Break-down of Accusative Tone Messages

                       Monday  Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday  Total
Accusations: No (0)      42      40        27         35       40       24       31     239
Accusations: Yes (1)     21      14         5          8        4        3        6      61
Total                    63      54        32         43       44       27       37     300

Source: Elayan, Sykora, and Jackson 2020
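The chi-square test of independence reported above can be reproduced directly from the contingency counts in Table 4; the sketch below uses scipy, and its output should correspond to the reported χ2(6) = 13.37, p < .05:

```python
# Day-of-week test over the Table 4 contingency table (rows: non-accusative vs.
# accusative tweets; columns: Monday through Sunday).
from scipy.stats import chi2_contingency

observed = [
    [42, 40, 27, 35, 40, 24, 31],  # tweets without an accusative tone
    [21, 14,  5,  8,  4,  3,  6],  # tweets with an accusative tone
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```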

The figure below summarizes the number of distinct topics and types of expressions over the analyzed sample. This illustrates that Trump tends to use a highly varied set of expressive styles and addresses a wide range of topics.

Figure 2: Total Number of Tweets, Distinct Topics, and Types of Messages (N = 300)

Source: Elayan, Sykora, and Jackson 2020

Another interesting pattern we observed was Trump’s use of capital letters for certain words to amplify the message, especially when using an accusative tone (Hodges 2017; Kreis 2017). Enli (2017) has viewed Trump’s use of capital letters to emphasize his sincerity and engagement as a reflection of his authenticity. For instance:

SEE YOU IN COURT, THE SECURITY OF OUR NATION IS AT STAKE! (Trump 2017)
Sorry folks, but if I would have relied on the Fake News of CNN, NBC, ABC, CBS, washpost or nytimes, I would have had ZERO chance winning WH. (Trump 2017)
The Democrats have become nothing but OBSTRUCTIONISTS, they have no policies or ideas. All they do is delay and complain. They own ObamaCare! (Trump 2017)
James Comey leaked CLASSIFIED INFORMATION to the media. That is so illegal! (Trump 2017)

Kreis’ (2017, 1) discourse analysis of Trump’s tweets found that Trump uses his informal, direct, and provocative tweets to create the “concept of a homogeneous people and a homeland threatened by the dangerous other.” We observed the same, and below are several examples to illustrate Kreis’ (2017) hypothesis.


The judge opens up our country to potential terrorists and others that do not have our best interests at heart. Bad people are very happy! (Trump 2017)
If the US. does not win this case as it so obviously should, we can never have the security and safety to which we are entitled. Politics! (Trump 2017)
The crackdown on illegal criminals is merely the keeping of my campaign promise. Gang members, drug dealers & others are being removed! (Trump 2017)
The weak illegal immigration policies of the Obama Admin. allowed bad MS 13 gangs to form in cities across US. We are removing them fast! (Trump 2017)
Don’t let the fake media tell you that I have changed my position on the WALL. It will get built and help stop drugs, human trafficking etc. (Trump 2017)

iPhone vs. Android: What Device Was Trump Using?

The metadata collected with each tweet also provided information about the platform (indicative of the type of device) from which the tweet was sent. Most of Trump’s tweets were sent from an iPhone (see Table 5).

Table 5: Platforms Tweets Were Sent from during the Time Period March 26–August 7, 2017

Platform              Frequency    Percent
Media Studio          38           4.5
Twitter Ads           33           3.9
Twitter for iPhone    757          90.4
Twitter Web Client    9            1.1
Total                 837          100.0

Source: Elayan, Sykora, and Jackson 2020

However, before March 26, 2017, Trump’s account used an Android device as well; most tweets came from either the Android or the iPhone device, with a roughly equal split between the two (see Table 6).

Table 6: Platforms Tweets Were Sent from during the Time Period March 17, 2016–March 26, 2017

Platform               Frequency    Percent
Instagram              2            0.1
Media Studio           1            0.0
Periscope              1            0.0
Twitter Ads            1            0.0
Twitter for Android    1,502        44.4
Twitter for iPad       22           0.7
Twitter for iPhone     1,574        46.6
Twitter Web Client     278          8.2
Total                  3,381        100.0

Source: Elayan, Sykora, and Jackson 2020


From March 26, 2017 onward, Trump’s account entirely stopped using an Android device, and it was confirmed by his staff that Trump had begun using an iPhone (McCormick 2017). Hence this pattern can no longer be observed. Nevertheless, in comparing the Android and iPhone tweets, we found that the hour and day of tweeting (see Figures 3 and 4) also differed significantly, with many more tweets being sent from the Android device over the weekend and in the morning. We postulate that this could have been because the Android tweets were sent by Trump directly, whereas the iPhone tweets tended to be sent by his PR team. Others have also asserted that Trump likely used to tweet from an Android phone (a Samsung Galaxy), whereas the iPhone tweets likely came from his PR staff (e.g., Robinson 2016). We also found that the Android tweets contained significantly more anger, sadness, disgust, and fear.
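The device comparison rests on the “source” field stored in each tweet’s metadata; a minimal sketch of the cross-tabulation by hour and day follows, where the file and column names ('created_at', 'source') are illustrative assumptions rather than the authors' artifacts:

```python
# Cross-tabulate tweet counts by device against hour of day and day of week
# (cf. Figures 3 and 4); file and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("trump_tweets.csv", parse_dates=["created_at"])  # hypothetical file
df = df[df["source"].isin(["Twitter for Android", "Twitter for iPhone"])]

by_hour = pd.crosstab(df["created_at"].dt.hour, df["source"])
by_day = pd.crosstab(df["created_at"].dt.day_name(), df["source"])

print(by_hour)  # tweets per hour of day, one column per device
print(by_day)   # tweets per day of week, one column per device
```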

Figure 3: Time of Day that Tweets Were Sent (Android, N = 1502; iPhone, N = 1574) Source: Elayan, Sykora, and Jackson 2020

Figure 4: Day of Week that Tweets Were Sent (Android, N = 1502; iPhone, N = 1574) Source: Elayan, Sykora, and Jackson 2020


Psychometric Markers and Twitter Use by Trump’s Political Peers

A number of researchers have already employed the LIWC psycholinguistic tool to look at aspects of Trump’s use of words and language in general (e.g., Ahmadian, Azarshahi, and Paulhus 2017; Jordan and Pennebaker 2017), and the tool has been extensively employed in psycholinguistics research for investigating psychological and related personality traits (Tausczik and Pennebaker 2010). Figure 5 provides an overview of some main LIWC markers, reporting the average scores for the word occurrence categories broken down by the three time periods of Trump’s tweeting activity (@realDonaldTrump tweets, N = 4,218): his campaign, the transition, and the first 200 days of presidency. The LIWC categories mentioned in Figure 5 are as follows:

cognitive processes (language related to insight, causal, discrepancies, certainty, etc.)
perceptual processes (language to do with seeing, hearing, or feeling)
biological processes (words associated with the body, health, sexuality, eating/drinking)
drives (words associated with affiliation, achievement, power, reward, and risk)
social (words associated with family, friends, female, or male genders)
personal concerns (work, leisure, money, religion, etc.)
time orientation (language associated with either the past, present, or future)
relativity (words related to motion, space, and time)
informal language (swearwords, net speak, etc.)
function words (pronouns, personal pronouns, articles, conjunctions, prepositions, etc.)
all punctuation (characters such as “.” “;” “-” “?” “!” etc.)

As expected, the LIWC markers were fairly consistent across the various time periods, although there were some pronounced differences. For instance, on average the use of punctuation, informal language, and time-oriented language in Trump’s tweets has declined. At the same time, the proportion of language associated with drives, personal concerns, and relativity has shown some increase. There are intuitive interpretations behind these empirical observations, although it must be acknowledged that these are indicative differences only; for instance, “drives” relates to words expressing reward/risk, achievement, power, etc., which to some extent reflects Trump’s current presidential role. The dashed line in Figure 5 represents the average LIWC scores for the entire PTDC corpus. What is interesting here are the outliers against Trump’s tweets. Trump’s tweets contain significantly less language related to personal concerns. As opposed to findings by Ahmadian, Azarshahi, and Paulhus (2017), who suggest that Trump uses highly informal language, we actually observed that, in relation to a large collection of political peers (i.e., the PTDC corpus), his language contains a much smaller amount of informal language and punctuation within the first 200 days of being president.
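For readers unfamiliar with how such markers are produced, the following minimal sketch illustrates the underlying idea: a LIWC-style score is simply the percentage of a text’s words that fall into a category dictionary. The tiny dictionary below is invented purely for illustration; the actual LIWC2015 dictionaries are proprietary and far larger, and this is not the authors’ implementation.

```python
import re

# Made-up stand-in for LIWC category dictionaries (illustrative only)
TOY_DICT = {
    "drives":   {"win", "power", "great", "best", "achieve", "reward"},
    "informal": {"ok", "wow", "lol", "btw", "yeah"},
    "social":   {"friend", "friends", "family", "people", "ally"},
}

def liwc_style_scores(text: str) -> dict:
    """Return, per category, the percentage of words matching the dictionary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {cat: 0.0 for cat in TOY_DICT}
    return {
        cat: 100.0 * sum(w in vocab for w in words) / len(words)
        for cat, vocab in TOY_DICT.items()
    }

# Example (placeholder sentence, not study data)
print(liwc_style_scores("Great win for our friends, the best is yet to come!"))
```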


Figure 5: LIWC-based Markers (Used Widely in Psycholinguistics Research), Average Scores

Source: Elayan, Sykora, and Jackson 2020

Previous findings using LIWC have also shown that Trump’s use of language is significantly low on the analytical thinking dimension, although these results have not been extended to Twitter, and no other work has yet put this in relation to his peers’ (i.e., US state governors and members of the US Senate and Congress) language use on Twitter. Hence, here we attempt to see whether this finding extends to Twitter and how some of the central dimensions of language differ across his political peers’ accounts.

Figure 6: LIWC Central Dimensions of Language, Average Scores

Source: Elayan, Sykora, and Jackson 2020


Analytic thinking, clout, authenticity, and tone are four summary variables computed by the LIWC system and are represented as standardized scores that have been converted to percentiles (LIWC 2018). Figure 6 highlights and confirms the significant difference between Trump and his peers, with Trump using lower levels of analytic-thinking style language, as reported by Jordan and Pennebaker (2017). According to Pennebaker et al. (2015), using words indicating hierarchical thinking patterns implies analytical thinking. Clout reflects the use of terms that convey social status to others, demonstrating confidence or leadership (Kacewicz et al. 2014). Authenticity reflects the use of terms that indicate an individual is vulnerable, humble, and human (Pennebaker et al. 2015). We extended Jordan and Pennebaker’s (2017) initial finding to Twitter and found that, among current political contemporaries, Trump’s language still scores low on the analytic dimension, despite Jordan and Pennebaker’s (2017) pointing out an overall historically lowering trend in the analytic score (i.e., as analyzed on State of the Union addresses over time and across presidents). Only in the first 200 days of presidency could we observe a slight increase in Trump’s analytic score within his tweets. The clout score is also noteworthy; the average across Trump’s tweeting during the three time periods is 7.5 points below the PTDC-based score. This score tends to be higher when one employs language that implies confidence and a sense of certainty and uses social words such as “ally,” “friends,” or “group.” For this reason it is interesting that we observe this score to be considerably lower for Trump than for his peers. Finally, the authentic score is on average only 4.5 points higher in Trump’s tweets as compared to the PTDC-based score. Over the entire dataset under analysis, this indicated that Trump’s language on Twitter portrays authenticity by employing personal language, such as higher levels of first-person singular pronouns and fewer words like “should” (Pennebaker 2011; LIWC 2018).

Does Trump’s Tweeting Affect the Media?

Finally, we undertook a rudimentary study of the frequency with which a small selection of mainstream media outlets reported on Trump’s Twitter activity. We adopted Thelwall’s (2009) recommended method of web impact assessment from the field of Webometrics for a comparative analysis of a small selection of (American and non-American) mainstream news outlets and their coverage of different politicians’ individual tweets. In particular, we used Google search13 to identify the number of unique articles on news outlets that explicitly mention the name of a politician tweeting, e.g., site:theguardian.com “donald trump tweeted”, which identifies all the unique links to pages mentioning the quoted phrase. Thelwall (2009) explained that the evidence generated by this method is clearly indicative of web impact, and it is frequently employed in Webometrics to measure such instances of content across the web. Our findings suggested that Trump’s tweets do not only generate reactions (likes, retweets) on Twitter itself: the sample of mainstream news outlets we examined was also giving unprecedented attention to Trump’s avid tweeting. This supports the work by Wells et al. (2016), which pointed out that Trump is aware of the media coverage given to his tweets and purposefully unleashes a “tweetstorm” to gain more media coverage. This observation is in line with traditional empirical communications research as first articulated by Lippmann (1922), in that controversy generates more attention and potentially triggers more reactions across the public.
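To make the procedure concrete, the sketch below constructs the kind of site-restricted queries used in this form of web impact assessment. The outlet domains and query phrases simply mirror the example above and are illustrative; the sketch only builds query strings and search URLs, and it does not attempt to harvest result counts automatically, since automated scraping of Google result pages is restricted.

```python
from urllib.parse import quote_plus

# Illustrative outlet domains and query phrases (assumptions, not the study's full list)
outlets = ["cnn.com", "foxnews.com", "nytimes.com", "nypost.com",
           "bbc.co.uk", "theguardian.com"]
phrases = ['"donald trump tweeted"', '"barack obama tweeted"']

# One site-restricted query per (outlet, phrase) pair, as in the example in the text
for outlet in outlets:
    for phrase in phrases:
        query = f"site:{outlet} {phrase}"
        url = "https://www.google.com/search?q=" + quote_plus(query)
        print(query, "->", url)
```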

We chose two major American news outlets on opposite sides of the political spectrum (as per the left-right political ideological stance scale; see Castles and Mair 1984), CNN and Fox News, and two newspapers broadly perceived as leaning left and right respectively, The New York Times and The New York Post, to compare their coverage of the tweets of the current and the former US presidents. We found, as illustrated in Table 7, that President Trump’s tweeting was given a substantial amount of attention, especially in comparison with former President Obama. To provide further context, we also examined the Twitter content coverage of these two politicians on two mainstream British news outlets and three Arab mainstream news outlets (in Arabic)14 to gain some indication of whether Trump’s tweeting was potentially influencing the media internationally (see Table 7). We hope that our research will help pave the way for other researchers to examine how political leaders’ social media use influences different media outlets in both English- and non-English-speaking countries.

13 https://support.google.com/websearch/answer/2466433?hl=en

Arguably, former President Obama’s Twitter activity was not as controversial or as frequent as President Trump’s. We conducted the same study on UK politicians. At the time we conducted our analysis, Theresa May was Prime Minister of the UK and was communicating through Twitter almost daily, often several times a day, from two different Twitter accounts (@10DowningStreet and @theresa_may). Her Twitter activity had been discussed fewer than forty times in The Guardian, and was either discussed very little or not discussed at all in the other news outlets we examined. Online media coverage of other British politicians’ Twitter activity is so sparse that it is not worth reporting. We also examined the frequency with which Al-Ghad (an Arabic Jordanian news outlet) reported on King Abdallah and Queen Rania’s tweets, and found that there was very little coverage of their Twitter activity (in fact, at the time of this study the King’s tweets had been reported on only eight times, as opposed to Trump’s 442; AlJazeera Arabic showed no results at all for Queen Rania but three instances for Trump). Overall, this evidence suggests that President Trump’s Twitter use, and the significance of specific individual tweets, is not only influencing the media internationally; his Twitter activity is also gaining more attention and influencing the media more than that of politicians in their own countries.

Table 7: Online News Coverage of Donald Trump and Barack Obama’s Twitter Activity

Media Outlet                        Number of Articles Discussing a Tweet
                                    Trump        Obama
New York Post                       18           30
New York Times                      337          41
CNN                                 5,840        266
Fox News                            2,560        56
BBC                                 569          21
The Guardian                        1,050        96
alGhad (Arabic Newspaper)           442          2
alJoumhouria (Arabic Newspaper)     52           1
AlJazeera (Arabic)                  3            0
AlJazeera (English)                 108          4

Source: Elayan, Sykora, and Jackson 2020

Bitecofer (2018) pointed out that during the presidential campaign Trump relied on his frequent “controversial” statements on Twitter, an unprecedented and very public way for a presidential candidate to dominate the news cycle continuously. As far as we are aware, we are the first to employ a Webometrics method to assess this frequently reported observation (e.g., Wells et al. 2016; Roozenbeek and Palau 2017). The search counts reported in Table 7 indicate that direct coverage of individual tweets across the considered news outlets is substantially swayed toward Trump’s tweets. Notwithstanding, this evidence must be taken with reservation and as indicative only, as we were not able to explore a systematic range of news sources nor a comprehensive cohort of political leaders with prolific Twitter accounts, and hence accounting for a variety of potential confounding factors remains to be done. Work based around empirical Webometrics methods (i.e., Thelwall 2009) to this end represents an interesting and important area of potential future work.

14 We chose this language because it is spoken in over twenty countries and because of the ongoing political conflicts in a number of Arabic-speaking countries. Further work would include investigating news reports about Trump’s Twitter activity in other languages.

Discussion

Our work presents, among a number of findings, a first systematic effort to address Trump’s communicative style in terms of his use of language and emotional expression across noteworthy time periods (his campaign, the transition, and his first 200 days of presidency). During these three time frames, between five and nine percent of all of Trump’s tweeting activity consisted of retweets. In contrast, the official presidential Twitter account @POTUS (i.e., President of the United States), which was handed over to Trump’s presidency from Obama’s administration, consisted of over fifty-nine percent retweeted content (i.e., 582 out of 981 tweets), and interestingly a substantial amount of these were retweets of messages from Trump’s own personal @realDonaldTrump account. The @POTUS account is considerably less active: over the 200 days (January 20, 2017 to August 7, 2017), on as many as twenty-four days there were no tweets at all, compared to Trump’s personal account with only two inactive days over the same time period. Although not conclusive, the evidence does point toward Trump using his own account, while his staff is likely managing the @POTUS account. Hence our analysis in this study focused on Trump’s primary @realDonaldTrump account. The rest of this section outlines and discusses some of our findings and the potential implications of the presented research.
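The account-level summaries above (retweet share and number of inactive days) can be computed along the following lines; this is merely a sketch of the calculation, not the authors’ code, and the file name and column names (created_at, is_retweet) are hypothetical.

```python
import pandas as pd

def account_summary(tweets: pd.DataFrame, start: str, end: str) -> dict:
    """Count tweets, retweet share, and inactive days within a date window."""
    period = tweets[(tweets["created_at"] >= start) & (tweets["created_at"] <= end)]
    days_with_tweets = period["created_at"].dt.date.nunique()
    total_days = (pd.Timestamp(end) - pd.Timestamp(start)).days + 1
    return {
        "tweets": len(period),
        "retweet_share_pct": 100.0 * period["is_retweet"].mean(),
        "inactive_days": total_days - days_with_tweets,
    }

# Hypothetical archives of @POTUS and @realDonaldTrump tweets
potus = pd.read_csv("potus_tweets.csv", parse_dates=["created_at"])
personal = pd.read_csv("realdonaldtrump_tweets.csv", parse_dates=["created_at"])

print(account_summary(potus, "2017-01-20", "2017-08-07"))
print(account_summary(personal, "2017-01-20", "2017-08-07"))
```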

Persily (2017, 64) described the Donald Trump campaign as “unprecedented in its breaking of established norms of politics.” Trump has also been accused of not behaving in a presidential manner by violating social norms (e.g., Azari 2016). Hence, among other things, we aimed to address how Trump’s Twitter use changed across his campaign, the transition, and his days of presidency. Notably, our analysis finds that Trump’s tweets contained a higher percentage of emotionally charged language use than average, with “happiness” being the most prominent emotion conveyed. We also find that popular tweets (i.e., by retweet and favorite counts) on average contain expressions of negative sentiment, relying on two automated tools, EMOTIVE and Vader. A subsequent manual semantic discourse analysis, which focused on a sample covering just under two months of tweets during Trump’s presidency, confirmed these correlations (Table 2 [ρ = -.275 for retweet count and ρ = -.198 for favorite count, both p < .01 with respect to negative sentiment]). Considerations around automated computational tools, with regard to their limitations and the role of the manual semantic discourse analysis, were discussed at length in the “Analysis Methods” section. It is hoped our study provides an informative use case in this regard. A noteworthy observation here is that as much as twenty percent of all tweets (N = 300 sample) carried an accusative tone, with the highest correlation in Table 2 being between retweet count and styles of expression (ρ = .284, p < .01). What this implies is that not only does Trump often seem to use accusations as a functional communicative tool, but also that the more functional styles of expression he mixes within a single tweet (such as accusation, blame, or expressions of intent), the more frequently the tweet gets retweeted. Moreover, we found that tweets charged with an accusative tone were more likely to be sent on Mondays and Tuesdays than would be expected by random chance. We were not able to substantiate any evidence as to the reasons for such behavior; one possible explanation could be that Trump may well consciously be attempting to play into the news cycle with respect to the week’s news reporting around his tweets (Klinenberg 2005; Bitecofer 2018). Using EMOTIVE, we also confirm the previously reported difference in Trump’s use of the Android vs. iPhone devices when tweeting, additionally finding a significant difference in emotional language use, specifically language of anger, disgust, fear, and sadness being communicated more frequently from the Android device.
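The kind of sentiment-popularity correlation reported above can be illustrated with a short sketch combining the Vader analyzer and Spearman’s ρ. This is not the exact analysis pipeline used in the study, and the example tweets and retweet counts below are placeholders rather than study data.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import spearmanr

# Placeholder tweets and retweet counts (illustrative only, not study data)
texts = [
    "What a great and productive day for our country!",
    "Total disaster, the fake news media is lying again!",
    "Thank you to all of the hard working people out there.",
    "Very sad to see such terrible and dishonest coverage.",
    "Big progress on jobs and the economy. Tremendous!",
]
retweet_counts = [8500, 21000, 6400, 15500, 9100]

# Vader negative-sentiment score per tweet
analyzer = SentimentIntensityAnalyzer()
neg_scores = [analyzer.polarity_scores(t)["neg"] for t in texts]

# Spearman rank correlation between negativity and retweet counts
rho, p = spearmanr(retweet_counts, neg_scores)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```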


Our study of President Donald Trump’s tweeting style led us to conclude that the majority of his tweets are either negative or include negative connotations even when they contain positive words. He uses simple vocabulary that generally appeals to voters, as reported by Thoemmes and Conway (2007). As pointed out, our research has shown that Trump’s Twitter followers mostly respond to his negative tweets, and, as Wells et al. (2016) have found, the media will respond to Twitter users’ negative responses to the president’s tweets. Employing a Webometrics method, we were able to indicatively quantify that Trump’s individual tweets tended to be covered by mainstream news outlets more frequently than those of a selection of prominent individuals who were also frequent tweeters. There seems to be proportionally higher-than-average reporting on Trump’s tweeting. Azari (2016) hypothesizes that the media’s amplification of Trump’s overall message may have indirectly contributed to his success in the campaign. Interestingly, other empirical evidence from a separate social media community, Reddit, points to the fact that even with negative media coverage, Trump’s popularity, as compared to other candidates during the campaign, did not seem to wane (Roozenbeek and Palau 2017). Nyhan (2015) explained that Trump played on US voters’ desires for an assertive leader, and although a president cannot affect policy by him/herself, the media’s repetition of Trump’s narrative arguably led to his victory. Although Azari (2016) and Nyhan (2015) discussed Trump’s overall narrative and tweeting, we argue that the media’s undue attention to his tweeting could contribute to unforeseen consequences, such as his type of rhetoric becoming the new norm for politicians, although more research is needed before such an assertion can be further substantiated. On Twitter itself, we found that during Trump’s first 200 days in office, as opposed to the campaign and transition periods combined, the overall attention given to his tweets significantly increased, in terms of other users “liking” (a ninety-three percent increase) and retweeting (an eighteen percent increase) his tweets. Favorites, now known as likes on Twitter, had historically been used by Twitter users as a way to save tweets for later (Gorrell and Bontcheva 2016), effectively a form of bookmarking interesting tweets rather than necessarily liking the content, even though the action of marking a tweet as a favorite was a public one. Hence, the above observation is certainly open to a range of interpretations, one possibly pointing to the Twittersphere user base “bookmarking” Trump’s average tweets more often since he became president.

While the psychometric computational LIWC tool has been used extensively to analyze Trump’s language and word choice in general (e.g., Ahmadian, Azarshahi, and Paulhus 2017; Jordan and Pennebaker 2017), we built on Jordan and Pennebaker’s (2017) findings on State of the Union addresses across presidents and extended them to social media. In particular, we found that Trump’s Twitter language scored substantially lower on analytical-thinking style language than his current political contemporaries’ tweets (the PTDC corpus).

Besides the reported findings and the four research questions addressed in this study, more broadly, we believe it is important that a president’s unprecedented use of a social media platform is characterized, archived, and reported within the academic community, which will hopefully stimulate and facilitate further ongoing research and debate around political leaders’ use of new media in potentially disruptive ways. To conclude, in a recent paper, Weeks and Gil de Zúñiga (2019) outlined six priority areas for future research in the study of political communication, specifically political misinformation, with the fifth area highlighting the important role that emotions communicated on social media likely play. As summarized in Weeks and Gil de Zúñiga (2019), the role of emotions in social media communication, especially by politicians and elites, deserves further study, and it is hoped our work can help contribute to this broader aim.


Conclusion

This study used a combination of three state-of-the-art computational tools to analyze sentiment (Vader), emotions (EMOTIVE), and psycholinguistic features (LIWC) in Trump’s tweets, as well as a manual semantic discourse analysis to investigate the related frequency of implied and mentioned topics and functional styles of expression, which often consisted of direct accusations. We carefully considered a number of methodological issues around the application of computational tools. This study is the first to investigate Trump’s emotional language use on Twitter systematically, across three distinct noteworthy time periods: his campaign, the transition, and his first 200 days of presidency. Our findings show a higher-than-expected use of emotional language, with some specific differences across time periods. Trump’s tweets have also been retweeted and liked substantially more frequently since he became president, while negative tweets in general tended to be more popular. Following previous researchers’ work on using LIWC to analyze presidential word and language use in general, we confirmed and extended prior findings onto the social media platform Twitter. Our results showed that Trump’s language differs significantly from that of his contemporary political peers (US state governors and members of the US Senate and Congress [the PTDC corpus]) in using lower levels of analytical-thinking style language on Twitter specifically. This is despite an overall historically lowering trend in the analytic score as analyzed over State of the Union addresses across presidents. Although previous work suggested that Trump uses informal language, we observed that, in relation to a large collection of political peers, his language contains a smaller amount of informal language across all time periods, including the first 200 days of presidency. We also found that the small sample of media outlets we explored report on Trump’s tweets more frequently than on those of the leaders of their own countries. This amplification of Trump’s tweeting may have unforeseen consequences, such as his type of rhetoric becoming the norm for politicians.

In conclusion, Twitter allows a glimpse into the opinions and emotions of a world leader, which is unprecedented and is a result of the recent and immense popularity of social media. President Trump’s tweeting habits have given us a peek at his personality traits and wider psycholinguistic characteristics, including his use of emotional language. Trump’s presidency has been defined by its Twitter dimension, and the dataset generated by his tweets and the Twitter activity surrounding them is incredibly interesting to study in the context of contemporary world events. From a data-science perspective, it is compelling to be able to examine the current US president due to his tendency to express himself so vocally on Twitter. Nevertheless, as Sean Spicer, his then White House press secretary, put it, “Trump’s tweets speak for themselves.”

REFERENCES

Ahmadian, Sara, Sara Azarshahi, and Delroy L. Paulhus. 2017. “Explaining Donald Trump Via Communication Style: Grandiosity, Informality, and Dynamism.” Personality and Individual Differences 107:49–53. https://doi.org/10.1016/j.paid.2016.11.018.

Azari, Julia. R. 2016. “How the News Media Helped to Nominate Trump.” Political Communication 33 (4): 677–80. https://doi.org/10.1080/10584609.2016.1224417.

Bitecofer, Rachel. 2018. The Unprecedented 2016 Presidential Election. London: Palgrave Macmillan. https://doi.org/10.1007/978-3-319-61976-7.

Bryan, Bob. 2017. “Trump: Everyone Is ‘Getting Rich’ from the Stock Market Except for Me.” Business Insider, July 6, 2017. https://www.businessinsider.com/trump-speech-in-poland-on-stock-market-record-high-unemployment-2017-7.

Castles, Francis G., and Peter Mair. 1984. “Left–Right Political Scales: Some ‘Expert’ Judgments.” European Journal of Political Research 12 (1): 73–88. https://doi.org/10.1111/j.1475-6765.1984.tb00080.x.


Chung, Cindy, and James W. Pennebaker. 2007. “The Psychological Functions of Function Words.” In Frontiers of Social Psychology, edited by K. Fiedler, 343–59. New York: Psychology Press.

Enli, Gunn. 2017. “Twitter as Arena for the Authentic Outsider: Exploring the Social Media Campaigns of Trump and Clinton in the 2016 US Presidential Election.” European Journal of Communication 32 (1): 50–61. https://doi.org/10.1177/0267323116682802.

Giroux, Henry A. 2017. “White Nationalism, Armed Culture and State Violence in the Age of Donald Trump.” Philosophy & Social Criticism 43 (9): 887–910. https://doi.org/10.1177/0191453717702800.

Gorrell, Genevieve, and Kalina Bontcheva. 2016. “Classifying Twitter Favorites: Like, Bookmark, or Thanks?” Journal of the Association for Information Science and Technology 67 (1): 17–25. https://doi.org/10.1002/asi.23352.

Gross, Justin H., and Kaylee T. Johnson. 2016. “Twitter Taunts and Tirades: Negative Campaigning in the Age of Trump.” PS: Political Science & Politics 49 (4): 748–54. https://doi.org/10.1017/S1049096516001700.

Hodges, A. 2017. “Trump Echoes Bush in Middle East Visit.” Anthropology News 58 (3): e299–e303. https://doi.org/10.1111/AN.467.

Hutto, C. J., and Eric Gilbert. 2014. “Vader: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text.” Eighth ICWSM-International AAAI Conference on Weblogs and Social Media, Ann Arbor, US.

Jordan, Kayla N., and James W. Pennebaker. 2017. “The Exception or the Rule: Using Words to Assess Analytic Thinking, Donald Trump, and the American Presidency.” Translational Issues in Psychological Science 3 (3): 312–16. https://doi.org/10.1037/tps0000125.

Kacewicz, Ewa, James W. Pennebaker, Matthew Davis, Moongee Jeon, and Arthur C. Graesser. 2014. “Pronoun Use Reflects Standings in Social Hierarchies.” Journal of Language and Social Psychology 33 (2): 125–43. https://doi.org/10.1177/0261927X13502654.

Kim, Hwalbin, S. Mo Jang, Sei-Hill Kim, and Anan Wan. 2018. “Evaluating Sampling Methods for Content Analysis of Twitter Data.” Social Media+Society 4 (2). https://doi.org/10.1177/2056305118772836.

Klinenberg, Eric. 2005. “Convergence: News Production in a Digital Age.” The Annals of the American Academy of Political and Social Science 597 (1): 48–64.

Kollanyi, Bence, Philip N. Howard, and Samuel C. Woolley. 2016. “Bots and Automation over Twitter during the First U.S. Presidential Debate.” COMPROP Data Memo 2016. The Computational Propaganda Project. https://comprop.oii.ox.ac.uk/research/working-papers/bots-and-automation-over-twitter-during-the-u-s-election.

Kreis, Ramona. 2017. “The ‘Tweet Politics’ of President Trump.” Journal of Language and Politics 16 (4): 607–18.

Lawrence, Regina G., and Amber E. Boydstun. 2017. “What We Should Really Be Asking about Media Attention to Trump.” Political Communication 34 (1): 150–53. https://doi.org/10.1080/10584609.2016.1262700.

Le, Gem. M., Kate Radcliffe, Courtney Lyles, Helena C. Lyson, Byron Wallace, George Sawaya, Rena Pasick, Damon Centola, and Urmimala Sarkar. 2019. “Perceptions of Cervical Cancer Prevention on Twitter Uncovered by Different Sampling Strategies.” PloS ONE 14 (2): e0211931. https://doi.org/10.1371/journal.pone.0211931.

Learmonth, Michael. 2008. “One-way Media Lost the Election as Cable, Interactive Dominated.” Advertising Age, November 10, 2008. https://adage.com/article/media/media-lost-election-cable-interactive-dominated/132350.

Lippmann, Walter. 1922. Public Opinion. New York: Harcourt, Brace and Company.

LIWC. 2018. “Interpreting LIWC Output.” Accessed August 30, 2018. http://liwc.wpengine.com/interpreting-liwc-output.


Marcus, Gary, and Ernest Davis. 2019. Rebooting AI: Building Artificial Intelligence We Can Trust. New York: Pantheon Books.

McCormick, Rich. 2017. “Donald Trump Is Using an iPhone Now.” The Verge, March 29, 2017. https://www.theverge.com/2017/3/29/15103504/donald-trump-iphone-using-switched-android.

Nyhan, Brendan. 2015. “Donald Trump, the Green Lantern Candidate.” The Upshot, The New York Times, August 25, 2015. http://www.nytimes.com/2015/08/26/upshot/donald-trump-the-green-lantern-candidate.html.

Ott, Brian L. 2017. “The Age of Twitter: Donald J. Trump and the Politics of Debasement.” Critical Studies in Media Communication 34 (1): 59–68. https://doi.org/10.1080/15295036.2016.1266686.

Paletta, Damian, and Ana Swanson. 2017. “The Economy President Trump Loves Looks a Lot Like the One Candidate Trump Hated.” Washington Post, July 4, 2017. https://www.washingtonpost.com/news/wonk/wp/2017/07/03/the-economy-president-trump-loves-looks-a-lot-like-the-one-candidate-trump-hated/?utm_term=.362ff7dd7da5.

Pennebaker, James W. 2011. The Secret Life of Pronouns: What Our Words Say About Us. New York: Bloomsbury.

Pennebaker, James W., Ryan L. Boyd, Kayla Jordan, and Kate Blackburn. 2015. The Development and Psychometric Properties of LIWC2015. Austin: University of Texas at Austin.

Persily, Nathaniel. 2017. “The 2016 US Election: Can Democracy Survive the Internet?” Journal of Democracy 28 (2): 63–76. https://www.journalofdemocracy.org/articles/the-2016-u-s-election-can-democracy-survive-the-internet.

Puschmann, Cornelius, and Alison Powell. 2018. “Turning Words into Consumer Preferences: How Sentiment Analysis Is Framed in Research and the News Media.” Social Media + Society 4 (3). https://doi.org/10.1177/2056305118797724.

Ribeiro, Filipe N., Matheus Araújo, Pollyanna Gonçalves, and Fabricio Benevenuto. 2016. “SentiBench-A Benchmark Comparison of State-of-the-Practice Sentiment Analysis Methods.” EPJ Data Science 5 (23). https://doi.org/10.1140/epjds/s13688-016-0085-1.

Riffe, Daniel, Stephen Lacy, Frederick Fico, and Brendan Watson. 2014. Analyzing Media Messages: Using Quantitative Content Analysis in Research. 3rd Edition. New York: Routledge Press.

Robinson, David. 2016. “Text Analysis of Trump’s Tweets Confirms He Writes Only the (Angrier) Android Half.” VarianceExplained (Blog), August 9, 2016. http://varianceexplained.org/r/trump-tweets.

Roozenbeek, Jon, and Adrià Salvador Palau. 2017. “I Read It on Reddit: Exploring the Role of Online Communities in the 2016 US Elections News Cycle.” International Conference on Social Informatics 10540 (1): 192–220. https://doi.org/10.1007/978-3-319-67256-4_16.

Rosenberg, Stanley, Paula P. Schnurr, and Thomas E. Oxman. 1990. “Content Analysis: A Comparison of Manual and Computerized Systems.” Journal of Personality Assessment 54 (1–2): 298–310. https://doi.org/10.1207/s15327752jpa5401&2_28.

Ross, Andrew S., and Damian J. Rivers. 2018a. “Discursive Deflection: Accusation of ‘Fake News’ and the Spread of Mis- and Disinformation in the Tweets of President Trump.” Social Media + Society 4 (2). https://doi.org/10.1177/2056305118776010.

Ross, Andrew S., and Damian J. Rivers. 2018b. “Political Twitter Discourse Corpus (PTDC).” https://www.damianrivers.com/ptdc.

Struyk, Ryan. 2017. “Note to Trump: The Stock Market Has Hit an All-time High in 30 of the Last 54 Months.” CNN, August 2, 2017. http://edition.cnn.com/2017/08/02/politics/stock-market-highs-months/index.html.


Sykora, Martin, Thomas W. Jackson, Ann O’Brien, and Suzanne Elayan. 2013. “Emotive Ontology: Extracting Fine-grained Emotions from Terse, Informal Messages.” International Journal on Computer Science and Information Systems 8 (2): 106–18.

Sykora, Martin, Thomas W. Jackson, Ann O’Brien, Suzanne Elayan, and Alexander Lunen. 2014. “Twitter Based Analysis of Public, Fine-grained Emotional Reactions to Significant Events.” European Conference on Social Media–ECSM 2014, Brighton, United Kingdom.

Tausczik, Yla R., and James W. Pennebaker. 2010. “The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods.” Journal of Language and Social Psychology 29 (1): 24–54. https://doi.org/10.1177/0261927X09351676.

Thelwall, Michael. 2009. “Introduction to Webometrics: Quantitative Web Research for the Social Sciences.” Synthesis Lectures on Information Concepts, Retrieval, and Services 1 (1): 1–116. https://doi.org/10.2200/S00176ED1V01Y200903ICR004.

Thoemmes, Felix J., and Lucian Gideon Conway. 2007. “Integrative Complexity of 41 US Presidents.” Political Psychology 28 (2): 193–226. https://doi.org/10.1111/j.1467-9221.2007.00562.x.

Trump, Donald (@realDonaldTrump). 2017. “Don’t let the fake media tell you that I have changed my position on the WALL. It will get built and help stop drugs, human trafficking etc.” April 25, 2017, 1:36pm. https://twitter.com/realDonaldTrump/status/856849388026687492.

———. 2017. “Dow hit a new intraday all-time high! I wonder whether or not the Fake News Media will so report?” July 3, 2017, 10:10pm. https://twitter.com/realDonaldTrump/status/881983493533822976.

———. 2017. “I just finished a great meeting with the Republican Senators concerning HealthCare. They really want to get it right, unlike OCare!” June 27, 2017, 11:27pm. https://twitter.com/realDonaldTrump/status/879828637733793793.

———. 2017. “If the US. does not win this case as it so obviously should, we can never have the security and safety to which we are entitled. Politics!” February 8, 2017, 12:03pm. https://twitter.com/realDonaldTrump/status/829299566344359936.

———. 2017. “James Comey leaked CLASSIFIED INFORMATION to the media. That is so illegal!” July 10, 2017, 11:40am. https://twitter.com/realDonaldTrump/status/884361623514656769.

———. 2017. “Law enforcement & military did a spectacular job in Hamburg. Everybody felt totally safe despite the anarchists. @PolizeiHamburg #G20Summit.” July 8, 2017, 7:50pm. https://twitter.com/realDonaldTrump/status/883760134333378561.

———. 2017. “Proud to welcome our great Cabinet this afternoon for our first meeting. Unfortunately 4 seats were empty because Senate Dems are delaying!” March 13, 2017, 8:57pm. https://twitter.com/realDonaldTrump/status/841392683625172992.

———. 2017. “SEE YOU IN COURT, THE SECURITY OF OUR NATION IS AT STAKE!” February 9, 2017, 11:35pm. https://twitter.com/realDonaldTrump/status/829836231802515457.

———. 2017. “Sorry folks, but if I would have relied on the Fake News of CNN, NBC, ABC, CBS, washpost or nytimes, I would have had ZERO chance winning WH.” June 6, 2017, 1:15pm. https://twitter.com/realDonaldTrump/status/872064426568036353.

———. 2017. “Thank you for all of the nice statements on the Press Conference yesterday. Rush Limbaugh said one of greatest ever. Fake media not happy!” February 17, 2017, 11:43am. https://twitter.com/realDonaldTrump/status/832555987299082242.

———. 2017. “The crackdown on illegal criminals is merely the keeping of my campaign promise. Gang members, drug dealers & others are being removed!” February 12, 2017, 11:34am. https://twitter.com/realDonaldTrump/status/830741932099960834.


———. 2017. “The Democrats have become nothing but OBSTRUCTIONISTS, they have no policies or ideas. All they do is delay and complain. They own ObamaCare!” June 26, 2017, 11:30pm. https://twitter.com/realDonaldTrump/status/879315860178993152.

———. 2017. “The judge opens up our country to potential terrorists and others that do not have our best interests at heart. Bad people are very happy!” February 5, 2017, 12:48am. https://twitter.com/realDonaldTrump/status/828042506851934209.

———. 2017. “The weak illegal immigration policies of the Obama Admin. allowed bad MS 13 gangs to form in cities across US. We are removing them fast!” April 18, 2017, 10:39am. https://twitter.com/realDonaldTrump/status/854268119774367745.

Twitter. 2017. “Twitter Developer Documentation: REST APIs.” August 8, 2017. https://web.archive.org/web/20170812202449/https://dev.twitter.com/rest/public.

Van Dijk, Teun A. 1985. “Semantic Discourse Analysis.” In Handbook of Discourse Analysis, edited by Teun van Dijk, 103–36. London: Academic Press.

Wang, Yu, Yuncheng Li, and Jiebo Luo. 2016. “Deciphering the 2016 US Presidential Campaign in the Twitter Sphere: A Comparison of the Trumpists and Clintonists.” arXiv. https://arxiv.org/abs/1603.03097.

Wang, Yu, Jiebo Luo, Richard Niemi, Yuncheng Li, and Tianran Hu. 2016. “Catching Fire via ‘Likes’: Inferring Topic Preferences of Trump Followers on Twitter.” Tenth International AAAI Conference on Web and Social Media, Cologne, Germany.

Weeks, Brian E., and Homero Gil de Zúñiga. 2019. “What’s Next? Six Observations for the Future of Political Misinformation Research.” American Behavioral Scientist. https://doi.org/10.1177/0002764219878236.

Wells, Chris, Dhavan V. Shah, Jon C. Pevehouse, JungHwan Yang, Ayellet Pelled, Frederick Boehm, Josephine Lukito, Shreenita Ghosh, and Jessica L. Schmidt. 2016. “How Trump Drove Coverage to the Nomination: Hybrid Media Campaigning.” Political Communication 33 (4): 669–76. https://doi.org/10.1080/10584609.2016.1224416.

Zamith, Rodrigo, and Seth C. Lewis. 2015. “Content Analysis and the Algorithmic Coder: What Computational Social Science Means for Traditional Modes of Media Analysis.” The ANNALS of the American Academy of Political and Social Science 659 (1): 307–18. https://doi.org/10.1177/0002716215570576.

Zarella, Dan. 2016. “Is 22 Tweets-per-Day the Optimum?” Hubspot Blog, October 20, 2016. https://blog.hubspot.com/blog/tabid/6307/bid/4594/is-22-tweets-per-day-the-optimum.aspx.

ABOUT THE AUTHORS

Dr. Suzanne Elayan: Research Fellow, Centre for Information Management, Loughborough University, Loughborough, Leicestershire, UK

Dr. Martin Sykora: Associate Professor, Centre for Information Management, Loughborough University, Loughborough, Leicestershire, UK

Dr. Tom Jackson: Professor, Centre for Information Management, Loughborough University, Loughborough, Leicestershire, UK


