
Why phishing still works: user strategies for combating phishing attacks

Mohamed Alsharnouby, Furkan Alaca, Sonia Chiasson

School of Computer Science

Carleton University, Ottawa, Canada

Abstract

We have conducted a user study to assess whether improved browser security indicators and increased awareness of phishing have led to users’ improved ability to protect themselves against such attacks. Participants were shown a series of websites and asked to identify the phishing websites. We use eye tracking to obtain objective quantitative data on which visual cues draw users’ attention as they determine the legitimacy of websites. Our results show that users successfully detected only 53% of phishing websites even when primed to identify them and that they generally spend very little time gazing at security indicators compared to website content when making assessments. However, we found that gaze time on browser chrome elements does correlate to increased ability to detect phishing. Interestingly, users’ general technical proficiency does not correlate with improved detection scores.

Keywords: phishing, eye tracking, usable security, user study

1. Introduction

An important aspect of online security is to protect users from fraudulent websites and phishing attacks. Phishing is a “criminal mechanism employing both social engineering and technical subterfuge to steal consumers’ personal identity data and financial account credentials” [2]. While advances in the automated detection of phishing websites have resulted in improved security, these automated means are not fool-proof and users must be vigilant in protecting themselves in this arms race [21]. According to the Anti-Phishing Working Group, phishing attacks remain widespread: 42,890 unique phishing websites were reported in December 2013, with the financial and online payments sectors accounting for nearly 80% of targeted industries [2].

Modern web browsers provide tools to assist users in making informed security decisions. For example, visual indicators within the URL bar and the SSL padlock have been designed to allow users to judge the legitimacy of websites. Unfortunately, these indicators have been only partially successful at helping to prevent phishing. Poor usability may allow phishing websites to masquerade as legitimate websites and deceive users into divulging their personal information. Earlier browser security indicators have been shown in previous studies to be ineffective, putting users at a higher risk of falling victim to phishing attacks [45, 32, 15].

∗Sonia Chiasson is corresponding author.
PREPRINT. Final version: doi:10.1016/j.ijhcs.2015.05.005
Citation: Mohamed Alsharnouby, Furkan Alaca, and Sonia Chiasson. Why phishing still works: User strategies for combating phishing attacks. International Journal of Human-Computer Studies, 82:69–82, 2015.
Email address: [email protected], [email protected], [email protected] (Mohamed Alsharnouby, Furkan Alaca, Sonia Chiasson)

This is compounded by the fact that security is a secondary task for most users [47]. Users who are concentrating on the real purpose of their online interaction, such as making a purchase, are unlikely to notice security indicators. Furthermore, some security indicators are visible only when the website is secure. The absence of a security indicator, as is possible with phishing websites, is even less likely to be noticed by users. Therefore, developing usable browser security cues to combat phishing attacks remains an important and unsolved problem in usable security, as is understanding how users make determinations about the legitimacy of websites [38].

To inform the design of improved techniques against phishing, we explored the strategies employed by users to identify phishing attacks. We showed participants a series of websites and asked them to identify whether each one is legitimate or fraudulent. This paper makes several distinct contributions to the literature. First, we evaluate the effectiveness of recent changes that have been made in web browser designs to help users identify fraudulent websites. Secondly, we assess whether users have developed improved detection strategies and mental models of phishing nearly a decade after Dhamija et al. [13]’s initial phishing study. And finally, we are the first to use eye tracking data to obtain quantitative information on which visual security indicators draw the most attention from users as they determine the legitimacy of websites. Based on our results, we identify aspects in which security indicators in modern web browsers have improved, identify areas for potential improvement, and make recommendations for future designs.

(Preprint submitted to International Journal of Human-Computer Studies, June 23, 2015.)

The remainder of this paper is organized as follows: Section 2 reviews related work on phishing detection and tools to aid users in identifying phishing websites. Section 3 details our study methodology. Section 4 provides analysis and interpretation of our quantitative and qualitative data. Section 5 discusses some ideas for future web browser designs, while Section 6 concludes the paper.

2. Related Work

Research on protecting users against phishing attacks has taken four complementary approaches: automating phishing detection, providing user interface cues to help users detect phishing, educating users about how to protect themselves, and understanding users’ susceptibility to phishing to inform the design of protection mechanisms. Our work falls within the scope of the fourth area, but we also provide a brief overview of the other areas to give context to our work. For a general introduction, see Hong [21]’s article, or for a more complete recent review of the phishing literature, see Purkait [38]’s literature survey.

2.1. Automated phishing detection

The first line of defense against phishing should be automated detection; users cannot fall for phishing attacks if they never see the attacks. Automatic phishing detectors exist at several different levels: mail servers and clients, internet service providers, and web browser tools. Tools may block access to a detected phishing website and/or request that the website’s internet service provider take down the website [34].

Automatic email classification tools commonly use machine learning techniques [16], statistical classifiers [8], and spam filtering techniques [11] to identify potential phishing messages with varying degrees of effectiveness as the threat continues to evolve. Misclassifications affect the perceived reliability of the service, and users are likely to be quite intolerant of “losing” legitimate messages.

Techniques to detect phishing websites include blacklists, machine learning [46], URL feature classification and domain name analysis, visual similarity assessment [17], contextual analysis and user behavioural prediction [29], and crowdsourcing [37]. Some blacklists, such as Google’s [46], use automated machine learning. PhishTank [37] offers a blacklist for use by other tools through an API. Its blacklist is populated through crowdsourcing: volunteers submit potential phishing websites and vote on the legitimacy of websites.
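To give a flavour of URL feature classification, the sketch below scores a URL with a few rule-of-thumb features. The feature set and weights are our own illustration, not those of any cited system; production classifiers such as Google’s use far richer, trained models.

```python
import re
from urllib.parse import urlparse

def suspicion_score(url: str) -> int:
    """Toy phishing-URL score from hand-picked features (illustrative only)."""
    host = urlparse(url).hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2  # raw IP address instead of a domain name
    if host.count(".") >= 3:
        score += 1  # unusually deep subdomain nesting
    if "@" in url:
        score += 1  # user-info trick: the real host follows the '@'
    if not url.startswith("https://"):
        score += 1  # no TLS
    return score
```

A real deployment would combine many such signals with a trained model and a blacklist lookup; the point here is only that URL structure alone already carries useful evidence.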

Web browsers maintain their own blacklists and heuristics for detecting phishing, displaying warnings to users if they reach a known phishing page. Detection rates have improved considerably over the last 5 years. NSS Labs [36] conducts independent tests and found that the major browsers had an average phishing detection rate of approximately 90%, with zero-hour block rates above 70%. Third-party add-ons are also available. Sheng et al. [42] evaluated the effectiveness of eight different browser tools and found them generally slow at detecting new phishing campaigns. This is problematic given that the median lifetime of a phishing campaign is about 12 hours [36], with many as short as 2 hours.

While successful at stopping a large number of attacks from reaching users, automated methods are insufficient as the sole means of protecting users. Secondary methods involving users are necessary for times when automatic detection fails.

2.2. Security indicators

There have been a number of studies regarding phishing and the usability of browser security cues. Herzberg [20] provides an overview of several studies.

At its core, phishing is a threat because users are unable to verify the authenticity of the website asking for their credentials. Dhamija and Tygar [12] first proposed Dynamic Security Skins, a browser extension that allows websites to display a secret image and customizes the browser chrome. Variations of this secret image method have now been deployed by banks and major organizations (e.g., Sitekey [7] and Yahoo Sign-in Seals [51]). Anecdotal evidence suggests that some users may still fall victim to phishing websites that claim the image database is down for maintenance or that simply omit this feature, since the absence of a cue may not trigger attention. Many browser toolbars (e.g., [10, 52, 31, 24, 25]) have also been proposed to protect against phishing, each with limited success. User studies by Wu et al. [50], Li and Helenius [31], and Li et al. [30] found that security toolbars intended to prevent phishing attacks were ineffective and identified several usability problems. While users may occasionally pay attention to the indicators, accomplishing their primary task often gets prioritized, and in these cases users look for visual signs reinforcing the website’s trustworthiness rather than heeding warnings to the contrary [25]. Abbasi et al. [1] compared users’ ability to detect phishing given high- or low-performing browser toolbars and found that users were more successful with the high-performing toolbar. However, users still ignored the toolbar’s advice 15% of the time, instead believing that their own intuition was more accurate.

Others have explored the browsers’ built-in security indicators. Lin et al. [32] examined the effectiveness of domain highlighting that is now included in most browsers. They found it to be only marginally successful when users’ attention was explicitly drawn to the address bar. Egelman [15] explored various online trust indicators, including web browser phishing warnings and SSL warnings. They found that 97% of users were fooled by at least one attack, but that active warnings which interrupt users’ tasks were more effective than passive warnings.
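Domain highlighting works by visually de-emphasizing everything in the URL except the registrable domain. A minimal sketch of the underlying split is below; it naively treats the last two host labels as the registrable domain, whereas real browsers consult the Public Suffix List to handle cases like .co.uk.

```python
from urllib.parse import urlparse

def split_for_highlighting(url: str) -> tuple:
    """Return (de-emphasized prefix, registrable domain) for display.

    Naive sketch: assumes the registrable domain is the last two
    host labels; browsers actually use the Public Suffix List.
    """
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    registrable = ".".join(labels[-2:])
    prefix = ".".join(labels[:-2])
    return prefix, registrable
```

Run on a classic look-alike URL, the split makes the deception visible: the trusted brand name ends up in the de-emphasized prefix, not the highlighted domain.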

Although addressing a tangential issue, password managers [52, 39] can offer protection against phishing by storing both the user’s credentials and the legitimate URL at which these credentials should be used. Users attempting to use their password manager at a phishing website will either be warned against a suspicious website or the password manager will supply incorrect credentials.
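The protection comes from binding credentials to the origin recorded at save time. The sketch below illustrates the check with a hypothetical in-memory vault; real password managers compare full origins rather than bare hostnames and have subdomain-matching policies.

```python
from urllib.parse import urlparse

# Hypothetical saved-credentials store: hostname -> (username, password)
vault = {"www.paypal.com": ("alice", "s3cret")}

def autofill(url: str):
    """Release credentials only when the current host exactly matches
    the host recorded when the credentials were saved."""
    host = urlparse(url).hostname or ""
    return vault.get(host)  # None at any phishing look-alike host
```

Because the lookup key is the saved host, a visually convincing look-alike domain simply gets no autofill, which is the warning signal described above.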

Efforts to reduce phishing at the email level are also popular, but these typically require minimal user involvement beyond needing to occasionally check spam-filtered mail and potentially update spam filters. Email encryption and digital signing can help protect users against phishing and other attacks, but these are plagued with usability issues and are not widely used [18].

2.3. Anti-phishing education

Although educational efforts are unlikely to solve the phishing problem on their own, vigilant users form an important part of the defensive strategy. Both research efforts and public education campaigns (e.g., [3, 19]) have focused on teaching users how to protect themselves against phishing attacks. PhishGuru [27, 26, 28] embeds phishing education within the primary task of receiving phishing email, and results show that the educational material is most impactful if delivered immediately after users have fallen for a phishing attack, a method now deployed on a large scale by the Anti-Phishing Working Group [4]’s landing page.

Sheng et al. [41] developed Anti-Phishing Phil, a web-based game to teach about phishing attacks, and show that users who played the game were better able to identify phishing websites immediately after playing the game and one week later. Some independent evidence of its effectiveness is provided by Mayhorn and Nyeste [33]. A mobile version of the game was also developed [5].

In attempts to formalize the educational process, Arachchilage and Love [6] are working towards a game design framework based on user motivations. Furthermore, Burns et al. [9] propose an intervention model describing the most effective types of interventions based on users’ stage of knowledge.

2.4. Understanding user behaviour

A significant assumption by attackers is that they will be able to deceive users through websites with visual characteristics sufficiently believable to be accepted as legitimate. As discussed below, early studies showed that users were trusting websites based on quick visual assessments that did not necessarily focus on the most reliable indicators. Some research has focused on characteristics of phishing attacks that have increased likelihood of success, while other research seeks to determine characteristics of users that place them at increased risk. It is generally agreed, however, that users are poor at detecting phishing attacks. Users’ susceptibility to phishing has been explored using several methods, including lab (e.g., [23, 13, 45]) and field studies (e.g., [49, 22]), surveys (e.g., [48, 44, 14]), and Mechanical Turk studies (e.g., [40]).

As it is the gateway for many phishing attacks, several studies have explored users’ likelihood of falling for phishing emails. Personalization of email content, urgency cues, and email load all contribute to increased susceptibility, as does low technical expertise [44]. Jagatic et al. [22] offer one of the earliest investigations of user behaviour with respect to phishing. Their field study simulated a targeted phishing attack against unsuspecting university students who received email apparently from an acquaintance. Results show that users were significantly more likely to fall for targeted attacks than generic phishing scams. Vishwanath et al. [44] completed a survey of intended victims who had recently been targets of two real email phishing campaigns on a university campus. Decisions about phishing were driven by users’ motivation, beliefs, prior knowledge, and experiences. They further argue that creating habitual rituals of safer behaviour may be more successful than encouraging constant vigilance and alertness.

On the other hand, Downs et al. [14]’s survey study found that technical knowledge of the web environment led to increased resistance against phishing, and suggested education on how to interpret browser cues as a preventative technique. Similarly, Wright and Marett [49] conducted a field study where university students were sent phishing email purportedly from a system administrator, and found that an increased level of web experience and security awareness led users to successfully detect the attack. In a role-playing scenario, Sheng et al. [40] asked MTurk workers to suggest their likely course of action in response to screenshots of email messages. They evaluated phishing susceptibility against demographic characteristics and found that prior exposure to educational material and a general aversion to risk led people to better detect phishing attempts.

Other studies have explored users’ responses to phishing within the web browser. In 2006, Dhamija et al. [13] conducted a lab-based study where participants were asked to assess the legitimacy of a series of websites. Participants were primed on the purpose of the task, so this was clearly a “best-case scenario” which tested users’ ability to detect phishing rather than the users’ usual practice when encountering websites. Regardless, 42% of the websites were incorrectly classified by users. Using self-reports and observation, it was determined that 59% of users relied solely on the webpage content and the URL to assess legitimacy, ignoring any security cues provided by the browser. In 2007, Jakobsson et al. [23] also asked users to assess the legitimacy of emails and websites in a lab environment. Users reported relying on the content of the emails and websites, being suspicious when too much emphasis was placed on security, and trusting signs of personalization or familiar third-party endorsements.

Phishing is now a commonly known attack, discussed in mass media, and most users are familiar with the risk. Have users become more savvy as a result of this familiarity? Are they more capable of protecting themselves than they were a decade prior? We have followed a methodology similar to that used by Dhamija et al. [13], but have collected eye tracking data to supplement participants’ self-reported data specifically as they were assessing the likelihood of phishing. Moreover, since there have been a number of design changes to web browser interfaces in recent years aimed at increasing security, our study examines whether these have led to improved phishing detection by users.

2.5. Eye tracking in phishing studies

To our knowledge, only two related studies have used the eye tracker as a data collection method, although neither explicitly looked at phishing. Sobey et al. [43] used an eye tracker to compare their proposed Extended Validation Certificate interface with Firefox’s existing interface. They found that the eye tracker data confirmed users’ reported experiences. Users who reported viewing the indicators did gaze at them, but the majority of users’ time was spent gazing at the content of the page rather than the browser chrome. Whalen and Inkpen [45] explored users’ use of security cues while completing web transactions. Using eye tracking data, they found that two thirds of users looked at the SSL lock icon when prompted to be security-conscious but rarely used other cues on the browser chrome.

3. Methodology

We conducted an eye tracking usability study to investigate which strategies users rely on to determine the legitimacy of websites. The study’s methodology was approved by the Carleton Research Ethics Board.

3.1. Overview of Study

Our tests were conducted on a standard Windows XP desktop computer equipped with a Tobii 1750 eye tracker. The eye tracker was used to record the participants’ gaze information while viewing the entirety of the websites. The only software which the participants used during the session was a web browser maximized to full-screen. We built a web-based interface, as seen in Figure 1, to allow participants to navigate easily between the test websites and to launch them one at a time. When designing the web interface, we took the following precautions to reduce possible sources of bias: (1) we randomized the order of the websites for each participant, and (2) we showed the websites one at a time to hide the fact that some websites appeared more than once. We presented participants with a total of 24 websites: 10 were legitimate and 14 were phishing websites. The participants were asked to determine whether each website was legitimate or fraudulent, and asked to explain how they arrived at their decision. They were also asked to rate their level of certainty in their decision on a scale from 1 (not at all certain) to 5 (very certain).

Figure 1: Web interface where users navigate between test websites.

Each session lasted approximately one hour, with two experimenters present. One experimenter was responsible for the technical aspects of the experiment (e.g., to assist the participant if they closed the main browser window by accident, or to remind the participant to adjust their posture if the eye tracker could no longer detect their eyes) and for asking the appropriate questions for each website (e.g., “Do you think that was a legitimate website or a phishing website? How confident are you in your decision?”). The other experimenter was responsible for recording the participant’s answers and noting down real-time observations from the eye tracker when possible. After the participant had viewed all of the test websites, they were asked questions in a semi-structured post-session interview. Each session was audio-recorded to allow missed details to be transcribed at a later time. At the end of the interview, we provided participants with $15 and a debriefing form with our contact information. We also informed them about how well they performed, and explained how they can use browser security cues to protect themselves. We note that this setup represents a best case scenario where participants are primed to detect attacks, representing an upper bound on their ability in real life rather than their usual habits.
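The two ordering precautions from Section 3.1 amount to a fresh per-participant shuffle with single-site presentation. A minimal sketch of that procedure (the site list here is abbreviated and hypothetical, not the study’s actual stimulus set):

```python
import random

def presentation_order(websites, seed=None):
    """Return a fresh per-participant ordering of the stimulus list.

    Showing one site at a time (rather than a visible list) also hides
    the fact that some sites appear more than once.
    """
    order = list(websites)  # copy so the master list is untouched
    random.Random(seed).shuffle(order)
    return order
```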

Online habits
  Category           Characteristic             Percentage
  Primary Browser    Chrome                     52
                     Firefox                    28
                     Internet Explorer          10
                     Safari                     10
  Online Use         Banking                    100
                     Shopping                   86
  Operating System   Windows                    86
                     Mac OS                     14
  Online Attack      Account break-in           19
                     Credit card fraud          14
                     Identity theft             0
  Proficiency        Designed website           57
                     Changed firewall settings  43
                     Installed OS               38
                     Registered domain name     10
                     Used telnet/SSH            10
  Website            Facebook                   86
                     Amazon                     76
                     cuLearn                    72
                     Kijiji                     67
                     Paypal                     38
                     Ebay                       38
                     Twitter                    33
                     Instagram                  14

Demographics
  Sex                Male                       9
                     Female                     12
  Age (mean)                                    27
  Field              Technical                  4
                     Non-Technical              17

Table 1: Demographics and online habits of our participants

3.2. Participants

The 21 participants (12 female, 9 male) were primarily recruited from Carleton University through posters and e-mail. A pre-session questionnaire was emailed to respondents to collect demographic information, web browser preferences, and data on technical proficiency. Portions of this questionnaire were adapted from Egelman [15]’s questionnaire assessing technical proficiency, and each participant was assigned a technical proficiency score from zero (lowest) to five (highest). Participants’ ages ranged from 18 to 51; the mean age was 27 (σ = 10.19). Fourteen participants were students (undergrad or grad) and seven were professionals. Participants had a wide range of backgrounds; four participants were in engineering fields (electronics, biomedical, physics and electrical) and seventeen were in non-technical fields. None had any specialization in computer science or computer engineering.

Additional information about our participants’ online habits is summarized in Table 1. The majority of participants used either Chrome or Firefox as their primary browser; 85% of our participants reported also using a secondary browser. All participants reported using online banking, and most shop online. Few of our participants have knowingly been victims of online attacks. Their technical proficiency appears in line with everyday computer users; many have designed a website and adjusted firewall settings, but few have used telnet or registered a domain name. Technical proficiency scores ranged from 0 to 5 with a mean score of 1.6 (σ = 1.47).
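The 0–5 proficiency score is consistent with simply counting affirmative answers to the five self-reported proficiency activities in Table 1. A sketch under that assumption (the questionnaire was adapted from Egelman [15], so the actual items and weighting may differ):

```python
# Assumed item names; the five activities match the Proficiency rows of Table 1.
PROFICIENCY_ITEMS = (
    "designed_website", "changed_firewall_settings",
    "installed_os", "registered_domain", "used_telnet_or_ssh",
)

def proficiency_score(answers: dict) -> int:
    """Count affirmative answers to the five proficiency questions,
    yielding a score from 0 (lowest) to 5 (highest)."""
    return sum(bool(answers.get(item)) for item in PROFICIENCY_ITEMS)
```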

This questionnaire was completed ahead of the lab sessions and we used it to tweak the test websites used in the study. For example, most of our participants were not users of Instagram, so we excluded it from the study.

3.3. Tasks

The participants received verbal instructions during the test. We briefly explained that phishing is typically characterized by a fraudulent website which mimics a legitimate website in an attempt to trick users into revealing personal information such as passwords or banking information. We asked participants to imagine that they had visited each of the 24 websites through an email message or some other link. Their task was to decide whether each was a legitimate website or a phishing attempt, and to explain the reasons behind their decision. If we noticed participants trying to enter their own credentials into any of the websites, we advised against it and told them to enter fake information if they wished. We did not log any credentials entered. This scenario is similar to the original study by Dhamija et al. [13].

3.4. Experimental setup

We tailored our websites to ensure that several would be familiar to participants. We included the Carleton University website, major Canadian banks, and a number of websites from the Alexa top 500 list in Canada.1 The complete list of websites used in the experiment is available in Table 2.

1 http://www.alexa.com/topsites/countries/CA

All of the phishing websites were hosted from an Apache web server running on a laptop. We also set up a certificate authority on our laptop, and used it to issue certificates for some of our phishing websites. In fact, in order to show the ease with which it is possible to obtain a domain-validation certificate from a reputable provider without extensive verification, we purchased www.cartelon.ca as well as a corresponding SSL certificate (both from GoDaddy.com). To prepare the desktop computer used for our experiment, we added our own certificate authority’s certificate to the web browser and modified the hosts file to route requests for the phishing websites to the laptop’s web server.
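The cartelon.ca/carleton.ca swap is a single transposition, which is why it is so easy to miss. A sketch of how such near-miss domains could be flagged by edit distance (the threshold and trusted-domain list are illustrative, not a mechanism used in the study):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(host: str, trusted=("carleton.ca",)) -> bool:
    """Flag hosts within 2 edits of a trusted domain (but not the domain itself)."""
    return any(0 < edit_distance(host, t) <= 2 for t in trusted)
```

Under pure Levenshtein distance the transposed pair "cartelon.ca"/"carleton.ca" differs by two substitutions, comfortably inside a small typosquatting threshold.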

3.4.1. Web browser selection

At the time of the study, there were four major web browsers: Microsoft Internet Explorer 10, Mozilla Firefox version 10.0.12, Google Chrome version 26.0.1410.43, and Apple Safari version 6.0.3. We chose to use Google Chrome. Since our eye tracker could only run on Windows XP, it was technically infeasible to create a remote desktop connection to an Apple OS X based machine from the eye tracker; network latency would have caused delays and undesirable experiences for our participants. We also ruled out Internet Explorer, due to its apparent unpopularity among members of the Carleton University community. To choose between Firefox and Chrome, we performed a user interface comparison and studied the differences between how they display visual security indicators (although we had already ruled out Internet Explorer, we included it in our comparison because of its high market share).

SSL Lock. We found that while Chrome uses a vivid green SSL lock with well-defined edges, Firefox and Internet Explorer both use a dull grey lock which could easily go unnoticed. As can be seen in Figure 2, Firefox and Chrome both display the SSL lock on its own on the left-hand side of the URL box, whereas Internet Explorer displays it on the right-hand side of the URL box, buried between a number of other icons which appear in the same shade of grey (strangely, however, the Internet Explorer SSL lock is more prominent in the Windows XP theme as compared to the more recent Windows 7 theme).

Https and URL. Chrome displays the https portion of the URL in green, whereas Firefox displays it in grey and Internet Explorer displays it in black. All three browsers display the domain name in black text (but Chrome also includes the subdomains) and the remainder of the URL in grey text.

Type  Website                   Description

Phishing:
1     Scotiabank (B)            Real website in iframe, overlaid by another iframe with malicious login link.
1     TD (B)                    Real website in iframe, overlaid by another iframe with malicious login link.
1     RBC (B)                   Home and login pages replicated, all other links point to real website.
1     CIBC (B)                  Home and login pages replicated, all other links point to real website.
1     HSBC (B)                  Home and login pages replicated, all other links point to real website.
1     cuLearn (U)               Mistyped URL (cartelon.ca instead of carleton.ca).
1     Carleton Portal (U)       Mistyped URL (cartelon.ca instead of carleton.ca).
1     Paypal (Favicon)          Favicon set to green padlock.
1     Gov. of Canada (theft)    Sign-up page for identity theft protection. Asks for Social Insurance Number.
2     Paypal (IP)               IP address as URL.
3     Amazon                    Browser chrome replicated in page contents to spoof a new window.
4     Carleton University (U)   Legitimate home page with fraudulent popup asking for login credentials.
5     Facebook (popup)          Legitimate website overlaid with a phishing pop-up window.
6     Credit Card Checker       Asks for credit card details to check if card has been compromised.

Legitimate:
7     Research Survey (U)       Legitimate research survey page with self-signed certificate.
7     Gov. of Canada            Legitimate, non-SSL.
7     Facebook                  Legitimate, with SSL.
7     HSBC (legit) (B)          Legitimate, with EV SSL certificate.
7     LinkedIn (non-SSL)        Legitimate, non-SSL.
7     LinkedIn (SSL)            Legitimate, with SSL.
7     Pinterest                 Legitimate, non-SSL.
7     Kijiji                    Legitimate, non-SSL.
7     Netflix                   Legitimate, non-SSL.
7     Twitter                   Legitimate, with EV SSL certificate.

Table 2: Websites shown to participants, with corresponding phishing techniques. (B) = bank website, (U) = university website. Types are described in Section 3.4.2.

EV Certificates. We compared SSL indicators for extended validation (EV) certificates, and found that they are virtually identical in Chrome and Firefox; both display a prominent green rectangle containing the company name on the left-hand side of the URL box. Internet Explorer turns the entire URL bar green and places the company name on the right-hand side of the URL box.

Certificate warnings. Regarding SSL certificate warnings, we found that Firefox does not allow the user to proceed to the website without adding a permanent exception for that website, and never again presents any warning indicators for subsequent visits. Chrome and Internet Explorer, however, do not prompt the user to add an exception to proceed. Even after proceeding, Chrome overlays a red cross over the https portion of the URL and Internet Explorer turns the entire URL bar red. Moreover, we found that Chrome's certificate warning page, which has a deep red background, seems much more intimidating than in the other two browsers.

Favicon. It was reported by Dhamija et al. [13] that many users were easily deceived when phishing websites set their favicon to be a padlock. This problem appears to be taken into account by Chrome and Firefox, which have both moved the favicon from the URL box to the tab corresponding to the page, thereby separating the locations of the favicon and SSL padlock. However, Internet Explorer still places the favicon on the left-hand side of the URL bar, and it is much more prominent than the dull-grey SSL lock on the right-hand side of the URL.

Overall summary. We believe that Chrome and Firefox do a better job of locating the SSL indicators, since the https, lock icon, and EV certificate are displayed cleanly on the left-hand side before the URL. Chrome, however, makes more extensive use of colour. A final difference that we noticed regarding the URL bar is that Chrome uses a visibly larger font compared to Firefox and Internet Explorer. We also compared the layout of the user interface in Chrome with that of Firefox and found them to be virtually identical. Given the nature of our experiment, where we specifically instructed our participants to identify phishing websites, we decided that it would be consistent with our objectives to pick the browser with the most prominent security indicators, which we believe is Chrome.

3.4.2. Phishing techniques and categories of attacks

The following is an overview of the general categories of websites which we tested in our study. In an effort to test for all possible web browser security cues, we employed a number of common techniques such as incorrect and typejacked URLs, overlaid iframes and popups, missing SSL certificates, and SSL certificate errors. Table 2 summarizes which test websites fell into each category.

Type 1) Spoof websites with incorrect URL: For phishing websites in this category, we downloaded the legitimate website to our laptop computer using HTTrack2. We then modified the code as necessary to ensure that the main page would load properly when served by our web server, and that all links on the main page led to the legitimate website. Therefore, participants paying attention to the URL bar would notice that clicking on any link from the main page causes the domain name of the website to change. In some cases, even the SSL indicator would change (e.g., the website would change from having no SSL to having SSL, or would change from having a domain-validation certificate to an EV certificate). Some websites in this category had mistyped domains such as cartelon.ca (instead of carleton.ca), whereas others had completely erroneous domains such as scotiabank.secure-encrypt05.com.
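The mistyped-domain technique can be illustrated with a small edit-distance check. This is a sketch of our own for illustration; the study did not build or evaluate such a detector:

```python
# Sketch: flagging look-alike ("typejacked") domains by Levenshtein
# distance. Illustrative only; not part of the study's tooling.

def edit_distance(a, b):
    """Levenshtein distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_typejacked(candidate, legitimate, max_dist=2):
    """A near-miss of a known domain is suspicious; an exact match is not."""
    return candidate != legitimate and edit_distance(candidate, legitimate) <= max_dist

print(looks_typejacked("cartelon.ca", "carleton.ca"))  # True: two letters swapped
print(looks_typejacked("carleton.ca", "carleton.ca"))  # False: exact match
```

A near-miss check like this catches cartelon.ca, but not "completely erroneous" domains such as scotiabank.secure-encrypt05.com, which require comparing the registered domain against a known-good list.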

For some websites, we created a page which displays the legitimate website in an iframe that covers the entire width of the browser window. We then overlaid the website with iframes on top of the login button(s), to redirect the user to our spoofed login page. Since the legitimate page is displayed in an iframe, it is guaranteed that the user will never see an outdated page. Secondly, the URL bar does not change as the user clicks on different links. However, if they visit a page which does not normally contain a login button, an observant participant should notice that our iframe with the login button still appears when it should not. Interestingly, we noticed that some websites (e.g., Gmail) made use of the X-Frame-Options header, forbidding the web browser from displaying them inside of a frame, whereas the majority of Canadian banks did not take advantage of this feature.

2 An open source web crawler: http://www.httrack.com/
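The X-Frame-Options behaviour noted above can be sketched as the decision a browser makes before rendering a page in a frame (simplified; real browsers match header names case-insensitively and also consult the CSP frame-ancestors directive):

```python
# Sketch: how a browser decides whether a page may be rendered inside
# an iframe, based on its X-Frame-Options response header (RFC 7034).

def frameable_by_other_origin(headers):
    """True if the response permits framing by a different origin."""
    value = headers.get("X-Frame-Options", "").strip().upper()
    # DENY blocks all framing; SAMEORIGIN blocks foreign origins.
    # Either value defeats the iframe-overlay attack described above;
    # an absent header permits it.
    return value not in ("DENY", "SAMEORIGIN")

print(frameable_by_other_origin({"X-Frame-Options": "DENY"}))  # False
print(frameable_by_other_origin({}))                           # True
```

A site that sends the header (as Gmail did) cannot be embedded by a phishing page on another origin; the bank sites that omitted it could be framed freely.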

Lastly, we set the favicon to the green SSL lock on the Paypal website to see if that would deceive participants into thinking that they were on a secure website.

Type 2) Spoof websites with IP address as URL: Another classic phishing technique is to use an IP address instead of a domain. We suspected that users would more easily notice this type of attack, and therefore chose to use this technique for only one website (Paypal).

A technique which was previously used in conjunction with this category (and also the previous category) of spoofs was to embed a long user name and password before the URL, in the form http://username:password@domain. Phishing URLs would take advantage of this format by filling the username segment with a string long enough to displace the actual domain or IP address, beginning with what appears to be a legitimate URL. However, modern web browsers do not display the user name and password in the URL bar, rendering this type of attack obsolete.
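A URL parser makes the deception explicit: everything before the "@" is userinfo, not the host. A quick sketch using Python's standard library (the decoy URL below is a made-up example in the style described, with a documentation IP):

```python
from urllib.parse import urlsplit

# A classic decoy: what looks like a PayPal address is really a
# username:password pair in front of the true host (hypothetical IP).
url = "http://www.paypal.com:login@203.0.113.5/signin"
parts = urlsplit(url)

print(parts.hostname)  # 203.0.113.5 -- the server actually contacted
print(parts.username)  # www.paypal.com -- mere decoration
print(parts.password)  # login
```

The attack relied on the eye reading left to right and stopping at the familiar-looking "domain" before the colon.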

Type 3) Fake chrome: There have been phishing attacks which attempt to spoof the browser chrome3. In the past, when browsers supported toolbarless popups, it was possible to reproduce a remarkably realistic spoof of the browser chrome, complete with the URL bar and navigation buttons. However, modern web browsers display at least the URL bar in all popups, which makes it more difficult for phishers to spoof the browser chrome in a non-obvious way. For this category, we opted for a relatively primitive attack which we hoped most participants would catch. We constructed a page with two horizontal frames: the top frame displayed an image of a browser chrome with the Amazon Canada URL, complete with the green SSL lock, and the bottom frame displayed our fake Amazon login page. The real chrome showed a fake URL (http://secure-signin amazon.ca) with no SSL lock.

Type 4) Popups asking for credentials: For this category, we simulated rogue popups which appear over the legitimate website and ask for the user's credentials. As mentioned above, since modern web browsers always include a URL bar in popup windows, these attacks may be less effective than they were previously. We designed a popup window for the Carleton University website which prompts the student for their user name and password to log in.

3 The borders of the browser, including the menus, toolbars, and buttons

Figure 2: Comparison between browser chromes (Top to bottom: Mozilla Firefox, Google Chrome, Internet Explorer 10 with Windows XP theme, Internet Explorer 10 with Windows 7 theme).

Type 5) Overlaid popup windows: We also tried another type of popup attack, which overlays a rogue website over the entire contents of the legitimate website, while leaving the chrome and URL bar visible. We again predicted that the URL bar on the popup window would make it easier for participants to detect this type of attack. We used Facebook for this attack, since we wanted to see if users would detect this relatively large inconsistency on a website that they use often.

Type 6) Fraudulent based on context: We included one website which did not attempt to masquerade as any other organization, but instead was fraudulent based on context. We constructed a page which claimed to check if a user's credit card had been stolen.4 We included a "Verified by Visa" logo in order to make the website appear more credible. We included this website with the expectation that all participants would recognize it as a phishing attempt.

Type 7) Legitimate websites: While Dhamija et al. [13] tested legitimate websites which looked suspicious, such as a legitimate website hosted by a third-party service or mobile websites with no graphics, we omitted such tests for two reasons: (1) due to the difficulty of finding such a legitimate website which would be of any relevance to our participant base, and (2) due to our belief that people should in fact be suspicious when they see such cues, particularly in cases where the website is hosted by a third-party service. Instead, we used popular websites which should be obviously legitimate based on cues from both the browser chrome and content. One variable which we did include in the legitimate category was to test both the http and https versions of LinkedIn, to see if participants would distinguish between the two in the controlled setting of our lab study.

4 Based on http://www.ismycreditcardstolen.com/

Given the ease with which an SSL certificate can be obtained for a fraudulent website, we believe that it is possible for the lock icon to give a false sense of security to users. For this reason, we generated SSL certificates for a number of our phishing websites. We suspected that if none of the phishing websites had SSL locks, some participants might have found that to be enough of a reason to declare a website a phishing attempt, without bothering to read the URL bar. Therefore, by generating an SSL certificate for domains such as cartelon.ca, we aimed to eliminate all possible phishing indicators other than the mistyped domain name.

3.5. Data collection and analysis protocols

We collected both qualitative and quantitative data using several methods. We recorded users' self-reported decision about whether a website was fraudulent or legitimate, and their comments on why they reached this decision. Participants also rated their certainty level for each decision on a scale of 1 (not at all certain of their choice) to 5 (completely certain of their choice). We recorded eye tracking data as they interacted with each website, and the time it took for participants to judge the legitimacy of each website. We held a short post-session semi-structured interview to understand participants' knowledge and experiences with phishing and their knowledge of browser security cues. During the interview, we prompted participants about whether they knew the meaning of specific indicators (e.g., SSL lock, https) and why they had made certain decisions during the test.

3.5.1. Interview protocol

The interview revolved around the following themes:

• Knowledge and experience with phishing: e.g., whether or not they have heard of phishing before, if they understand what it means, if they recall ever having encountered phishing attempts, and the strategies that they employ to identify phishing attempts.

• Knowledge of browser security cues: e.g., checking the address bar for the SSL lock icon, correct URL, and https.

• Knowledge of certificates and SSL and their indicators: e.g., whether they understand the purpose and meaning of certificate warnings presented by the browser and the glowing green bar shown for EV certificates.

We recorded each session with a digital audio recorder and transcribed the recordings for our analysis. We believe that conducting an interview-style session, where we asked users to verbally explain their choices and answer our questions, allowed us to collect more valuable data than a written or electronic questionnaire, where it would take more effort for participants to give detailed answers.

3.5.2. Eye tracking analysis

Participants' interactions with the computer were recorded using the eye tracker. Multiple steps were taken to prepare the eye tracking data for analysis. For each participant, we divided the recording of the entire eye tracking session into separate recordings per website. This eliminated the eye tracking data recorded during the use of the main experiment interface (Figure 1) and also allowed us to filter out actions that were irrelevant to our analysis.

To associate the raw eye tracker data with specific areas on the screen, we defined Areas of Interest (AOIs) for each website using the analysis software bundled with the eye tracker, as shown in Figure 3. The software generated data specific to each AOI, such as the timestamp and total dwell time for that area. AOIs were grouped into two main categories: browser chrome and webpage content. The browser chrome category included areas such as the lock icon within the chrome (Figure 3, Label A), the https/http section of the URL address bar (Figure 3, Label B), the address bar (Figure 3, Label C), and the bottom section of the chrome (Figure 3, Label E), where information such as hyperlink targets is displayed. The content category included only one area of interest: the content of the pages displayed within the chrome (Figure 3, Label D). Since some AOIs overlapped, for example the lock icon AOI and the address bar AOI, care was taken when preparing the data to subtract these values accordingly.
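The overlap correction can be sketched as follows. The dwell times below are invented for illustration; the AOI names follow Figure 3:

```python
# Sketch: nested AOIs mean the address-bar dwell time includes time
# spent on the lock and https AOIs inside it, so we subtract them out.
# All dwell times (seconds) are invented for illustration.
dwell = {"lock": 1.0, "https": 1.0, "address_bar": 12.0, "content": 74.0}

address_bar_only = dwell["address_bar"] - dwell["lock"] - dwell["https"]
chrome_total = dwell["lock"] + dwell["https"] + address_bar_only

print(address_bar_only)  # 10.0
print(chrome_total)      # 12.0
```

Without this subtraction, gaze time on the lock icon would be double-counted toward the address-bar total.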

The eye tracking data was imported into a MySQL database, and SQL procedures were written to format the data into a more usable form. This intermediate data was further analyzed and combined with data from the interviews, such as participants' certainty levels and decisions, to produce the final results.

4. Results

We report on participants' success at identifying fraudulent websites, the time they spent looking at the different aspects of the screen, and their reasoning for making decisions. Where appropriate, we conducted statistical analysis to see whether demographic characteristics affected results. We used independent-samples t-tests and Pearson's product-moment correlation coefficient (i.e., Pearson's r) as appropriate to assess whether certain participant characteristics affected their performance scores. In all cases, p < 0.05 was considered statistically significant.
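Pearson's r is computed from pairs of (characteristic, score) values; a stdlib-only sketch with invented example data (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: technical-proficiency scores vs. performance scores.
proficiency = [1, 2, 2, 3, 4, 5]
performance = [12, 9, 15, 14, 18, 16]
print(round(pearson_r(proficiency, performance), 2))
```

r ranges from −1 (perfect negative correlation) through 0 (none) to +1 (perfect positive correlation); significance additionally depends on the sample size N.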

4.1. Performance Scores

We assigned each participant a score, defined as the number of websites which they correctly identified as either legitimate or phishing. The scores ranged from 9 to 22, out of a total of 24 websites (µ = 15.28, σ = 3.97). The performance results are summarized in Table 3. The table includes the success rate for each website (i.e., the percentage of participants who correctly identified whether the website was legitimate or fraudulent) and the average time it took participants to make a decision, broken down into time spent on AOIs and total decision time. It also includes the number of participants who thought each website was "legitimate" or "phishing" and the mean certainty level for each type of decision.


Figure 3: Areas of interest (AOIs) for eye tracking.

Participants took 87 seconds on average to decide whether a website was real or fake, with 48 seconds of that time spent specifically on the chrome and content AOIs. Levene's test to assess the homogeneity of variance was passed (F = 0.003, p = 0.96). T-test results showed no statistically significant difference in mean decision time between legitimate and fraudulent websites (t = −0.13, p = 0.90, α = 0.05, two-tailed), with the mean time for phishing websites (87 seconds) virtually identical to that for legitimate websites (88 seconds). The average success rate was 53% for the phishing websites and 79% for the legitimate websites, indicating that participants were relatively poor at detecting fraudulent websites even when primed to look for signs of deception. Participants were confident in their decisions, with a mean certainty level of 4.25, regardless of whether they had made the correct choice or not.
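The comparison of mean decision times corresponds to a pooled-variance independent-samples t statistic (pooling is appropriate here since Levene's test indicated equal variances). A stdlib sketch, applied to the per-website mean decision times from Table 3 as an illustration; the published statistic was presumably computed over per-participant data, so the result differs from the reported t:

```python
import math
from statistics import fmean, variance

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (fmean(a) - fmean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Per-website mean total decision times (seconds), from Table 3.
phishing_times   = [73, 89, 89, 77, 109, 82, 95, 105, 89, 90, 61, 94, 111, 47]
legitimate_times = [123, 89, 98, 96, 97, 83, 85, 75, 57, 73]

print(round(pooled_t(phishing_times, legitimate_times), 2))
```

The group means of these per-website times (86.5 s and 87.6 s) match the reported averages of roughly 87 and 88 seconds.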

We next examined whether demographic characteris-tics may have influenced results and found no signifi-cant effects.

Gender: The mean performance score was 14.0 for males (σ = 4.21) and 16.2 for females (σ = 3.67). The homogeneity of variance between the two groups was confirmed using Levene's test (F = 0.54, p = 0.47). Once more, the t-test showed no statistically significant difference between the scores of the male and female groups (t = −1.305, p = 0.21, α = 0.05, two-tailed).

Age: A Pearson's r was computed to analyze the correlation between participants' ages and their scores. No statistically significant correlation was found (N = 21, r = −0.24, p = 0.30, α = 0.05, two-tailed).

Technical expertise: A technical proficiency score out of 5 was calculated for each participant, based on the number of "yes" responses given to the pre-test proficiency questions from Table 1. No statistically significant correlation was found between participants' proficiency scores and their performance scores (N = 21, r = 0.19, p = 0.40, α = 0.05, two-tailed).

We also informally explored the performance of the four participants with engineering backgrounds to see if general technical experience impacted scores. We found no relationship. These four participants placed 5th, 10th, 16th, and 21st out of 21 participants with respect to their performance scores.

                                 Success  AOI    Total   "Legitimate"     "Phishing"
                                 rate     time   time    answers          answers
Type  Website                    (%)      (s)    (s)     Num  Certainty   Num  Certainty

Phishing:
1     CIBC                       33       42     73      14   4.71         7   4.29
4     Carleton University        38       53     89      13   4.62         8   4.13
1     cuLearn                    38       55     89      13   4.62         8   3.88
2     Paypal (IP)                38       42     77      13   3.85         8   4.25
1     RBC                        43       71     109     12   4.25         9   4.11
1     HSBC                       48       47     82      11   3.91        10   3.70
1     Paypal (Favicon)           48       58     95      11   4.09        10   3.90
1     Carleton Portal            48       62     105     11   4.82        10   3.10
1     Scotiabank                 52       46     89      10   4.20        11   4.36
3     Amazon                     62       49     90       8   4.50        13   4.38
5     Facebook (popup)           62       31     61       8   4.38        13   4.62
1     TD Bank                    62       62     94       8   4.25        13   4.46
1     Gov. of Canada (theft)     71       54     111      6   3.67        15   4.07
6     Credit Card Checker        95       24     47       1   1.00        20   4.70

Legitimate:
7     Research Survey            14       48     123      3   3.67        18   4.50
7     Pinterest                  67       47     89      14   4.00         7   3.86
7     Kijiji                     81       57     98      17   4.29         4   3.75
7     LinkedIn (non-SSL)         81       55     96      17   4.06         4   4.25
7     Gov. of Canada             86       52     97      18   4.50         3   2.67
7     HSBC (legit)               86       50     83      18   4.44         3   2.67
7     LinkedIn (SSL)             90       43     85      19   4.37         2   4.50
7     Netflix                    90       39     75      19   4.37         2   4.00
7     Facebook                   95       31     57      20   4.70         1   3.00
7     Twitter                    100      42     73      21   4.33         0   0.00

Table 3: Success rate, mean time spent on all AOIs, mean time to make a decision per website, number of participants who decided a website was legitimate (out of 21) and their mean certainty level (out of 5), and number of participants who decided a website was phishing (out of 21) and their mean certainty level. For phishing websites, the success rate (TP, true positives) is the percentage of correctly identified phishing websites. For legitimate websites, the success rate (TP) is the percentage of correctly identified legitimate websites. TP are identified in green. FN (false negatives) are identified in blue. FP (false positives) are identified in orange.

Familiarity with websites: Although all participants reported using online banking at least once a month, 52% of the participants failed to recognize phishing attempts on their own banking website. One participant was able to correctly identify all banking websites as phishing websites except for their own bank, whereas others were not able to recognize any of the banking websites as phishing attempts at all. We found no relationship between participants' ability to identify phishing attempts on their own banking website and their ability to identify phishing attempts on unfamiliar banking websites.

Although this is difficult to measure, we noticed that some participants were overconfident with their most familiar websites, such as their own bank or cuLearn. They were quick to judge, making statements such as "I know this website, it looks exactly like the last time I used it", and seemed to take fewer precautions in comparison to the rest of the websites.

4.2. Eye tracking Results

Table 4 shows the total percentage of time each participant spent looking at browser chrome elements versus webpage content, along with the total score of each participant. On average, participants spent 6% of the time looking at chrome AOIs, 9% of the time on other chrome elements, and 85% of the time looking at the webpage contents. A significant positive correlation was found between the time participants spent looking at the chrome and their performance scores (r = 0.55, p = 0.01, α = 0.01, one-tailed). A positive correlation was also found between time specifically spent on the chrome AOIs and the performance scores (r = 0.40, p = 0.04, α = 0.05, one-tailed). This suggests that more time spent observing the chrome, and the chrome AOIs in particular, led to an increased ability to correctly assess the legitimacy of websites.


             Time (%) spent on
Participant  Chrome   All      Website
ID           AOIs     chrome   content   Score

09           4        13       87         9
19           4         8       92         9
16           1         4       96        10
01           2         6       94        11
06           3         7       93        11
07           6        12       88        12
02           11       18       82        13
18           4        14       86        13
17           2         4       96        15
20           3         5       95        15
03           15       25       75        16
08           2         8       92        16
11           6        18       82        17
21           3        16       84        17
10           10       20       80        18
13           1         5       95        18
14           5        32       68        18
04           3        16       84        20
12           4        11       89        20
05           13       43       57        21
15           12       23       77        22

Table 4: Percentage of time spent looking at the browser's chrome AOIs, all chrome elements, and webpage content versus performance scores, ordered by score (out of 24).

Figure 4 shows the time that each participant spent looking at specific elements in the browser chrome. Each chrome AOI is listed separately, and the remainder of the chrome elements, such as the back and forward buttons, are included in the "other chrome" category. We exclude the chrome footer (Figure 3, Label E), since Google Chrome does not reserve a space for the footer and only overlays data on top of the main page content; it was technically infeasible to distinguish eye tracking data in that area from the webpage content area (Figure 3, Label D). When considering only the time spent looking at chrome elements, 2% of the time was spent looking at the lock icon, 5% was spent looking at the http/https area, 31% was spent looking at the address bar, and the remaining time was spent on irrelevant chrome elements.

4.3. Attention to URL

As shown in Figure 4, all participants spent some time looking at the URL address bar according to the eye tracking data. Of the chrome AOIs, the address bar was the most popular. However, not all users reported using the URL in their decision-making process. It is unclear from the data whether they glanced at it subconsciously, whether they discounted it when making a decision, or whether they simply forgot to mention it.

While 14 participants (67%) reported looking at the URL at least once, the degree to which they understood its contents and relied on it varied significantly. For example, one participant only became suspicious when they noticed, after visiting a number of links, that the main page had a different domain name from all the pages it linked to. Another participant cited the URL as important, but wrongly stated that it could be "manipulated by cookies", and incorrectly classified both LinkedIn websites as phishing attempts. Two participants were comfortable identifying websites as legitimate if they had "simple and clear" URLs such as netflix.com or twitter.com, but on the other hand wrongly identified the banking websites as legitimate, even though they did not have "simple" URLs. Therefore, it became clear to us that although the majority of participants acknowledged the role of a website's URL as an important cue for identifying a website's legitimacy, they had many different interpretations of its meaning and level of importance.

Our most deceptive URL proved to be cartelon.ca, in place of carleton.ca. In fact, we used the same misspelling in two tests (cuLearn and Carleton Portal). Only three participants (14%) noticed the misspelling. Despite this, cuLearn and Carleton Portal were correctly identified as phishing websites by 38% of participants; the additional participants labeled the websites as phishing for completely erroneous reasons. Two participants stated that the cuLearn URL should have been carleton.ca/culearn instead of culearn.carleton.ca, and a number of students identified the student portal as fraudulent for reasons such as outdated content (when in reality, the content was identical to the real website).

We frequently observed that since all of the links on our phishing websites redirected to legitimate websites, many participants incorrectly classified these phishing websites as legitimate after having been redirected to the legitimate website. This re-emphasizes our eye tracking data and observations that many users infrequently observe the chrome, do not notice changes in information located in the chrome such as changing URLs, or do not understand the changes even when they do observe them.


Figure 4: Time spent looking at the lock, http/https, URL bar, and other chrome elements (percentage of chrome gaze time, per participant ID).

4.4. Attention to SSL Indicators

According to the eye tracking data, participants spent much less time observing SSL indicators such as the lock and https than they spent on domains (Figure 4). Granted, both of these indicators can be observed relatively easily, and a quick decision can be made as to whether the indicator is present.

Lock: The eye tracking data shows that 90% of users at least glanced briefly at the lock icon during the session, but it is unclear how many actually consciously noted its state. Twelve (57%) of our participants referred to the SSL lock at least once when justifying the legitimacy of a website. However, we observed a wide range of interpretations of the meaning of the lock. Interestingly, in the previous study by Dhamija et al. [13], only 5 out of 22 participants knew the correct location of the SSL padlock in the browser.

We generated SSL certificates for all of the bank phishing websites (except Scotiabank, below), and therefore they all displayed a green padlock in the chrome. Since the legitimate Scotiabank home page does not support https and we included it in an iframe for our attack, this website displayed a grey padlock with a yellow yield sign, indicating that there is unsecured content on the page. The participant who paid the most attention to the SSL locks incorrectly identified all of the banking websites as legitimate, with the exception of Scotiabank, for which he cited the yellow yield sign as his reason for identifying it as a phishing website. Another participant stated that "the lock just means that there is a secure connection, but the server could still be harming you".

For the single website where we set the favicon to imitate an SSL lock, only one participant noticed it. They commented that the lock was in the wrong location and cited that as a reason why the website was a phishing attempt. None of the other participants noticed the favicon.

Https: The eye tracker recorded 95% of participants looking at the https identifier. Ten participants (48%) reported actively checking for https in the URL and stated that its presence is a positive security indicator. One participant mentioned that https in an address bar would lead them to trust a website with their personal information.

When prompted during the post-session interview, three participants reported knowing that "the s in https means secure" but said that they had completely forgotten to check for it during the experiment. One participant knew the details of the https protocol implementation and that "https is just the server securing itself. Server could still be doing something that can harm you". This participant still completely overlooked this indicator and forgot to check during the experiment. Four participants (19%) specifically reported not knowing what https means or what it does.

We found that 80% of participants who reported knowing the importance of https prepended to a URL did not truly understand what it implies. When asked, none could explain how https could potentially be used at both phishing and legitimate websites.

EV certificate highlighting: Four (19%) participants referred to the EV certificate at least once. The first participant (the same one who trusted all of the banking websites except for Scotiabank) stated that when he goes on Paypal he usually sees the company name in a green box, and he cited the absence of the box as his reason for correctly identifying the site as a phishing attempt. Two participants mentioned the EV certificate as one of their reasons for trusting the legitimate Twitter website. However, in our debriefing sessions, none of the participants knew the difference between an EV certificate and a regular certificate until it was explained to them.

Certificate warning: Fifteen (71%) participants (incorrectly) did not proceed when confronted with a certificate warning when viewing the Carleton Research Survey link. When questioned, one participant stated that if he was browsing a trusted website such as Carleton's, and he arrived at an SSL certificate warning page through an internal link, he would most likely proceed. On the other hand, if he arrived at the page through an external link, such as through a search engine or an e-mail from a friend, he would turn back. Another participant stated that he would contact the person who sent the link. Although not completely secure, these are likely reasonable strategies, given that most of the time users will see such warnings for a legitimate page. The remaining participants stated that if they "knew where they were going" they would proceed, or otherwise turn back. These responses are potentially more problematic because it is unclear how users would assess their destination.

4.5. Strategies Employed by Users

We discuss qualitative data representing the general strategies that we observed our participants using to identify phishing attempts. We analyzed our observation notes and the transcripts of the sessions where participants explained their decision-making process. We coded the emerging strategies and grouped them into broader categories.

Many participants were inconsistent in the way they applied their strategies. For example, some participants commented on the simplicity of a URL as their reason for trusting one website, but did not raise any objections for other websites with clear alterations to the URL.

The strategies are presented in order of popularity, with Strategy A being by far the most common strategy. However, given that participants changed strategies throughout their sessions and may not have reported them each time, it is infeasible to accurately classify exactly how many participants used each strategy. Furthermore, participants sometimes combined two distinct strategies during their assessments.

Strategy A) Website content: Some participants judged the websites purely based on content, despite glancing at the chrome according to the eye tracking data. For example, they cited the layout and professional appearance of the website to support their decision. One participant repeatedly stated "this is what I would expect a banking website to look like". Two participants also paid attention to links to Facebook and Twitter, with one participant saying that since the website "has a lot of likes on Facebook, it means that a lot of people have been here so it should be trustworthy". These participants explored the website and casually clicked on some of the links to gain a better idea of the website. Participants who solely relied on this strategy were more susceptible to falling for phishing websites that closely matched the look of their legitimate counterparts. Participants usually reported that the phishing website looked "very familiar" or "just like the one I've seen this morning", and that they saw no alarming difference. Clearly, this approach is relatively naive and leaves users susceptible to attack. Unfortunately, it was the most popular strategy.

Strategy B) Brute-forcing website functionality: Some participants tested a number of features on each website. They were actively trying to find glitches and hints that the website was broken. For example, they tried functionality such as changing the language, changing locations, trying the mobile version of the website, testing the forms and login fields with false information, clicking on many different links on the main page, and searching for contact information. These participants performed better than we expected. While we paid great attention to detail when developing the phishing websites, there were small details on a number of the test websites that we missed (e.g., the ability to change languages) and these raised suspicion. We suspect that the majority of phishers would not spend a greater level of effort than we did in designing their phishing websites, which would likely make this a relatively effective defense strategy in real life if users adhered to this practice regularly; however, this seems unlikely given the amount of effort involved.

Strategy C) Paying attention to the URL: While many participants reported paying attention to the URL, all of them used this strategy in combination with other strategies. Participants either tried to recall familiar URLs or used heuristics such as assessing the simplicity of the URL. Those who questioned the URL often mentioned that the "URL looks weird" or that "it contains random stuff". One participant correctly noticed that the domain changed every time she was redirected from a phishing website to a legitimate website by clicking on links, and deemed this behavior malicious. However, most participants reported that they did not recognize the difference between a domain and a sub-domain, and were unable to identify the important parts of a URL.

Strategy D) Using a search engine: Three (14%) participants used Google to search for the website being tested, and compared both the URL and page content with the search results. Each of these participants also applied Strategy B, actively trying to find flaws in the websites. These participants performed the best, with an average score of 22.3. In fact, two participants could have potentially scored even higher, but they did not begin using this strategy until the third or fourth website. In practice, users are unlikely to apply this strategy to every website they visit. They may verify a website if they become suspicious, but this relies on users first recognizing questionable websites.

Strategy E) Exclusive reliance on SSL indicators: Two participants reported that they based their decisions exclusively on SSL indicators. Eye tracking data shows considerable time spent on the webpage content as well, so it is unclear whether this subconsciously also influenced decisions. These participants missed cues such as the misspelled URL (cartelon.ca) or the overlaid popups because they assumed that a signed certificate meant that the website was safe. This strategy highlights misunderstandings that occur when interpreting security indicators. It also highlights how phishing attacks can trick users by intentionally using the exact visual cues that are meant to indicate security.

4.6. Challenges and Limitations

We faced a number of challenges during our study. Since security in general is a secondary task, it is difficult to create an ecologically valid scenario while collecting eye tracking data and not putting users' personal data at risk. We chose a controlled lab experiment where users were primed to complete the security task, clearly sacrificing ecological validity but allowing us to collect more precise data. Our results provide an upper bound on users' ability to detect phishing websites, which is somewhat concerning given that users correctly identified only 53% of phishing websites on average. Moreover, participants frequently identified phishing websites correctly but for completely wrong reasons. For example, they would say that the content on the page was outdated or poorly designed, when in fact we had constructed an exact replica of the website in question. We also note that users' technical proficiency scores provide only a rough approximation of users' technical experience and that a more thorough assessment may lead to different outcomes.

Further investigation is needed to determine how to effectively use eye tracking data in phishing studies. First, given that some of the AOIs are relatively small on the screen, the eye tracking margin of error may have impacted results. Furthermore, dwell times on an area of the screen do not necessarily reflect users' level of understanding of security cues. Conversely, a short fixation on an area does not necessarily indicate that a user has missed that element. While we analyzed how long participants fixated on various areas of interest and supplemented this data with our observations and participant comments, there may also be different ways of interpreting the data.

The use of Windows XP on our eye tracking computer caused some minor technical issues. One participant noticed a visual glitch in the title bar of the browser, which was caused by Windows XP. Another participant also became suspicious when she noticed that the alert boxes produced by the browser looked slightly different than on her computer, not knowing that the difference was due to the operating system. A final issue related to the way in which Windows handles multiple monitors. The desktop was equipped with two displays, with the main display being used by the experimenters and the secondary display (containing the eye tracker) being used by the participant. Since Windows does not display a taskbar on the secondary display, this was one less visual cue for participants when they viewed phishing websites with a popup window. Windows 7, however, groups all windows into a single taskbar button, so it is not clear whether the taskbar would have helped in this regard.

5. Discussion

In this section, we present our thoughts with respect to the different types of attacks tested, discuss how our study compares to previous work, reflect on the usefulness of including eye tracking data in this type of study, and suggest a few recommendations for future designs.

5.1. Performance compared to expectations

For each type of phishing website tested, we now discuss whether the results of our study matched our initial expectations. First, we found (see Table 3) that users quite consistently spent nearly a minute specifically examining the websites (chrome and content AOIs) for clues about their legitimacy and spent additional time on other aspects before coming to a decision. Based on this and on our observations, participants were really paying attention and trying to make correct decisions. Participants were capable of detecting certain types of phishing attacks, but the majority of attacks went undetected even in our study, where users were specifically attempting to detect phishing websites.

Type 1) Spoof website with incorrect URL: Although users spent more time looking at the URL than any other chrome security indicator, it did not necessarily translate into sound phishing determinations. Participants did poorly in identifying fraudulent banking websites (even their own bank) and did not recognize erroneous Carleton URLs. Fewer than half noticed the fake Paypal website with the lock favicon. Participants were most successful at recognizing that the Government of Canada website asking them to enroll in identity theft protection was malicious, although this may be because the website asked for a social insurance number rather than because participants recognized a suspicious URL.

Type 2) Spoof website with IP address in URL: We had expected that most participants would recognize this well-established phishing technique. However, only 38% of participants noticed that the Paypal website had an IP address in place of a domain name in its URL and deemed this suspicious. This is particularly concerning because this type of attack is fairly apparent even with little knowledge of how to parse URLs. This is also a common tip given to users on how to detect phishing attacks; it seems that the message has not reached this audience or is an unreasonable expectation of users.
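This cue is mechanical enough to be checked automatically rather than left to users. As a hedged sketch (the example URLs are invented), flagging a raw-IP host takes only a few lines of Python:

```python
import ipaddress
from urllib.parse import urlparse

def host_is_ip_literal(url: str) -> bool:
    """Flag URLs whose host is a raw IP address, a classic phishing cue."""
    host = urlparse(url).hostname or ""
    try:
        # ip_address() accepts both IPv4 and IPv6 literals.
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

# Invented examples: a raw-IP phishing-style URL vs. a normal hostname.
assert host_is_ip_literal("http://192.0.2.10/paypal/login")
assert not host_is_ip_literal("https://www.paypal.com/login")
```

Automated filters already apply heuristics of this kind; the finding above suggests that users cannot be relied upon to do the same by eye.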

Type 3) Fake chrome: 62% of participants recognized that the Amazon website was a phishing attempt. We had expected a higher success rate given the obvious double chrome and URL. It appears that many users still do not have clear expectations of how the chrome should look and do not find a double URL bar to be suspicious. This may be due to the popularity of browser toolbars; users are so accustomed to a cluttered chrome that they have little expectation of what should be there.

Type 4) Popups asking for credentials: Participants were even more likely to be fooled by the fake popup login window appearing on the Carleton website (38% success rate), despite the fact that the window had a clearly erroneous URL. Participants spent nearly a minute examining the page content and chrome but still incorrectly deemed the website trustworthy.

Type 5) Overlaid popup windows: Once again, participants saw a double URL bar as the popup phishing Facebook page covered all of the real page content. The legitimate URL was displayed in the main chrome, but a fake URL was visible in the popup window. We found that 62% of participants recognized this attack. Although this was one of the higher detection rates for phishing websites, it is still lower than expected given that 86% of our participants were regular Facebook users.

Type 6) Fraudulent based on context: 95% of participants successfully identified the Credit Card Checker as a phishing website. We suspect that most users have heard about phishing through the media and that the typical message is to be careful of websites asking for credit card numbers or personal information. It is encouraging to note that all but one user recognized this scam, but it is also worrying that none of the other attacks were similarly detected.

Type 7) Legitimate websites: Eight out of 10 legitimate websites had success rates of over 80%. This emphasizes that users may be better at recognizing when things appear "normal" than at identifying when things are suspicious. Especially noteworthy is the Research Survey website. Although the Research Survey was a legitimate website, it had several indicators that made it questionable: it was hosted at a relatively obscure and unfamiliar sub-domain of Carleton.ca, searching with Google did not return any related search results, and it warned users about a self-signed certificate. This website serves to emphasize that these decisions are sometimes nearly impossible for users to make; phishing websites give no warnings and appear familiar, while legitimate websites may prompt warnings and be relatively unpopular. In this particular case, participants were wise to be cautious.

5.2. Comparison with previous work

We set out to investigate whether improvements to browser security indicators and phishing awareness over the last decade have resulted in decreased susceptibility to phishing for everyday users. Nearly a decade ago, Dhamija et al. [13] demonstrated that users were easily fooled and that security indicators needed improvement. Their study relied on observation and user comments to determine how users made decisions.

In our study, we additionally collected eye tracking data to help understand how users determine the legitimacy of websites. Overall, our results show that users still pay very little attention to security indicators, give most weight to website content when making decisions, and are frequently deceived by phishing websites. Our eye tracking data provides empirical evidence that participants spend only 6% of their time glancing at security indicators, with most of that time spent examining the URL. Interestingly, our participants were also very confident in their decisions, with similar certainty scores to Dhamija et al.'s study.

We noted some progress since the earlier study, in particular with respect to knowledge about phishing. Dhamija et al. found that some participants were unaware that spoofing websites was even possible. In contrast, our participants were all familiar with the idea of phishing and knew that this type of attack was possible. Browsers have made considerable changes to their security indicators to help users detect phishing, and these appear to have had some effect on users' ability to detect phishing. Our participants performed modestly better than in the earlier study, with an average success rate of 64% compared to 58% in Dhamija et al.'s study. A direct comparison is difficult since the test websites differed; however, we believe that our websites were similarly representative of phishing attacks. It is also unclear how much of this improvement is due to improved browser interfaces as opposed to increased awareness of the threat.

5.3. Usefulness of eye tracking data

We found the eye tracking data to be a valuable source of information for this particular context. Given the already artificial scenario of asking users to detect phishing websites, relying on users' self-reflection of their decision-making process seemed problematic since they were aware that we were looking for specific behaviours and may have modified their answers accordingly. The eye tracker provided a more direct source of data. Participants may still have paid more attention to security indicators than they would normally, but they actually had to look at the indicators rather than simply telling us that they had. The eye tracker also captured passing glances that may have helped in the decision-making process even if participants were unaware.

Our eye tracking results show that users paid little attention to security indicators. It is worth noting that even when they see the indicators, there is no guarantee that this translates into useful information for the user. Participants may not have known how to interpret what they were seeing or may have misunderstood its meaning. Secondly, some of our AOIs were small and the accuracy of the eye tracker may have impacted the results. However, overall we found it to be a useful tool for assessing phishing susceptibility when combined with our other data collection methods. The different sources of data largely aligned with each other, giving us some confidence in the reliability of our results.

5.4. Recommendations

The majority of our participants were pleased to have completed our study, and expressed their gratitude for having learned how to better protect themselves online. They were very willing to provide us with feedback both during and immediately after the session. Based on participant feedback, our observations, and the eye tracking data, we devised some suggestions which we believe have the potential to improve the usability of web browsers for identifying phishing websites. These suggestions should be investigated in future work, since further user studies would be required to assess their effectiveness.

User-friendly URLs: Although the majority of participants at least occasionally attempted to use the URL, they did not have enough knowledge of the structure of URLs to make an informed decision. In fact, only one participant understood how a sub-domain worked. The remainder were surprised during the debriefing session when we informed them that anybody could buy an arbitrary domain such as evil.com and then set up a subdomain such as paypal.evil.com with little effort. Therefore, we believe that the URL bar should be made more user-friendly. Current browsers (and the version of Chrome used in this study) differentiate the top-level domain by colouring it black and the remainder of the URL in grey. None of our participants noticed this visual cue. We suggest that the domain name needs to be significantly more visually distinct in order to be effective as a security cue. Alternatively, browsers could use "breadcrumbs", as in the file managers of many operating systems (e.g., Windows 7 and Ubuntu). In this way, the actual domain name of the website could be displayed more prominently, and users who wish to view the entire URL could simply click on the URL bar.
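To make the sub-domain point concrete: whoever registers evil.com controls every label to its left, so paypal.evil.com belongs to evil.com, not to Paypal. The sketch below naively treats the last two labels of the hostname as the owning domain; this is a deliberate simplification (real browsers consult the Public Suffix List, since suffixes such as co.uk span two labels), and the URLs are invented:

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naively take the last two labels of the hostname as the owning domain.

    Illustration only: a correct implementation would use the Public
    Suffix List to handle multi-label suffixes like co.uk.
    """
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# The 'paypal' label is chosen freely by whoever registered evil.com:
assert registrable_domain("https://paypal.evil.com/login") == "evil.com"
assert registrable_domain("https://www.paypal.com/login") == "paypal.com"
```

Highlighting only this owning-domain portion is essentially what the domain-colouring cue attempts; the finding above suggests it needs to be far more prominent.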

Legitimate websites that use different domains for different sections of their website are also problematic. In these cases, users become accustomed to ignoring the URL since it provides no useful information to them. If inspection of URLs is to be advocated as a way of detecting phishing, then URLs must be reliable and consistent to avoid training users to ignore this cue.

Visual Aids for Browsing: Many participants made decisions on a phishing website's legitimacy while they were actually examining the legitimate website itself. This was because all of the links on our phishing pages pointed to the original website. Therefore, we believe it would be beneficial to develop an indicator which informs the user when they move from one domain to another. We concede that it is difficult to come up with an indicator that is both effective and non-obtrusive. Many websites warn users when they click on a link that leads to an external website. This addresses the reverse issue, where fraudulent links may be posted on a legitimate website. Nevertheless, if this idea were implemented on every website, it would be much more cumbersome to browse the web. We believe that a better implementation of this suggestion could instead make use of a visualization (possibly in the browser's chrome) where it is easy for users to notice changes even if they are not paying close attention. This approach would also avoid situations where phishing websites intentionally leave out warnings about navigating to a different domain.

Moving Authentication to the Chrome: Although we have seen that users pay some attention to the browser chrome in a controlled lab study environment, they may not always be as attentive during regular Internet use. One way of reinforcing the trustworthiness and importance of the chrome would be to move some important tasks to the chrome, such as user authentication. Moving authentication to the chrome may make it easier to inform the user about the legitimacy of the current website. This is, however, a difficult task which faces many obstacles due to the extensive collaboration that would be required between browser developers and web developers. Mozilla was pursuing a browser-based authentication approach with Mozilla Persona [35], but this no longer appears to be under active development.

Automate as much as possible: Our results confirm that identifying phishing at the user interface is an extremely difficult task. A decade of improvements in browser security indicators and in user education campaigns has yielded only a 6% increase in detection rates by users in the best case. Each of the cues and heuristics that users may use to identify fraudulent websites can also be used maliciously to deceive users in phishing attacks. If we cannot provide users with tools and cues that are sufficiently reliable, then users should not be entrusted with making these decisions.

6. Conclusion

We have conducted an empirical study to gather evidence on what strategies users employed to determine the legitimacy of websites. We have relied both on self-reported qualitative data and on eye tracking data to determine whether improved browser security cues have led to improved ability to detect phishing attacks. Contrary to Vishwanath et al.'s suggestion [44], users were unable to reliably detect phishing even when alert and vigilant. We found that even in our controlled lab environment, participants had an average success rate of 53% for identifying phishing websites, essentially the same as if users took a random guess. When making decisions about the legitimacy of websites, participants spent only 6% of the time looking at security indicators in the browser chrome and 85% of the time looking at the contents of the webpage.

We found a correlation between participants' performance scores and the time which they spent looking at chrome elements, indicating mildly positive results for security indicators. Other variables, such as users' general technical proficiency, did not correlate with improved performance scores. The most effective strategy for detecting phishing websites combined searching for the website using a search engine and testing for broken website functionality. However, the vast majority of users still relied primarily on the superficial appearance of the website content. Although modest improvements were observed compared to Dhamija et al.'s earlier work [13], we find that existing browser cues remain insufficient to help users protect themselves against phishing. We identified areas for potential user interface improvements, and made recommendations for future designs.

Acknowledgements

Sonia Chiasson holds a Canada Research Chair in Human Oriented Computer Security. She acknowledges the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding the Chair and her Discovery Grant. Furkan Alaca acknowledges a graduate scholarship from NSERC.

References

[1] Abbasi, A., Zahedi, F., Chen, Y., 2012. Impact of anti-phishingtool performance on attack success rates, in: International Con-ference on Intelligence and Security Informatics (ISI), IEEE. pp.12–17.

[2] Anti-Phishing Working Group, 2014. Phishing activity trendsreport - 4th quarter 2013. http://apwg.org/resources/

apwg-reports.[3] Anti-Phishing Working Group, August 2014a. http://apwg.

org/.[4] Anti-Phishing Working Group, August 2014b. http://

phish-education.apwg.org/r/about.html.[5] Arachchilage, N.A., Love, S., Scott, M.J., 2012. Designing

a Mobile Game to Teach Conceptual Knowledge of Avoiding’Phishing Attacks’. International Journal for e-Learning Secu-rity 2, 127–132.

[6] Arachchilage, N.A.G., Love, S., 2013. A game design frame-work for avoiding phishing attacks. Computers in Human Be-havior 29, 706–714.

[7] Bank of America, August 2014. Sitekey.https://www.bankofamerica.com/privacy/

online-mobile-banking-privacy/sitekey.go.[8] Bergholz, A., De Beer, J., Glahn, S., Moens, M.F., Paaß, G.,

Strobel, S., 2010. New filtering approaches for phishing email.Journal of Computer Security 18, 7–35.

[9] Burns, M.B., Durcikova, A., Jenkins, J.L., 2013. What kind ofinterventions can help users from falling for phishing attempts:A research proposal for examining stage-appropriate interven-tions, in: 46th Hawaii International Conference on System Sci-ences, IEEE.

[10] Chou, N., Ledesma, R., Teraguchi, Y., Mitchell, J.C., 2004.Client-side defense against web-based identity theft, in: Pro-ceedings of Network and Distributed System Security (NDSS)Symposium, Internet Society.

[11] Cormack, G.V., 2008. Email spam filtering: A systematic re-view. Foundations and Trends in Information Retrieval 1, 335–455.

[12] Dhamija, R., Tygar, J.D., 2005. The battle against phishing:Dynamic security skins, in: Proceedings of the Symposium onUsable Privacy and Security(SOUPS), ACM. pp. 77–88.

19

Page 20: Why phishing still works: user strategies for combating phishing … · 2020. 9. 23. · Why phishing still works: user strategies for combating phishing attacks Mohamed Alsharnouby,

[13] Dhamija, R., Tygar, J.D., Hearst, M., 2006. Why phishingworks, in: Proceedings of the SIGCHI Conference on HumanFactors in Computing Systems (CHI), ACM Press.

[14] Downs, J.S., Holbrook, M., Cranor, L.F., 2007. Behavioral re-sponse to phishing risk, in: Proceedings of the APWG eCrimeResearchers Summit, ACM.

[15] Egelman, S., 2009. Trust Me: Design Patterns for ConstructingTrustworthy Trust Indicators. Ph.D. thesis. Ph.D. Thesis, Uni-versity of California Berkeley.

[16] Fette, I., Sadeh, N., Tomasic, A., 2007. Learning to detect phish-ing emails, in: Proceedings of the International Conference onWorld Wide Web (WWW), ACM. pp. 649–656.

[17] Fu, A.Y., Wenyin, L., Deng, X., 2006. Detecting phishing webpages with visual similarity assessment based on earth mover’sdistance (emd). Transactions on Dependable and Secure Com-puting (TDSC) 3, 301–311.

[18] Garfinkel, S.L., Margrave, D., Schiller, J.I., Nordlander, E.,Miller, R.C., 2005. How to make secure email easier to use,in: Proceedings of the SIGCHI Conference on Human Factorsin Computing Systems (CHI), ACM. pp. 701–710.

[19] Government of Canada, August 2014. https://www.

getcybersafe.gc.ca/index-eng.aspx.[20] Herzberg, A., 2009. Why Johnny can’t surf (safely)? attacks

and defenses for web users. Computers & Security 28, 63 – 71.[21] Hong, J., 2012. The state of phishing attacks. Communications

of the ACM 55, 74–81.[22] Jagatic, T.N., Johnson, N.A., Jakobsson, M., Menczer, F., 2007.

Social phishing. Communications of the ACM 50, 94–100.[23] Jakobsson, M., Tsow, A., Shah, A., Blevis, E., Lim, Y.K., 2007.

What instills trust? a qualitative study of phishing, in: Proceed-ings of the International Workshop on Usable Security (USEC),Springer-Verlag. pp. 356–361.

[24] Kirda, E., Kruegel, C., 2006. Protecting users against phishingattacks. The Computer Journal 49, 554–561.

[25] Kirlappos, I., Sasse, M.A., 2012. Security education againstphishing: A modest proposal for a major rethink. IEEE Securityand Privacy 10, 24–32.

[26] Kumaraguru, P., Cranshaw, J., Acquisti, A., Cranor, L., Hong,J., Blair, M.A., Pham, T., 2009. School of phish: A real-worldevaluation of anti-phishing training, in: Proceedings of the Sym-posium on Usable Privacy and Security (SOUPS), ACM.

[27] Kumaraguru, P., Rhee, Y., Sheng, S., Hasan, S., Acquisti, A.,Cranor, L.F., Hong, J., 2007. Getting users to pay attention toanti-phishing education: Evaluation of retention and transfer,in: Proceedings of the Anti-phishing Working Group’s AnnualeCrime Researchers Summit (eCrime), ACM. pp. 70–81.

[28] Kumaraguru, P., Sheng, S., Acquisti, A., Cranor, L.F., Hong, J.,2010. Teaching johnny not to fall for phish. ACM Transactionson Internet Technology 10, 7:1–7:31.

[29] Lee, L.H., Lee, K.C., Juan, Y.C., Chen, H.H., Tseng, Y.H., 2014.Users’ behavioral prediction for phishing detection, in: Pro-ceedings of the Companion Publication of the 23rd InternationalConference on World Wide Web, International World Wide WebConferences Steering Committee. pp. 337–338.

[30] Li, L., Berki, E., Helenius, M., Ovaska, S., 2014. Towards acontingency approach with whitelist-and blacklist-based anti-phishing applications: what do usability tests indicate? Be-haviour & Information Technology , 1–12.

[31] Li, L., Helenius, M., 2007. Usability evaluation of anti-phishingtoolbars. Journal in Computer Virology 3, 163–184.

[32] Lin, E., Greenberg, S., Trotter, E., Ma, D., Aycock, J., 2011.Does domain highlighting help people identify phishing sites?,in: Proceedings of the SIGCHI Conference on Human Factorsin Computing Systems(CHI), ACM. pp. 2075–2084.

[33] Mayhorn, C.B., Nyeste, P.G., 2012. Training users to counter-

act phishing. Work: A Journal of Prevention, Assessment andRehabilitation 41, 3549–3552.

[34] Moore, T., Clayton, R., 2007. Examining the impact of websitetake-down on phishing, in: Proceedings of the Anti-phishingWorking Group’s Annual eCrime Researchers Summit, ACM.pp. 1–13.

[35] Mozilla, August 2014. https://www.mozilla.org/en-US/persona/.

[36] NSS Labs, 2013. Evolutions in Browser Security: Trends inBrowser Security Performance. Technical Report. NSS Labs.

[37] OpenDNS, August 2014. http://www.phishtank.com/.[38] Purkait, S., 2012. Phishing counter measures and their

effectiveness–literature review. Information Management &Computer Security 20, 382–420.

[39] Ross, B., Jackson, C., Miyake, N., Boneh, D., Mitchell, J.C.,2005. Stronger password authentication using browser exten-sions, in: Usenix Security Symposium, pp. 17–32.

[40] Sheng, S., Holbrook, M., Kumaraguru, P., Cranor, L.F., Downs,J., 2010. Who falls for phish?: A demographic analysis of phish-ing susceptibility and effectiveness of interventions, in: Pro-ceedings of the SIGCHI Conference on Human Factors in Com-puting Systems (CHI), ACM. pp. 373–382.

[41] Sheng, S., Magnien, B., Kumaraguru, P., Acquisti, A., Cranor, L.F., Hong, J., Nunge, E., 2007. Anti-Phishing Phil: The design and evaluation of a game that teaches people not to fall for phish, in: Proceedings of the Symposium on Usable Privacy and Security (SOUPS), ACM. pp. 88–99.

[42] Sheng, S., Wardman, B., Warner, G., Cranor, L., Hong, J., Zhang, C., 2009. An empirical analysis of phishing blacklists, in: Proceedings of the Conference on Email and Anti-Spam (CEAS).

[43] Sobey, J., Biddle, R., Oorschot, P., Patrick, A.S., 2008. Exploring user reactions to new browser cues for extended validation certificates, in: Proceedings of the 13th European Symposium on Research in Computer Security (ESORICS), Springer-Verlag, LNCS, Berlin, Heidelberg. pp. 411–427.

[44] Vishwanath, A., Herath, T., Chen, R., Wang, J., Rao, H.R., 2011. Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model. Decision Support Systems 51, 576–586.

[45] Whalen, T., Inkpen, K.M., 2005. Gathering evidence: use of visual security cues in web browsers, in: Proceedings of Graphics Interface (GI), Canadian Human-Computer Communications Society. pp. 137–144.

[46] Whittaker, C., Ryner, B., Nazif, M., 2010. Large-scale automatic classification of phishing pages, in: Network and Distributed System Security Symposium (NDSS).

[47] Whitten, A., Tygar, J.D., 1999. Why Johnny can't encrypt: A usability evaluation of PGP 5.0, in: Proceedings of the 8th USENIX Security Symposium, USENIX Association, Berkeley, CA, USA.

[48] Workman, M., 2008. Wisecrackers: A theory-grounded investigation of phishing and pretext social engineering threats to information security. Journal of the American Society for Information Science and Technology 59, 662–674.

[49] Wright, R.T., Marett, K., 2010. The influence of experiential and dispositional factors in phishing: An empirical investigation of the deceived. Journal of Management Information Systems 27, 273–303.

[50] Wu, M., Miller, R.C., Garfinkel, S.L., 2006. Do security toolbars actually prevent phishing attacks?, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), ACM. pp. 601–610.

[51] Yahoo! Inc, August 2014. Personalized sign-in seals. https://protect.login.yahoo.com/.


[52] Yee, K.P., Sitaker, K., 2006. Passpet: Convenient password management and phishing protection, in: Proceedings of the Symposium on Usable Privacy and Security (SOUPS), ACM. pp. 32–43.
