
Proceedings of the 28th International Conference on Computational Linguistics, pages 2787–2801, Barcelona, Spain (Online), December 8–13, 2020


Adapting a Language Model for Controlled Affective Text Generation

Tushar Goswamy∗ Ishika Singh∗ Ahsan Barkati∗ Ashutosh Modi
Indian Institute of Technology Kanpur (IITK)

{tgoswamy, ishikas, ahsanb}@iitk.ac.in, [email protected]

Abstract

Humans use language not just to convey information but also to express their inner feelings and mental states. In this work, we adapt state-of-the-art language generation models to generate affective (emotional) text. We posit a model capable of generating affect-driven and topic-focused sentences without losing grammatical correctness as the affect intensity increases. We propose to incorporate emotion as a prior for a probabilistic state-of-the-art text generation model such as GPT-2. The model gives a user the flexibility to control the category and intensity of emotion as well as the topic of the generated text. Previous attempts at modelling fine-grained emotions fall short on grammatical correctness at extreme intensities, but our model is resilient to this and delivers robust results at all intensities. We conduct automated evaluations and human studies to test the performance of our model, and provide a detailed comparison of the results with other models. In all evaluations, our model outperforms existing affective text generation models.

1 Introduction

Affect (emotion) in language plays a critical role in conveying mental and emotional states, along with the information intended to be conveyed. Machine Learning (ML) based text generation models focus on minimising the error in the generated text by maintaining grammatical correctness; this often results in the generation of monotonous and dull conversations. Since current ML based models are trained on large corpora without any explicit affective information, they are unable to capture the emotional aspects of conversations explicitly (Asghar et al., 2018).

There is a need for the automated processing and generation of affect-driven language. This will improve the quality of responses generated by a conversational system by making them more empathetic towards a human user (Colombo et al., 2019). The role of controlled emotion in the generated text is to ensure more meaningful conversations between an AI agent and a human. It is also aimed at establishing an emotional connection with the reader of the text. Such a model can be particularly useful for conversational therapy bots that generate appropriate emotional responses based on the user's mental state (Spring et al., 2019). Affective generation models could also be useful in the development of interactive virtual agents (Janghorbani et al., 2019). Affective advertisement generation is another area of application, especially for social advertisements aimed at inviting donations for a cause, where we need to make an emotional appeal about the benefits of donation (Moran and Bagchi, 2019). Current research on incorporating language in robot learning (Luketina et al., 2019; Bisk et al., 2020) can potentially benefit not just by enabling effective human-robot dialog, but also by optimizing other reinforcement learning based components. For instance, the user's responses can be exploited to extract implicit emotion-based reward cues (Sumers et al., 2020), and the robot can respond in a contrasting emotion to gain the user's trust. In interactive narrative generation (Ammanabrolu et al., 2020), an affect-incorporated language model will improve the user's experience by making a long-lasting impact (Mar et al., 2011).

In this paper, we propose controlling the output of a neural language generation model with affective information.

∗ The three authors contributed equally to this work.
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.


Intuitively, an emotion cannot be captured just by a discrete category; there is also an associated degree to which it is expressed or perceived. Hence, we want to generate text based on a given emotion category (e) and its intensity (β). In addition to this, we also want the generated sentences to fall under a topic (t). We can either extract e and β from the context of the conversation/text or allow the user to choose these parameters. While generating affective text, we want to ensure that grammatical correctness is not compromised even at high emotion intensities. We propose an algorithm that generates grammatically correct text by sampling valid sentences while optimizing for emotion label-intensity factors.

In particular, we propose a coarse and fine-grained affective text generation model, built on top of GPT-2 (Radford et al., 2019). Our model provides degrees of freedom in terms of the choice of the base text generation model, the emotion category (ranging over 8 basic emotions) with fine-grained control over emotion intensity for each category, and the topic of the generated text. We provide detailed results of our model and a comparison with existing models to establish the improvement brought in by our approach. We evaluate our model against the baselines on grammatical correctness and perceived emotion and intensity, both using automated methods and human annotations. We clearly see that the quality of text generated by our model, both in terms of perceived emotion and grammatical correctness, is considerably better than the existing system AffectLM (Ghosh et al., 2017). As observed in experiments (§ 5.2), in the case of AffectLM, as the emotion intensity increases, the model compensates by generating more affective words at the cost of a drop in grammatical correctness. In contrast, our model tries to generate text as aligned to the given controls as possible while adhering to grammatical constraints. To the best of our knowledge, this is the first affective text generation model that incorporates 8 emotion categories in the text generation output. The model is robust in terms of grammatical correctness at high emotion intensities, which makes it highly reliable for a number of applications. We release the model implementation and user studies at the following GitHub repository: https://github.com/ishikasingh/Affective-text-gen

2 Related Work

Recently, several advancements in language generation have been made. The Conditional Transformer Language Model for Controllable Generation (CTRL) (Keskar et al., 2019) provides a transformer language model conditioned on control codes, which allow the user to control the domain and topic of generated sentences, as well as define the intended task (like question answering and machine translation). However, the CTRL model only allows controlling the topic and does not provide the flexibility to control the emotion of the generated text. The Plug and Play Language Model (PPLM) (Dathathri et al., 2020) combines a pre-trained language model like GPT-2 with attribute classifiers that guide text generation. It enables the user to control the topic, the sentiment (positive/negative), and the strength of the influence of these attributes (via the stepsize of a gradient descent update) for the generated sentences. The PPLM model only allows the option of positive/negative sentiment in the output and does not deal with varied emotions. Moreover, the PPLM model fails to generate grammatical text when the emotion intensity is increased. In contrast, our model offers an extended list of eight basic emotions along with a provision for controlling the intensity associated with each emotion. For controlling the intensity, we use the human-annotated word list from the NRC-EIL lexicon (Mohammad, 2018).

In recent times, neural models for emotional text generation have been proposed. Affect-LM (Ghosh et al., 2017) uses an LSTM-based approach for generating expressive emotional text. It is capable of generating sentences in 4 affect categories (Positive, Anxious, Sadness and Anger), and the affect intensity can be varied on a scale of 0 to ∞. However, since its introduction, several new text generation language models have been proposed (e.g., GPT-2 (Radford et al., 2019)) which have outperformed previous RNN based language generation models. The Affect-LM model degrades in the grammatical correctness of its generated sentences as the affect intensity is increased towards the higher end of the spectrum. Moreover, Affect-LM provides only 4 affect categories and misses out on emotions like surprise, anticipation, etc. In contrast, in our work we provide 8 basic emotion categories (Joy, Trust, Fear, Surprise, Sadness, Disgust, Anger and Anticipation). We base our choice of basic emotions on the theory proposed by Plutchik (Plutchik, 1962; Plutchik, 1980; Plutchik, 1994), which argues that the eight basic emotions (the largest proposed set) form four opposing pairs: joy–sadness, anger–fear, trust–disgust, and anticipation–surprise.


[Figure 1 image: Plutchik's Wheel of Emotion (eight emotions with graded variants, e.g. annoyance–anger–rage, serenity–joy–ecstasy) feeds an intensity knob (β ∈ [0, 1]); an Affect BoW and a Topic BoW (topics: Legal, Military, Politics, Monsters, Religion, Science, Space, Technology) perturb the GPT-2 vocabulary distribution via affect and topic losses. Sample outputs for the prompt "Once upon" with Affect: Joy, Topic: Technology are shown at β = 0.0, 0.6 and 1.0.]

Figure 1: Architecture of the proposed model. Our model generates sentences when it receives a Prompt, Topic, Emotion category and emotion intensity value from the user. Emotion Wheel image taken from Wikipedia: https://en.wikipedia.org/wiki/Emotion_classification

In general, there are no clear boundaries proven between emotion categories, and emotions are perceived subjectively across individuals.

Affective text generation models have been utilized in various applications. Mahamood and Reiter (2011) present the application of affective NLG models to produce medical information summaries. These reports are communicated to the parents of babies in a Neonatal Intensive Care Unit, and need to have the appropriate affective tone so that the parents are able to deal with the emotional impact of the information. Mairesse and Walker (2007) present a system for language generation tailored on extroversion dimensions, which can be used for generating dialogues. An affect-driven dialogue system for generating emotional responses in a controlled manner has been designed by Colombo et al. (2019), aimed at making conversational systems more social. A significant number of research papers also deal with affective language models for chatbots in a commercial or therapy setting, such as the retrieval-based dialogue system by Bartl and Spanakis (2017). Chan and Lui (2018) describe a sequence-to-sequence based emotional response generator for chatbots used in customer service, personal assistance and education.

3 Affective Model

3.1 Background

Our model is based on the GPT-2 (Radford et al., 2019) text generation model and the Plug and Play Language Model (PPLM) (Dathathri et al., 2020). GPT-2 is a transformer-based language model which has shown superior performance on several NLP tasks, including text generation. The GPT-2 model generates text word by word, conditioned on the past context represented in the form of a history embedding matrix H_t. The model updates the history matrix recursively and samples the next word as:

O_{t+1}, H_{t+1} = \mathrm{LM}(s_t, H_t) \quad (1)

s_{t+1} \sim p_{t+1} = \mathrm{Softmax}(W \cdot O_{t+1}) \quad (2)

where O_{t+1} is the token embedding used to get the next word's probability distribution p_{t+1} by learning a parameter W, and s_{t+1} represents the sampled word at the (t+1)-th iteration.
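This recursion maps directly onto the incremental decoding interface of public GPT-2 implementations. The following is a minimal sketch of Equations 1 and 2, assuming the Hugging Face transformers package (not necessarily the authors' implementation); the past_key_values cache plays the role of the history H_t.

```python
# Minimal sketch of the sampling recursion in Eqs. 1-2; `past` plays
# the role of the history H_t and is updated by the model at each step.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

generated = tokenizer.encode("Once upon", return_tensors="pt")
s_t, past = generated, None

with torch.no_grad():
    for _ in range(20):
        out = model(s_t, past_key_values=past)                 # Eq. 1
        past = out.past_key_values                             # H_{t+1}
        p_next = torch.softmax(out.logits[:, -1, :], dim=-1)   # Eq. 2
        s_t = torch.multinomial(p_next, num_samples=1)         # sample s_{t+1}
        generated = torch.cat([generated, s_t], dim=1)

print(tokenizer.decode(generated[0]))
```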

In order to efficiently update the model, as well as to generate controlled text, we bring in the idea of alternating optimization, or projected gradient descent, in the direction which optimizes the attribute category probability distribution p_{t+1}(a|w) projected on the sentence probability distribution p_{t+1}(w), as done in PPLM. Here, p_{t+1}(a|w) is the probability that the (t+1)-th word belongs to affect category a, and p_{t+1}(w) is the probability that the (t+1)-th word is a grammatically correct occurrence. PPLM allows plugging in any text generation model to optimize the loss associated with p_{t+1}(w). We use GPT-2 Medium since it has performed well for domain-specific text generation without the need for domain-specific training datasets. PPLM then perturbs this p_{t+1}(w) (by changing the history matrix H_t) given by the plugged-in model, such that the generated text has a higher probability of belonging to a given attribute or topic, where the topics are represented by a Bag of Words (BoW).


The following gradient descent (GD) step is performed to execute this perturbation:

H'_t = H_t - \eta \frac{\partial \mathrm{Loss}}{\partial H_t} \quad (3)

such that we get the perturbed next-word distribution p'_{t+1} as:

O'_{t+1}, H_{t+1} = \mathrm{LM}(s_t, H'_t) \quad (4)

s_{t+1} \sim p'_{t+1} = \mathrm{Softmax}(W \cdot O'_{t+1}) \quad (5)

The Loss in Equation 3 has two terms: a KL-divergence term, which keeps the perturbed next-word probability distribution close to the actual one, hence ensuring grammatical correctness; and a loss associated with the overall probability of words belonging to the given attribute.

\mathrm{Loss} = \mathrm{KLD}_{\mathrm{perturbed-unperturbed}} + \mathrm{Loss}_{\mathrm{topic}} \quad (6)

\mathrm{Loss}_{\mathrm{topic}} = -\log\Big(\sum (\mathrm{BoW}_{\mathrm{probs}})\Big) \quad (7)

In Equation 7, the \sum \mathrm{BoW}_{\mathrm{probs}} at time step t is defined as \sum_i p_t \cdot h_i, where h_i is a one-hot encoding of the i-th word from the topic's bag of words. These two loss components, when used for two sequential steps of GD, relate to an alternating optimization technique. Combining these two losses to perform a single step of GD has a similar effect, hence it is used to minimize the loss by perturbing H_t.
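Concretely, since each h_i is one-hot, the dot products reduce to indexing p_t at the topic BoW's vocabulary positions. A minimal sketch follows (topic_ids is an illustrative name for those indices):

```python
# Sketch of the topic loss in Eq. 7: sum the next-word distribution over
# the topic BoW's vocabulary indices and take the negative log.
import torch

def topic_loss(p_t: torch.Tensor, topic_ids: list) -> torch.Tensor:
    """p_t: 1-D next-word distribution over the vocabulary (Eq. 2)."""
    bow_prob = p_t[topic_ids].sum()   # sum_i p_t . h_i
    return -torch.log(bow_prob)       # Loss_topic
```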

3.2 Proposed Model

The architecture of our model is shown in Figure 1. The idea behind our model is similar to the attribute perturbation described above. We define a new loss term for the perturbation which steers the generation towards the required affect. It also provides an option to control the intensity of the emotion in the generated sentences. The new loss function is defined as:

\mathrm{Loss} = \mathrm{KLD\ Loss} + \mathrm{Loss}_{\mathrm{topic}} + \mathrm{Loss}_{\mathrm{affect}} \quad (8)

\mathrm{Loss}_{\mathrm{affect}} = -\log\big((\mathrm{BoW}_{\mathrm{probs}}) \cdot \mathcal{N}(\mathrm{affectInt}, \mathrm{knob}, \mathrm{var})\big) \quad (9)

In Equation 9, \mathcal{N}(\mathrm{affectInt}, \mathrm{knob}, \mathrm{var}) is a Gaussian function used to control the intensity of the affective text generation. Here, 'affectInt' represents the intensity values for the BoW corresponding to the emotion category, ranging from 0 to 1, where 1 is the maximum intensity. The knob (the mean of the Gaussian) scales up the values for words whose intensity is close to it and scales them down for words far from the mean, hence increasing the probability of the words with intensity values closer to the knob value. The var parameter provides flexibility in the intensity range to be picked. The dot product of the scaled intensity scores with the BoW probabilities is maximized during the optimization. This new loss in Equation 8 is then used to perturb the model history H_t, to increase the probability of words at the (t+1)-th iteration corresponding to the given emotion category and intensity.
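A minimal sketch of this loss is shown below, interpreting var as the Gaussian's variance; affect_ids and affect_int (illustrative names) hold the affect BoW's vocabulary indices and their NRC-EIL intensity scores.

```python
# Sketch of the affect loss in Eq. 9: a Gaussian centred at the knob
# up-weights affect words whose lexicon intensity is near the requested
# value; the weighted BoW probability mass is maximized (-log minimized).
import torch

def affect_loss(p_t, affect_ids, affect_int, knob=0.6, var=0.1):
    """p_t: 1-D next-word distribution; affect_int: tensor of scores in [0, 1]."""
    weights = torch.exp(-(affect_int - knob) ** 2 / (2 * var))  # N(affectInt, knob, var)
    return -torch.log((p_t[affect_ids] * weights).sum())        # Loss_affect
```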

To incorporate affect, we use the human-annotated affect information provided by NRC, which supplies both BoW components: emotion categories and emotion intensities. The NRC Emotion Intensity Lexicon (Mohammad, 2018) provides real-valued intensity scores for approximately 10,000 words belonging to eight basic emotions (anger, fear, sadness, joy, anticipation, trust, surprise, disgust). The intensity values range from 0 to 1, with 1 indicating the strongest association with the category and 0 being neutral. This scale defines the range for our knob in the loss defined in Equation 9.
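For illustration, the lexicon can be loaded into per-emotion bags of words roughly as follows, assuming the tab-separated release format of (term, emotion, intensity score) rows; the file name is a placeholder.

```python
# Sketch of building the affect BoWs from the NRC Emotion Intensity
# Lexicon, assuming tab-separated (term, emotion, score) rows.
from collections import defaultdict

affect_bow = defaultdict(list)   # emotion -> [(word, intensity), ...]
with open("NRC-Emotion-Intensity-Lexicon.txt") as f:
    for line in f:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 3:
            continue
        word, emotion, score = parts
        try:
            affect_bow[emotion].append((word, float(score)))
        except ValueError:       # skip a header row, if present
            continue
```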

We perturb the history as in Equation 3 and feed it to the LM as in Equation 4. During the implementation, we perform 3 iterations of these two steps to accumulate enough perturbation in H_t such that the loss is desirably reduced; this is finally used to sample the next word as shown in Equation 5. The convergence rate was set to η = 0.005.
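Schematically, one decoding step then looks as follows. This is a simplification, not the exact implementation: total_loss is a hypothetical helper computing Equation 8, and the history is taken to be GPT-2's per-layer (key, value) cache as exposed by the transformers API.

```python
# Schematic perturb-and-sample step (Eqs. 3-5). `past` is the layered
# (key, value) cache standing in for H_t; `total_loss(p)` is assumed to
# compute Eq. 8 (KLD + topic + affect) for a next-word distribution p.
import torch

ETA, N_ITERS = 0.005, 3   # convergence rate and iteration count from the paper

def perturbed_step(model, s_t, past, total_loss):
    for _ in range(N_ITERS):
        # make the cache a set of leaf tensors so gradients reach it
        past = tuple(tuple(t.detach().requires_grad_(True) for t in layer)
                     for layer in past)
        out = model(s_t, past_key_values=past)
        p = torch.softmax(out.logits[:, -1, :], dim=-1)
        total_loss(p).backward()
        past = tuple(tuple(t - ETA * t.grad for t in layer)   # Eq. 3
                     for layer in past)
    out = model(s_t, past_key_values=past)                    # Eq. 4 with H'_t
    p_pert = torch.softmax(out.logits[:, -1, :], dim=-1)      # Eq. 5
    return torch.multinomial(p_pert, num_samples=1), past     # sample s_{t+1}
```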

Our model can be used to generate affective sentences by choosing the desired emotion category. We support a varied range of 8 basic human emotions: joy, anger, fear, sadness, surprise, anticipation, disgust, and trust. Moreover, the user can select the emotion intensity on a continuous scale of 0 to 1, where 0 stands for a neutral sentence and 1 for the highest-intensity sentence possible (with grammatical correctness remaining intact) within the given category. Affect words whose intensity values are closer to the given knob value are more likely to be sampled. Some sentences generated by our model can be seen in Table 1.


Our Model
Prompt | Topic | Emotion | Knob (β) | Text
The President | Politics | Anger | 0.6 | The President's first budget is a huge tax cut for the rich and an increase in debt for our military. He's a disaster. — Senator Bernie Sanders (@SenSanders) December 17, 2017..
The President | Politics | Anger | 1.0 | The President's first budget is a huge tax cut for the rich and an increase in debt for our military. He's a war criminal who wants to destroy our economy and take away our freedom.
Once upon | Technology | Joy | 0.6 | Once upon a time, there was a game that was so great, so much fun you could play it over and over again.
Once upon | Technology | Joy | 1.0 | Once upon a time, there was a game that was so fun, so much fun ... I mean, so much FUN. I'm not exaggerating when I say that it was the best game in the world!
I feel so | No Topic | Sadness | 0.6 | I feel so bad for the kid that was killed. The guy was a great guy. He was the kind of guy that you'd want to hang out with.
I feel so | No Topic | Sadness | 1.0 | I feel so sad for the victims of the shooting. The news has been terrible, but we have to deal with it. I'm sorry that the shooting happened.

AffectLM
Prompt | Topic | Emotion | β | Text
The President | NA | Anger | 3 | the president of the country you know . i i i don't know i just hope that it 's just . it 's just going to kill innocent people . i i i feel like it 's ridiculous for the rest of the united states especially in the united states
The President | NA | Anger | 5 | the president of the country the fight against the united states hate the us and mass destruction attack and fight against killing you hate hate hate hate
Once upon | NA | Positive | 3 | once upon a year . i don't . oh . i think i i think i think ah i 'm not sure it 's like it 's the same way i 've seen it on the news .
Once upon | NA | Positive | 5 | once upon a time i just . i just do a lot of things with my family but it 's yeah it is my best friend is a . that 's not a good friend . yeah . yeah it it wouldn't be a good thing to do .
I feel so | NA | Sadness | 3 | i feel so sad that you know emotional feelings are losing you know . oh yeah . as a result of . you know . of course . you know . you know
I feel so | NA | Sadness | 5 | i feel so sad because i 've lost feelings lost feelings about losing feelings and feelings and feelings lose feelings feelings and feelings

Table 1: Examples of sentences generated by our model and AffectLM for different emotion categories, intensities and topics. More examples in Appendix A and Appendix B.

These examples illustrate that our model is able to incorporate different emotion categories and retain grammatical correctness in all sentences. We can also observe that the intensity conveyed by the sentences for the chosen emotion increases as we increase the knob value.

4 Model Evaluation and Experiments

We evaluated our model using two complementary evaluation approaches: automated evaluation and human evaluation. Several experiments were conducted based on these evaluations, as described next.

4.1 Automated Evaluations

We quantitatively evaluate our model for the perceived emotion intensity and grammatical correctness of the generated text. To evaluate the emotional intensity of a sentence, we use a pre-trained emotion prediction model, the Affective Norms model¹ (Koper et al., 2017). Given a sentence and its emotion (anger/fear/sadness/joy), the model estimates the intensity felt by the speaker on a scale of 0 to 1. It is essentially a Random Forest regressor, trained on manually labelled tweets, which we have adapted for our generated sentences. To evaluate the grammaticality of the generated text, we use perplexity, comparing the text generated by the models against GPT (Radford, 2018) as the ground truth; this approach is similar to the one used for evaluating PPLM (Dathathri et al., 2020).
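The perplexity measurement can be sketched as below, assuming the transformers implementation of the original GPT ("openai-gpt") as the reference model; the paper's exact scoring setup may differ.

```python
# Sketch of the perplexity evaluation: score generated text under a
# reference LM and exponentiate the mean per-token cross-entropy.
# Lower perplexity indicates more fluent, grammatical text.
import torch
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

tok = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
ref_lm = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt").eval()

def perplexity(text: str) -> float:
    ids = tok.encode(text, return_tensors="pt")
    with torch.no_grad():
        loss = ref_lm(ids, labels=ids).loss   # mean token cross-entropy
    return torch.exp(loss).item()
```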

4.2 Human Evaluations

We created 3 different tasks for human evaluations comprising over 400 sentences. Two graduate students fluent in English (and oblivious to this project) performed these tasks and evaluated our model and the competitor models.

¹The Affective Norms model (IMS EmoInt) was the second-best performing model in the Emotion Intensity Prediction task organized at the WASSA workshop 2017.


Tasks 1 and 2 were sentence emotion classification with 4 (positive emotion, anger, sadness, neutral) and 8 (anger, fear, sadness, joy, anticipation, trust, surprise, disgust) emotion classes respectively. Task 3 was aimed at studying grammatical correctness and perceived intensity.

Task 1: The annotators were presented with 72 text snippets (consisting of at most 2 sentences) equally distributed over each of the 3 classes (Sad, Angry, Positive Emotion) and generated by two models (AffectLM and our model). The association between a text snippet and its generator (model) was unknown to the annotators. The sentences belonged uniformly to 3 intensity values (β = 1, 2, 3 for AffectLM and knob = 0.4, 0.6, 1 for our model). We gave 'positive emotion' as one of the options for a fair comparison between the two models, since it is covered by both². We also gave a 'neutral' option in case annotators felt that no emotion was intended by the text, although all sentences were conditioned on one emotion category. The prediction accuracy was averaged over both annotators.

Task 2: The annotators were given 96 text snippets generated by our model, equally distributed over each of the 8 emotion classes and 3 intensity values (knob = 0.4, 0.6, 1). The prediction accuracy was averaged over both annotators.

Task 3: The annotators were given 39 sets of sentences. Each set comprised 6 sentences, and the annotators were informed of the input emotion category of each sentence. The annotators were asked to rate the grammatical correctness of each sentence on a 7-point Likert scale. Moreover, they were asked to rank the sentences from 1 to 6 based on the relative intensity of the emotion expressed, 6 being the highest intensity rank. We curated 3 sets for each of the 3 AffectLM emotion categories (Sad, Angry, Positive Emotion), the 2 sentiment categories (Positive, Negative) from PPLM, and the 8 emotion categories from our model. The ratings from both annotators were averaged, each for the grammatical correctness ratings and the intensity ranks.

We tested our model's performance in terms of its ability to generate text for a given emotion, its effectiveness in adapting to the emotional intensity, and the impact of changing these two parameters on the grammatical correctness of the generated text. These experiments are described in the next section.

[Figure 2 plots: human-annotated emotion intensity ranking (1–6) versus intensity control for (a) Our Model (knob 0.0–1.0; Anger, Sadness, Joy, Fear, Anticipation, Disgust, Surprise, Trust, and the average of 8 emotions), (b) AffectLM (beta 0–5; Anger, Sadness, Positive Emotion, and the average of 3 emotions), and (c) PPLM (stepsize 0.0–1.0; Negative Sentiment, Positive Sentiment, and the average of 2 sentiments).]

Figure 2: Human Perceived Intensity Evaluations

4.3 Comparison Experiments

We compared our model's performance with AffectLM as the baseline model on 3 common emotion classes: Sad, Angry, and Joy (our model)/Positive Emotion (AffectLM).

True emotion vs. predicted emotion: This experiment was conducted via human evaluations. Annotations from Task 1 were used for this experiment. Table 2 shows the results.

Intensity and grammatical correctness trend for generated text with varying knob value:
• Automated Evaluations: Figure 3(a), (b) shows the results for the impact on perplexity (a measure of grammatical correctness). Figure 4 shows the perceived intensity with changing model intensity.
• Human Evaluations: Annotations from Task 3 for the three common emotion classes of AffectLM and our model were used for this experiment. Figure 2(a), (b) shows the comparison for perceived intensity and Figure 3(d), (e) shows the comparison for grammatical correctness.

²The 'joy' category of our model is a subset of the positive emotion class.


We compared our model with PPLM (Dathathri et al., 2020) on text quality with increasing intensity across our 8 emotion classes and the 2 sentiment classes present in PPLM.

Intensity and grammatical correctness trend for generated text with varying knob value:
• Automated Evaluations: Figure 3(a), (c) shows the results for the impact on perplexity (a measure of grammatical correctness) with changing model intensity.
• Human Evaluations: Annotations from Task 3 for the 8 emotion classes of our model and the 2 sentiment classes from PPLM were used for this experiment. Figure 2(a), (c) shows the comparison for perceived intensity and Figure 3(d), (f) shows that for grammatical correctness.

[Figure 3 plots: top row, perplexity (automated evaluation; lower is better) versus intensity control for (a) Our Model (averages over 3 and over 8 emotions, knob 0.0–1.0), (b) AffectLM (average over 3 emotions, beta 0–5) and (c) PPLM (average over 2 sentiments, stepsize 0.0–1.0); bottom row, human-rated grammatical correctness on a 1–7 scale (higher is better) versus the same intensity controls for (d) Our Model's 8 emotions, (e) AffectLM's 3 emotions and (f) PPLM's 2 sentiments.]

Figure 3: Grammatical Correctness Evaluations

4.4 Independent Experiments

We evaluated our model independently for the 8 fine-grained emotion classes.

True emotion class vs. predicted emotion: This experiment was conducted via human evaluations. Annotations from Task 2 were used for this experiment. Table 2 shows the results.

Intensity and grammatical correctness trend for generated text with varying knob value:
• Automated Evaluations: Figure 3(a) shows the results for the impact on perplexity (a measure of grammatical correctness) with changing model intensity.
• Human Evaluations: Annotations from Task 3 for the 8 emotion classes of our model were used for this experiment. Figure 2(a) shows the results for perceived intensity and Figure 3(d) shows that for grammatical correctness.

[Figure 4 plots: automated emotion intensity rating versus intensity control for (a) Our Model (knob 0.0–1.0; Anger, Sadness, Joy; ratings roughly 0.40–0.50) and (b) AffectLM (beta 0–5; Anger, Sadness, Positive Emotion; ratings roughly 0.40–0.70).]

Figure 4: Automated Perceived Intensity Evaluations

Model | β (Knob) | Emotion Prediction Accuracy
AffectLM (3 emotions) | 1 | 0.392
AffectLM (3 emotions) | 2 | 0.542
AffectLM (3 emotions) | 3 | 0.542
Our Model | 0.4 | 0.833 (3 emotions), 0.359 (8 emotions)
Our Model | 0.6 | 0.625 (3 emotions), 0.344 (8 emotions)
Our Model | 1.0 | 0.625 (3 emotions), 0.281 (8 emotions)

Table 2: Human Evaluations on Classification Tasks (Task 1 and Task 2)

5 Analysis

We conducted 26 ANOVA analyses (two DVs for each of the 13 emotion/sentiment classes evaluated across the three models) with the human-annotated predicted emotion intensity and grammatical correctness ratings as Dependent Variables (DVs) and the knob value as the Independent Variable (IV). The inter-annotator agreement was calculated between the two annotators for each of the tasks using Krippendorff's α. For Task 1 and Task 2, we observed moderate agreement (α = 0.54 and α = 0.42, respectively), given the subjective nature of emotion perception. For Task 3, we observed strong agreement (α = 0.74) on the grammatical correctness ratings, while we obtained only α = 0.23 on the intensity rankings, reflecting the diversity in perceived emotion intensities. This is due to the subtleties in perceived intensity and the diversity observed in the generated text.
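As a concrete illustration of this analysis pipeline, the sketch below runs a one-way ANOVA (knob value as IV, ratings as DV) and computes Krippendorff's α for two annotators. The rating values are hypothetical toy data, and scipy plus the krippendorff package are our choice of tooling, not the authors' released code.

```python
# A minimal sketch (not the authors' code) of the Section 5 analysis:
# one-way ANOVA with knob value as IV and human ratings as DV, plus
# Krippendorff's alpha for two annotators. All ratings are toy values.
import numpy as np
from scipy.stats import f_oneway
import krippendorff  # pip install krippendorff

# Hypothetical human intensity ratings (1-7 scale) grouped by knob value
# for a single emotion class.
ratings_by_knob = {
    0.2: [3, 4, 3, 4],
    0.6: [5, 5, 4, 6],
    1.0: [5, 6, 6, 5],
}
# One sample per knob group; a small p-value indicates that perceived
# intensity varies with the knob (the paper's hypothesis).
f_stat, p_value = f_oneway(*ratings_by_knob.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Inter-annotator agreement: rows = annotators, columns = rated items,
# np.nan marks a missing rating. 'ordinal' suits 1-7 rating scales.
reliability_data = np.array([
    [3, 5, 6, 4, np.nan],  # annotator 1
    [4, 5, 5, 4, 6.0],     # annotator 2
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```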

5.1 Human Annotations for Classification

From Table 2, we can see that our model achieves its best classification scores at knob = 0.4. In the case of 8 emotions, this might be because across all eight emotions in Figure 2(a), the perceived intensity always increases at this value (notice that for Fear and Surprise, there is a peak at 0.4). In the case of 3 emotions, there is very little difference between the perceived intensity values at 0.4 and 0.6 for Sadness and Joy, hence the scores are reversed. Overall, we see that at knob = 1.0 there is a decrease in perceived intensity, hence the classification score drops. When compared to AffectLM on 3 emotions, our model seems to reflect emotions better at all intensity values. For the 8 emotion classes we get lower scores, yet still higher than random prediction across 8 classes (0.125). The reason could be the close association of some emotion classes (such as Sadness-Disgust, Anger-Disgust, Joy-Surprise), leading to misclassification.
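To make the accuracy numbers in Table 2 concrete, here is a minimal sketch of how per-knob emotion prediction accuracy can be computed from (knob, true emotion, annotator prediction) records; the record layout and values are hypothetical, not the actual annotation data.

```python
# Hypothetical layout: per-knob accuracy of annotator emotion predictions,
# as reported in Table 2 (Tasks 1 and 2). Records are (knob, true, predicted).
from collections import defaultdict

records = [
    (0.4, "anger", "anger"), (0.4, "joy", "joy"), (0.4, "sadness", "disgust"),
    (0.6, "anger", "anger"), (0.6, "joy", "surprise"),
    (1.0, "sadness", "sadness"), (1.0, "anger", "disgust"),
]

hits, totals = defaultdict(int), defaultdict(int)
for knob, true_label, predicted in records:
    totals[knob] += 1
    hits[knob] += int(predicted == true_label)

for knob in sorted(totals):
    print(f"knob={knob}: accuracy = {hits[knob] / totals[knob]:.3f}")
```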

5.2 Grammatical Correctness Evaluations

The ANOVA results for the grammatical correctness annotations (p < 0.65 for all emotion categories) show that the model maintains grammatical correctness as the intensity value increases. From Figure 3(a), (d) it is evident that predicted perplexity (lower is better) and human-annotated grammatical correctness (higher is better) both remain nearly constant, at decently low and high values respectively, in comparison to both baselines. Annotators commented that sentences from one of the models lacked overall structure and did not convey any coherent meaning (even though the texts came from a conversational context). Hence, the annotators were not convinced that those sentences were grammatically correct, as can be seen from the averaged annotations in Figure 3(e): the sentences contained 'emotion' words but did not cohere into meaningful text. The annotators were oblivious to the generating model, but the difference in generation quality was quite evident. Moreover, AffectLM does not train the model to be grammatically correct; rather, it directly increases the probability of words indicative of emotion. As a result, this perturbation destroys the grammatically optimal next-word probability distribution. When we compare our model's results with PPLM, we observe that the fluency scores in Figure 3(c), (f) start from values similar to ours at a low stepsize but eventually blow up. This is because using an optimisation parameter to control intensity is neither efficient nor user-friendly; we instead use the optimal convergence rate and let a separate loss term handle the intensity factor.
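As a rough guide to how the automated fluency numbers in Figure 3(a)-(c) can be produced, the sketch below scores a sentence by GPT-2 perplexity using the Hugging Face transformers library. This is the standard recipe for LM perplexity and not necessarily the paper's exact evaluation setup.

```python
# Sketch of an automated fluency check in the spirit of Figure 3(a)-(c):
# score generated sentences with a pretrained LM's perplexity (lower is better).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Mean token-level cross-entropy of the text under the LM, exponentiated.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(perplexity("I feel so bad for the kid that was killed."))
print(perplexity("hate hate hate kill mass mass destruction"))  # far higher
```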

5.3 Perceived Intensity Evaluations

Our Model:

Anger: The ANOVA test results show that our hypothesis is significant at p < 0.317. From Figure 2(a), we can see that the predicted intensity for Anger increases until knob = 0.6 and then decreases, but not below its value at knob = 0.4. This might be due to the subtlety of the changes observed with a 0.2 increase in knob value.

Sadness: The ANOVA test results show that our hypothesis is significant at p < 0.010. From Figure 2(a), we can see that the predicted intensity for Sadness increases with increasing intensity, except for a slight decrease at knob = 1.0, which falls to a value similar to that at knob = 0.6. This can be attributed to the subtle difference in emotional intensity.

Joy: The ANOVA test results show that our hypothesis is significant at p < 0.048. From Figure 2(a), we can see that the predicted intensity for Joy increases, except that it assumes nearly identical values at the last three intensities.

Fear: The ANOVA test results show that our hypothesis is significant at p < 0.639. From Figure 2(a), we can see that the predicted intensity for Fear does not change much across the intensity scale, hence the high p-value. The model tries to increase the intensity as much as possible for a given prompt while constraining the output to the grammatically correct region. It appears that the model sacrifices increases in emotion intensity to keep the grammar intact, which is evident for Fear in Figure 3(d).

Anticipation: The ANOVA test results show that our hypothesis is significant at p < 0.203. From Figure 2(a), we can see that the predicted intensity for Anticipation increases until knob = 0.6, then decreases at knob = 0.8 and increases again. Even though the differences were subtle and this emotion was more challenging to perceive than the rest, the annotators produced fairly good results.

Disgust: The ANOVA test results show that our hypothesis is significant at p < 0.0001. From Figure 2(a), we can see that the predicted intensity for Disgust was perceived very well up to knob = 0.8, which is reflected in the significance test as well. The inserted knob was able to manipulate this emotion quite accurately.

Surprise: The ANOVA test results show that our hypothesis is significant at p < 0.188. From Figure 2(a), we observed that the predicted intensity for Surprise did not seem to be manipulated very accurately. These results can be attributed to the trade-off between preserving grammaticality and increasing emotional intensity, as well as to differences too subtle to be noticed by a human annotator.

Trust: The ANOVA test results show that our hypothesis is significant at p < 0.067. From Figure 2(a), we can see that the predicted intensity for Trust consistently increased with increasing knob value on the whole, hence we get significant results for this category.

Affect LM: The ANOVA test results for the AffectLM classes were significant (Anger: p < 0.002, Sadness: p < 0.0002, Positive Emotion: p < 0.017), but taking the grammatical correctness ratings into account, the overall generation quality is not optimal. The reason behind the better intensity evaluation could be the repetitive use of the same emotion words (Table 1), which is undesirable for the applications in Section 1.

PPLM: The ANOVA test results for the PPLM classes were not as significant (Negative: p < 0.352, Positive: p < 0.415). From Figure 2(c), it is evident that the intensity is not well perceived, with a breakdown observed at high stepsize, where random characters and words were generated.
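To see why a purely word-level affect bias can inflate intensity scores while degrading structure, consider a toy version of the additive-bias idea behind AffectLM: next-word log-probabilities are shifted by β times a lexicon-based affect score, so at high β a handful of lexicon words dominate every step regardless of context. The vocabulary, probabilities, and affect scores below are illustrative values of ours, not AffectLM's actual parameters.

```python
# Toy illustration (not AffectLM's released code) of biasing next-word
# probabilities toward an affect lexicon: p(w) ~ p_LM(w) * exp(beta * affect(w)).
# At high beta the same lexicon words win every step, producing the
# repetitive "hate hate hate ..." outputs seen in Appendix B.
import numpy as np

vocab = ["the", "is", "a", "hate", "fear", "kill"]
log_p_lm = np.log(np.array([0.35, 0.25, 0.25, 0.06, 0.05, 0.04]))  # toy LM probs
affect = np.array([0.0, 0.0, 0.0, 1.0, 0.9, 0.8])  # toy anger-lexicon scores

def next_word_probs(beta: float) -> np.ndarray:
    logits = log_p_lm + beta * affect    # additive bias in log space
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

for beta in [0.0, 2.0, 5.0]:
    probs = next_word_probs(beta)
    print(f"beta={beta}: top word = {vocab[int(probs.argmax())]}, "
          f"P(hate) = {probs[vocab.index('hate')]:.2f}")
```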

5.3.1 Key Observations

From the aforementioned results, we can conclude that our model is able to manipulate the emotional intensity of the generated text explicitly for certain emotions and subtly for the remaining ones. It also ensures that grammatical correctness is not compromised when incorporating the given emotion and intensity. From Figure 2(a), we can see that the average perceived intensity across all emotions increases and then decreases slightly towards the end. In the automated evaluations in Figure 4(b) for 3 emotions, we see that AffectLM shows a consistent rise in intensity, but the emotion evaluation model did not account for grammatical correctness and assigned high scores to repeated emotion words. For our model, the automated evaluation (Figure 4(a)) shows a pattern similar to the human evaluation (Figure 2(a)). We observe a small increase in the intensity rating for the initial values of the knob, as well as consistency in generating sentences that are grammatically valid and convey useful meaning (as can be seen from the perplexity trend in Figure 3(a), averaged over 3 emotions).

6 Conclusion

In this paper, we present a novel model for affective text generation. The model augments SOTA text generation models with affective information without compromising grammatical correctness. As shown empirically, our model successfully incorporates the chosen emotion category and intensity, as well as the topic, and does not lose grammatical correctness at high intensities (unlike other models). It also provides the flexibility to generate sentences in 8 emotion categories, which is absent from all existing models in related domains. Our model has wide applications in fields like dialogue generation, therapy chatbots, and story and advertisement completion.


References

Prithviraj Ammanabrolu, Wesley Cheung, Dan Tu, William Broniec, and Mark O. Riedl. 2020. Bringing stories alive: Generating interactive fiction worlds.

Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, and Lili Mou. 2018. Affective neural response generation. In Lecture Notes in Computer Science, pages 154–166. Springer International Publishing.

A. Bartl and G. Spanakis. 2017. A retrieval-based dialogue system utilizing utterance and context embeddings. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 1120–1125, United States, December. IEEE.

Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language.

Yin Hei Chan and Andrew Kwok Fai Lui. 2018. Encoding emotional information for sequence-to-sequence response generation. In 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD). IEEE, May.

Pierre Colombo, Wojciech Witon, Ashutosh Modi, James Kennedy, and Mubbasir Kapadia. 2019. Affect-driven dialog generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Association for Computational Linguistics.

Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representations.

Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-LM: A neural language model for customizable affective text generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 634–642, Vancouver, Canada, July. Association for Computational Linguistics.

Sepehr Janghorbani, Ashutosh Modi, Jakob Buhmann, and Mubbasir Kapadia. 2019. Domain authoring assistant for intelligent virtual agent. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, pages 104–112, Richland, SC. International Foundation for Autonomous Agents and Multiagent Systems.

Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.

Maximilian Köper, Evgeny Kim, and Roman Klinger. 2017. IMS at EmoInt-2017: Emotion intensity prediction with affective norms, automatically extended resources and deep learning. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, Copenhagen, Denmark. Association for Computational Linguistics.

Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rocktäschel. 2019. A survey of reinforcement learning informed by natural language.

Saad Mahamood and Ehud Reiter. 2011. Generating affective natural language for parents of neonatal infants. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 12–21, Nancy, France, September. Association for Computational Linguistics.

François Mairesse and Marilyn Walker. 2007. PERSONAGE: Personality generation for dialogue. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 496–503, Prague, Czech Republic, June. Association for Computational Linguistics.

Raymond A. Mar, Keith Oatley, Maja Djikic, and Justin Mullin. 2011. Emotion and narrative fiction: Interactive influences before, during, and after reading. Cognition & Emotion, 25(5):818–833, August.

Saif M. Mohammad. 2018. Word affect intensities. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan.

Nora Moran and Rajesh Bagchi. 2019. The power of emotional benefits: Examining the role of benefit focus on donation behavior. Journal of Advertising, 48(3):284–291, May.

R. Plutchik. 1962. The Emotions: Facts, Theories, and a New Model. Doubleday Papers in Psychology. Random House.

Robert Plutchik. 1980. Chapter 1 - A general psychoevolutionary theory of emotion. In Robert Plutchik and Henry Kellerman, editors, Theories of Emotion, pages 3–33. Academic Press.

R. Plutchik. 1994. The Psychology and Biology of Emotion.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Alec Radford. 2018. Improving language understanding by generative pre-training.

Timo Spring, Jacky Casas, Karl Daher, Elena Mugellini, and Omar Abou Khaled. 2019. Empathic response generation in chatbots. In Proceedings of the 4th Swiss Text Analytics Conference (SwissText 2019), Winterthur, Switzerland, June.

Theodore R. Sumers, Mark K. Ho, Robert D. Hawkins, Karthik Narasimhan, and Thomas L. Griffiths. 2020. Learning rewards from linguistic feedback.


Appendix

A Examples Generated by our Model

Prompt | Topic | Emotion | Knob | Text
There exists | Science | Fear | 0 | There exists a vast, vast universe of material that is known as the material record, or MDR, that we can study and understand.
There exists | Science | Fear | 0.4 | There exists a theory of history that claims the United States, the most modern society in its history, was created by a race of people genetically altered in the laboratory.
There exists | Science | Fear | 0.6 | There exists a very good argument that the world is going to end, or is about to end. It's a frightening idea, and I've always felt that we are all doomed.
There exists | Science | Fear | 1.0 | There exists a very good argument that the US has been using chemical weapons against the population since the 1970s. In the case of chemical weapons, the evidence is quite overwhelming.
Once upon | Religion | Joy | 0 | Once upon a time, there were two types of people in America: those who were born into wealth, and those who were born into poverty.
Once upon | Religion | Joy | 0.4 | Once upon a time, a man named David had a dream. His dream was to be an inventor. The dream had a simple explanation. It said, "I want this."
To conclude | Politics | Anger | 0 | To conclude, the evidence is clear that the US has the ability to make its own decisions regarding what to do with the oil and gas in the ground, which is why we do this.
To conclude | Politics | Anger | 0.2 | To conclude, the evidence is clear that the US government is engaged in a systematic campaign of disinformation and propaganda against Russia.
To conclude | Politics | Anger | 0.6 | To conclude, the evidence is clear that the current policy of war is a terrible error of judgement. The United States is now facing a war that will be devastating to our nation and our people.
To conclude | Politics | Anger | 1.0 | To conclude, the evidence is clear that the US government is engaged in a systematic attack on whistleblowers. The US government is using a broad array of tactics to silence whistleblowers. They have attacked journalists with threats, attempted to jail them and imprisoned many others who exposed crimes against the country.
In brief | No topic | Anger | 0.4 | In brief, we are talking about a very small group of very high level people (like the President of the United States) who are trying to control the entire country.
In brief | No topic | Anger | 0.6 | In brief, we are talking about a very small group of very high level people (like the FBI, military, etc.) who are trying to kill a lot of people. The most likely targets are: 1) persons who commit murder..
I feel so | No topic | Sadness | 0 | I feel so sad for the girls that are still in high school. I'm sure that they will be able to go on to college without any problem.
I feel so | No topic | Sadness | 0.4 | I feel so bad for the kid that was killed. The guy was a great guy. He was the best of his age, he was just trying to do his thing.
Prior to this | Technology | Anticipation | 0.6 | Prior to this year, I had been a fan of the band. But this year, after spending a few days with them on the road, I'm really excited to get my hands on their new album 'Sleeping With the Enemy'.
Prior to this | Technology | Anticipation | 1 | Prior to this year, I had been a fan of the Nintendo Switch for over a year before it was released, but I've been waiting on the console to release and I'm excited to finally have it available to me.
To summarise | Politics | Disgust | 0.6 | To summarise: the current system of government is not designed for the economy of large, complex economies, but is designed to allow governments to do what they like to do best and that is to abuse their authority by abusing citizens.
To summarise | Politics | Disgust | 1 | To summarise: the current system of tax planning is flawed and needs reform. There is a growing body of evidence that the current taxation systems of many countries are deeply unfair and distortive of economic activity.
The pizza | No topic | Surprise | 0 | The pizza place is now open to the public! The pizza place is now closed and we are no longer accepting reservations.
The pizza | No topic | Surprise | 0.4 | The pizza place is now open to the public! The pizza place is now closed and we are no longer serving pizza! It is now open to the public! The pizza place is closed.
More importantly | Legal | Trust | 0.6 | More importantly, they're going to be getting a lot of free food. That will be the most important thing. You don't want to give them anything they don't need.
More importantly | Legal | Trust | 1 | More importantly, they're doing it in America, which is one of the greatest bastions of free speech in the world. The First Amendment protects freedom of assembly and expression.

B Examples Generated by AffectLM

Prompt | Topic | Emotion | Knob | Text
There Exists | NA | Anxious | 3 | there exists in the world . oh i bet . we think . oh yeah . oh you 're worried about it . yeah fear factor . yeah . yeah .
There Exists | NA | Anxious | 5 | there exists . you worry death factor death death factor factor fear factor factor fear factor fear factor fear factor fear factor
The President | NA | Anxious | 0 | the president of the country . yeah . i live in a small town . mhm . and we have to keep 'em . oh that 's great . and i don't know i can i can just imagine how you know if you know you 're just gonna have to have you know .
The President | NA | Anxious | 2 | the president of the country that 's in the in the middle east . oh yeah . and the terrorist attack is being in the united states . uhhuh . and ah uh .
Once upon | NA | Positive | 0 | once upon a time to get around the airport . i'd be willing to do that . okay . i would say that i went to the airport a lot and i was fortunate in the early seventies and
Once upon | NA | Positive | 2 | once upon a time . i think you have to do it for two hours . yeah it 's it 's easy it 's good and it 's it 's wonderful . it is it is it 's very important too .
The Pizza | NA | Positive | 0 | the pizza . mhm . oh the pizza . oh okay . i don't know if you know where you 're coming from but yeah it 's not it 's not that familiar with the world .
The Pizza | NA | Positive | 2 | the pizza . yeah . yeah so i guess it 's a nice place to eat at home . yeah yeah i have a sister in law that 's out here .
To review | NA | Anger | 0 | to review the ah . the the religious israeli . mhm . ah the religious leaders of the country uh . and ah . ah we have ah um . uh .
To review | NA | Anger | 2 | to review the whole thing . yeah . well . you know we haven't had anything to do with it . at that point of september us we don't want to be in the war and you know i hate to say that we 're going to fight it you know .
In brief | NA | Anger | 3 | in brief like in a situation like that i think it would be a a a a a a a not a threat to the united states but i think it should be that way .
In brief | NA | Anger | 5 | in brief fight . i hate hate hate hate hate hate hate hate hate hate kill hate fighting kill mass mass destruction i hate hate hate hate hate hate hate hate hate hate hell .
I feel so | NA | Sadness | 0 | i feel so bad for you . yeah yeah i 'm i 'm i 'm not i 'm not going to be able to get around it i 'm not going to stop . yeah . i know . oh you know it 's it 's so weird
I feel so | NA | Sadness | 2 | i feel so bad because . i don't know i think if you 're lying . um . i i i guess i have some feelings about holidays and family as well because i 'm i 'm a little bit more cautious about that .
Views on | NA | Sadness | 3 | views on things . i don't know i think i think someone that 's losing their feelings from the poor end of the world you know .
Views on | NA | Sadness | 5 | views on losing feelings feelings in feelings feelings about feelings feelings about gossip . gossip hurts hurt feelings