
PIF 2017, Leuven, May 29-30

ABSTRACTS KEYNOTE SPEAKERS

Towards understanding conversation: A psycholinguist's perspective

Antje S. Meyer

Max Planck Institute for Psycholinguistics and Radboud University, Nijmegen, The Netherlands

We generally experience everyday conversations as smooth and effortless. Corpus analyses are consistent with this impression as they have shown, for instance, that the modal duration of gaps between turns is only around 300 ms. As planning the first word of an utterance can easily take a second or more, this suggests that upcoming speakers already begin to plan their utterances while listening to the preceding speaker's utterance. A specific proposal concerning the temporal coordination of listening and speech planning has been made by Levinson and Torreira (2016, Frontiers in Psychology; Levinson, 2016, Trends in Cognitive Sciences). They propose that speakers initiate their speech planning as soon as they have understood the speech act and gist of the preceding utterance. In other words, speakers maximize the temporal overlap of listening and speech planning.

In my talk, I will first argue that the corpus data present firm evidence for the coordination of communicative actions, but not for tight coordination of utterance comprehension and speech planning. I will then briefly review studies demonstrating (a) that both comprehending spoken utterances and planning them require processing capacity and (b) that these processes can substantially interfere with each other. These data suggest that concurrent speech planning and listening should be cognitively quite challenging. In the third part of the talk I will turn to studies examining directly when utterance planning in conversation-like situations begins. The results of these studies are mixed and imply that more complex models than Levinson and Torreira's proposal are needed to capture how listening and speech planning are coordinated in conversation. In the final part of the talk, I will outline how psycholinguists might work towards such models.

Where do those semantic networks come from? Perceptual simulations and language statistics in cognition

Max M. Louwerse

Tilburg University, The Netherlands

Over the last few decades several theories have been proposed on the nature of semantic networks in cognition. The symbolic nature of the early proposals was heavily criticized by embodied cognition theorists, who argued that words can only become meaningful through perceptual simulations. Semantic networks without perceptual grounding are therefore useful computational tools but bear little resemblance to semantic networks in the human mind. In our earlier work we have argued that what seems to be a dichotomy between symbolic and embodied cognition (and between computational and psychological methods) is in fact a complementary approach to understanding semantic networks. This presentation will focus on where semantic networks come from, by arguing that language has been shaped to provide statistical linguistic cues to the meaning of words. The extent to which language users rely on these statistical cues and perceptual simulations is modulated by at least the cognitive task, the time course of processing, and the nature of the stimulus, as well as by individual differences.

External and internal language approaches to uncover structure in the mental lexicon

Simon De Deyne

University of Adelaide, Australia and University of Leuven, Belgium

Throughout our lives, we learn the meaning of thousands of words, mostly through exposure to language. This behaviour can be predicted by lexico-semantic models that track how words co-occur in language. Like humans, these models need to solve an important inference problem to learn the meaning of a word beyond the immediate linguistic context in which it occurs. While this inference problem in external language models can be tackled using low-dimensional representations (LSA, topics) or embeddings (word2vec), it is not clear whether this suffices to capture non-linguistic perceptual knowledge. One idea is that language itself might be considered a symbolic system that represents meaning via the relationships between (amodal) symbols, but that it is also capable of capturing modal representations, as these symbols refer to perceptual representations (Louwerse, 2011). If so, the question is to what degree these modal representations can be inferred from language.

In this talk, I will tackle this question by contrasting external and internal language models. External language models treat language as an entity that exists in the world, consisting of a set of utterances made by a speech community. An internal language model views language in terms of a stored mental representation, a body of knowledge possessed by the speakers (Taylor, 2012). Here we represent this internal knowledge as an empirical network derived from word association data. I will present an overview of studies using relatedness and similarity comparisons across a range of concepts to evaluate perceptual and emotive multimodal internal and external language models. Across these studies, external language models, compared to internal language models based on word associations, capture only part of the meaning of abstract, concrete and emotive concepts. Instead, the extra variance captured by internal models reflects more grounded representations for all these concepts.
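As a toy illustration of the contrast drawn here, the sketch below derives one relatedness estimate from word-association counts (an "internal" language model) and another from distributional vectors (an "external" language model), and compares the two. All words, counts and vectors are invented for the example; the actual studies rely on large-scale association norms and corpus-trained models.

```python
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

# Toy association counts: rows = cue words, columns = a shared set of
# possible responses. Real data come from large word-association datasets.
cues = ["dog", "cat", "freedom"]
assoc_counts = np.array([
    [30, 25, 0, 1],    # dog
    [28, 27, 0, 0],    # cat
    [0,  1, 35, 20],   # freedom
], dtype=float)

# Toy external-language embeddings (word2vec-style vectors); here just
# random stand-ins with a fixed seed for reproducibility.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in cues}

def internal_sim(i, j):
    """Relatedness from association response profiles (cosine similarity)."""
    return 1.0 - cosine(assoc_counts[i], assoc_counts[j])

def external_sim(wi, wj):
    """Relatedness from distributional embeddings (cosine similarity)."""
    return 1.0 - cosine(embeddings[wi], embeddings[wj])

pairs = [(0, 1), (0, 2), (1, 2)]
internal = [internal_sim(i, j) for i, j in pairs]
external = [external_sim(cues[i], cues[j]) for i, j in pairs]

# Comparing the two kinds of similarity estimates (and correlating each with
# human relatedness judgements) is how such models are evaluated.
print(spearmanr(internal, external))
```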

Modelling semantic memory: Some key issues and constraints

Gabriella Vigliocco

University College London, UK

The past twenty years or so have seen the development of different computational models of semantic representation, often based on the distributional hypothesis, namely that some aspects of word meaning can be captured by looking at co-occurring words in large linguistic corpora. More recently, the increasing size and heterogeneity of the information available (e.g., social media data, images), the advent of more powerful algorithms (e.g., neural networks) and the availability of large datasets of behavioural data (e.g., mega-studies of lexical decision, naming, semantic decision, etc.) have further ignited interest in these approaches. However, the cognitive plausibility of these models is not always clear and, moreover, it is often unclear how (some of) these models can provide novel theoretical insights.

In this talk, I will take a step back and discuss a series of key issues and constraints in modelling semantic memory that are not unique to distributional semantics but are crucial for any theory of meaning representation. These issues include: (1) the distinction between learning and processing; (2) whether our models are models of conceptual or lexical representation; (3) the degree of embodiment captured by these models; and (4) whether statistical co-occurrence is the core mechanism regardless of information type (e.g., linguistic, experiential). I will conclude my talk by discussing perhaps the most important issue that models of semantics need to take into account, namely, (5) the role of context as a multimodal source of information.
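As a minimal, illustrative rendering of the distributional hypothesis mentioned above, the sketch below builds word vectors from co-occurrence counts in a three-sentence toy corpus and compares them with cosine similarity; the corpus, window size and vocabulary are stand-ins for the large corpora that real models are trained on.

```python
from collections import Counter, defaultdict
import numpy as np

# Words are represented by the counts of the words they co-occur with
# inside a small window; the corpus and window size are toy choices.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the committee discussed the proposal",
]
window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[w][tokens[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})
vectors = {w: np.array([cooc[w][c] for c in vocab], dtype=float) for w in vocab}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words appearing in similar contexts end up with more similar vectors.
print(cos(vectors["cat"], vectors["dog"]))
print(cos(vectors["cat"], vectors["proposal"]))
```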

ABSTRACTS LECTURES

Talk Session 1

T1.1 Prediction and integration of semantics during L2 and L1 listening

Aster Dijkgraaf, Robert Hartsuiker and Wouter Duyck

Ghent University, Belgium

Prediction plays an important role in current theories of language production and comprehension (e.g. Pickering & Garrod, 2013). Recently, researchers have started to explore the question of whether and under what circumstances predictive processing occurs during L2 speech processing (e.g. Dijkgraaf, Hartsuiker, & Duyck, 2016; Ito, Corley, & Pickering, in press). The goal of the present study was to test whether bilinguals predict upcoming semantic information to the same extent in L1 and L2, and whether any L2 disadvantage in predictive processing is caused by a processing delay or rather by reduced spread of semantic activation in L2 due to weaker associative connections.

Dutch-English unbalanced bilinguals listened to sentences with variable cloze probability. 500 ms before the onset of the target word in the sentence, participants were shown a four-picture display while their eye-movements were measured. The display contained either a picture of the target object or of a semantic competitor, and three unrelated objects.

Fixations were biased towards the target object and the competitor object, relative to the unrelated pictures, before information from the target word could have affected fixations. This prediction effect occurred both in L1 and in L2. However, the bias towards the target object was stronger in L1 than in L2. The bias towards the competitor object was modulated by language and by the semantic distance between target and competitor: if the target and competitor were more strongly related, the bias of fixations on the competitor relative to unrelated pictures was larger. This effect was larger in L1 than in L2. A time-course analysis revealed that the effect of semantic distance between target and competitor on predictive looks to the competitor object started later in L2 than in L1. These results suggest that bilinguals predict semantic information in L2 and in L1, but that the spread of semantic activation during prediction is slower in L2.
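For readers unfamiliar with visual-world analyses, the sketch below shows how a fixation bias of the kind reported here could be computed from long-format eye-tracking samples; the column names, bin size and toy values are assumptions made only for illustration.

```python
import pandas as pd

# Hypothetical long-format fixation samples: one row per time bin per trial,
# coding which display region was being fixated.
fix = pd.DataFrame({
    "language": ["L1", "L1", "L1", "L2", "L2", "L2"],
    "time_bin": [-400, -400, -400, -400, -400, -400],  # ms relative to target onset
    "region":   ["target", "unrelated", "target", "target", "unrelated", "unrelated"],
})

# Proportion of samples on each region, per language and time bin.
counts = fix.groupby(["language", "time_bin", "region"]).size().rename("n").reset_index()
counts["prop"] = counts["n"] / counts.groupby(["language", "time_bin"])["n"].transform("sum")

# The prediction effect is a target (or competitor) advantage over the
# unrelated baseline before information from the target word arrives.
wide = counts.pivot_table(index=["language", "time_bin"], columns="region", values="prop")
wide["target_bias"] = wide["target"] - wide["unrelated"]
print(wide)
```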

References

Dijkgraaf, A., Hartsuiker, R. J., & Duyck, W. (2016). Predicting upcoming information in native-language and non-native-language auditory word recognition. Bilingualism: Language and Cognition, 1–14.

Ito, A., Corley, M., & Pickering, M. J. (in press). A cognitive load delays predictive eye movements similarly during L1 and L2 comprehension.

Pickering, M. J., & Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36(4), 329–347.

T1.2 Later lexical development in bilingual children

Anne White

University of Leuven, Belgium

The bilingual lexicon is not fully monolingual-like in either language. Research on the adult bilingual lexicon demonstrates convergence of naming patterns for common household objects to a partially merged pattern for a bilingual’s two languages. We present work on the developmental trajectory of naming of common household objects in Dutch-French bilingual and monolingual children. To this end, we collected naming data for a stimulus set of nearly 200 household containers from monolingual French-speaking children, monolingual Dutch-speaking children and French-Dutch simultaneous bilinguals in six different age groups (5-, 8-, 10-, 12- and 14-year-olds, and adults), totaling 499 participants. Multidimensional scaling analysis at the group level, using averaged naming patterns per age group, suggests that convergence is present at all ages, since the bilingual naming pattern is consistently situated in between both monolingual patterns. Apart from a language-related dimension, our representation also shows a clear age-related dimension representing the developmental trajectory. At the individual level, analyses taking into account individual naming patterns and pairwise between-subject correlations show that monolingual naming patterns in two different languages show a remarkable correspondence at younger ages (possibly driven by common strategies such as overextension of large categories). Over the course of development, the naming patterns of monolingual children diverge increasingly as they learn the language specificities of their mother tongue. Bilingual children start off with a fully converged naming pattern, which they maintain up to age 10, and start learning language-specific idiosyncrasies from age 12 onwards, thereby diverging their naming patterns, but never to the extent of monolinguals. These results seem to contradict the theories of both stable and gradual convergence proposed by Storms and colleagues. Based on these results, we propose an alternative theory for bilingual lexical development: the gradual divergence theory.
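The sketch below illustrates the kind of group-level multidimensional scaling analysis described above, using a handful of invented naming profiles; the real analysis operates on averaged naming patterns for the full stimulus set and all age groups.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical group-level naming profiles: for each group, the proportion
# of trials on which each of a handful of container names was used.
groups = ["NL mono 5y", "FR mono 5y", "bilingual 5y", "NL mono adult"]
naming = np.array([
    [0.70, 0.20, 0.10, 0.00],
    [0.10, 0.60, 0.20, 0.10],
    [0.40, 0.40, 0.15, 0.05],
    [0.85, 0.05, 0.10, 0.00],
])

# Dissimilarity between naming patterns (here: correlation distance).
dissim = 1.0 - np.corrcoef(naming)

# Two-dimensional MDS solution; convergence would show up as the bilingual
# profile lying between the two monolingual profiles.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for g, (x, y) in zip(groups, coords):
    print(f"{g:>15}: ({x:+.2f}, {y:+.2f})")
```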

T1.3

The multiple word association task in young language learners and its potential for the study of semantic knowledge development

Tessa Spätgens 1 and Rob Schoonen 2

1 University of Amsterdam, The Netherlands 2 Radboud University Nijmegen, The Netherlands

Free word association is frequently used to map the semantic networks of language users’ mental lexicons. De Deyne and his colleagues (De Deyne & Storms 2008, i.a.) have demonstrated the advantages of eliciting multiple associations compared to single associations in adults: more and qualitatively different semantic links are obtained in second and third responses. This exploratory study aims to extend these findings to young language learners. Acquisition research (e.g., Nelson, 2007) suggests that the development of semantic knowledge starts with a focus on context-dependent links, involving subjective relations and co-occurrence of concepts. This base is gradually expanded with context-independent knowledge, including intrinsic features and categorical relations. The multiple association task may shed more light on this developmental path by providing more detailed information on children’s preferences for specific semantic categories. Additionally, we administered a reading comprehension task to study the potential of the multiple association task as a measure of semantic knowledge that may be predictive of other language skills.

207 children (mean age 11;2) provided three associations to twenty frequent concrete nouns. The coding scheme used by De Deyne and colleagues was slightly adapted to match the categories that are important in acquisition. The main semantic categories are listed below, with an example for each, ranging from most context-dependent to most context-independent.

Subjective (personal): dog-cute
Situational (co-occurrence): dog-collar
Feature (intrinsic features): dog-fur
Taxonomic (categorical): dog-animal

The 12,420 categorized associations were analyzed using mixed effects modelling techniques, taking into account participant and item variation. As in adults, the number of taxonomic associations decreased across responses, while subjective and situational associations increased. However, contrary to adults, feature associations exhibited the same increasing pattern as taxonomic associations. Since the children provided fewer taxonomic associations overall, it appears that we observed an intermediate stage in which feature associations have a different status in the semantic network, perhaps serving as a stepping stone towards the inclusion of more context-independent links. Individual scores in each associative category were calculated and modelled as predictors of the reading scores. Children who were able to produce taxonomic associations as second and third responses were better comprehenders. Conversely, producing features as first responses coincided with worse reading performance, again suggesting a complementary developmental relation between feature-oriented and taxonomic knowledge. The findings highlight the potential of multiple associations for the study of semantic knowledge development.
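To make the analysis pipeline concrete, the sketch below computes category shares per response position and relates a per-child taxonomic score to reading comprehension; the data frame, its column names and the simple OLS model are illustrative stand-ins for the mixed-effects analyses reported above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coded association data: one row per response, with participant,
# response position (1-3), the semantic category assigned by the coding
# scheme, and the participant's reading comprehension score.
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "position":    [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "category":    ["taxonomic", "situational", "feature",
                    "feature", "taxonomic", "subjective",
                    "situational", "situational", "taxonomic"],
    "reading":     [34, 34, 34, 28, 28, 28, 31, 31, 31],
})

# Share of each category at each response position (cf. the decrease of
# taxonomic responses across positions reported above).
shares = (df.groupby(["position", "category"]).size()
            .groupby(level=0).transform(lambda s: s / s.sum()))
print(shares)

# Per-participant taxonomic score as a predictor of reading comprehension.
per_child = (df.assign(taxonomic=(df["category"] == "taxonomic").astype(int))
               .groupby("participant")
               .agg(tax_share=("taxonomic", "mean"), reading=("reading", "first"))
               .reset_index())
print(smf.ols("reading ~ tax_share", data=per_child).fit().params)
```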

T1.4

Immediate experience’ of phantoms: Problems with semantic psycholinguistic variables

Lewis Pollock

University College London, UK

Concreteness has historically been a very popular psycholinguistic variable, and is still frequently used as a proxy measure for some kind of ‘semantic’ psychological content. Over the last 5 decades, ‘concreteness effects’, whereby concrete stimuli enjoy processing advantages over abstract stimuli, have been repeatedly demonstrated in a variety of experimental paradigms by independent teams of researchers. These effects are taken as evidence that concreteness norms measure something that is neuropsychologically instantiated, and form the basis for various psychological accounts of conceptual representation and cognition.

However, I demonstrate that there are problems with the way that concreteness is operationalised in psycholinguistic experiments, and that these problems should make us cautious about relying on the concreteness variable to the extent that we have done so thus far. Most significantly, for a large number of items in the recently published Brysbaert et al. (2013) concreteness norm database of 40,000 words, the mean concreteness values do not reflect the judgements that individual participants made about them. Operationalising concreteness by using a numerical scale gives the illusion that participants actually treat concreteness as a scale. In reality, for all items with mean values located in the middle of the scale, participants were really making binary judgements and disagreeing with each other about whether a word was concrete or abstract. More worryingly, a survey of concreteness experiments employing various paradigms shows that the stimuli that make up the ‘abstract’ conditions in these experiments were not actually abstract. Instead, they are exactly those stimuli whose mean concreteness values do not reflect participants’ judgements. We might hope that the concreteness variable is a special case: perhaps other ‘semantic’ psycholinguistic variables do not suffer from these problems, and so they can be used instead of concreteness. I present preliminary evidence that unfortunately this is not true. Imageability ratings and more recently developed multimodal ratings exhibit exactly the same problematic distribution as concreteness scores.

I report two list memory experiments that controlled for this binary disagreement phenomenon and maximised the contrast in concreteness across conditions. Neither experiment produced a concreteness effect, which suggests that the issues I raise here should be taken seriously. However, there is some positive news, because there is a way around these problems: the use of newly emerging large-scale stimuli databases and stimuli selection algorithms can ensure that future experimental research avoids the vulnerabilities I identify.
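The "binary disagreement" pattern described above can be made concrete with a small check on per-participant ratings: for items with a mid-scale mean, most individual ratings sit at the scale endpoints. The words and ratings below are invented for illustration; the argument itself rests on the published norm data.

```python
import numpy as np

# Hypothetical per-participant concreteness ratings (1-5 scale) for two
# items with different rating distributions.
ratings = {
    "justice": np.array([1, 1, 2, 1, 5, 5, 1, 5, 5, 4]),   # mid mean, bimodal
    "spoon":   np.array([5, 5, 4, 5, 5, 4, 5, 5, 4, 5]),   # high mean, unimodal
}

for word, r in ratings.items():
    # Share of raters near the scale endpoints: a crude index of the
    # disagreement pattern in which a mid-scale mean hides participants
    # split between 'abstract' and 'concrete' judgements.
    at_extremes = np.mean((r <= 2) | (r >= 4))
    print(f"{word}: mean = {r.mean():.1f}, share of extreme ratings = {at_extremes:.2f}")
```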

Talk Session 2

T2.1 Chained melody: Low‐level acoustic cues as a guide to hierarchical structure in speech comprehension

Antony Scott Trotter, Rebecca L.A. Frost and Padraic Monaghan

Lancaster University, UK

To accurately process and respond to speech requires rapidly unpacking its structural dependencies as well as comprehension of meaning. Theoretical approaches are divided as to whether syntactic processing requires hierarchical phrase structures or lower-level statistical correspondences. In this study, we investigated the extent to which prosody – the rhythmic and melodic aspects of speech – may support low-level processing of complex syntactic structures. We hypothesised that elements of a sentence that contain syntactic dependencies would be similar in terms of pitch, enabling grouping according to the gestalt similarity principle.

We analysed data from American English speakers (n = 64) spontaneously producing either passive (e.g., [the bear]1 [held]2 [by the girl]3 [is green]4) or hierarchical centre-embedded (HCEs, e.g., [the bear]1 [the girl]2 [held]3 [is green]4) sentences elicited using a picture description task, and divided them into four phrase positions (see indices in the examples). Using linear mixed effects modelling, we found a smaller pitch decrease from first to second positions in passives, and a smaller pitch decrease from second to third positions in HCEs. These different pitch progressions enable the similarity gestalt to support syntactic dependencies and deter interpretations consistent with canonical word order in passives, consistent with low-level computations driving syntactic processing.
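The descriptive core of this analysis, the change in pitch between adjacent phrase positions for the two structures, can be sketched as below; the F0 values are invented, and the real analysis models speaker and item variation with linear mixed effects models.

```python
import pandas as pd

# Hypothetical mean F0 values (Hz) per phrase position, for passive and
# hierarchically centre-embedded (HCE) productions.
pitch = pd.DataFrame({
    "structure": ["passive"] * 4 + ["HCE"] * 4,
    "position":  [1, 2, 3, 4] * 2,
    "f0":        [210, 205, 190, 175,      # passive: smaller drop from 1 to 2
                  212, 195, 192, 176],     # HCE: smaller drop from 2 to 3
})

# Pitch change between adjacent phrase positions, per structure.
pitch = pitch.sort_values(["structure", "position"])
pitch["delta_f0"] = pitch.groupby("structure")["f0"].diff()
print(pitch)
```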

T2.2

An effect of lexical cueing on retrieval of sentence structure from explicit memory

Chi Zhang 1, Sarah Bernolet 2 and Robert J. Hartsuiker 1

1 Ghent University, Belgium 2 University of Antwerp, Belgium

Structural priming refers to the tendency for speakers to repeat a previously experienced sentence structure. One frequently observed phenomenon in structural priming is that the repetition of lexical items between prime and target enhances structural repetition (i.e., the lexical boost). In contrast to an early theory, which attributed the lexical boost and structural priming to a unified model, a growing body of research posits a multifactorial account by which an error-based learning process and a lexically driven, memory-related process jointly contribute to structural priming. This account assumes that lexical repetition serves as a cue for speakers to retrieve sentence structure from explicit memory, which enhances structural repetition in tandem with implicit learning processes (e.g., Hartsuiker et al., 2008). Here we tested this assumption by investigating the effect of a lexical cue on the retrieval of sentence structure. In a sentence recall task, 48 subjects were instructed to memorize sentences with either a genitive construction (s-genitive/of-genitive) or a transitive construction (active/passive), and to recall the exact sentence after two intervening filler sentences. In the CUE condition, a noun from the to-be-recalled sentence was provided in the recall trial, while in the NOCUE condition, no cue was given. The position in the constituent structure (head noun/argument noun) and the linear position (first noun/second noun) of the lexical item were also manipulated. Lexical cues seemed to facilitate correct structural recall: speakers recalled more correct syntactic structures when they were cued by lexical items. The effect was consistent across constructions, regardless of the original constituent position and the linear position of the lexical cues. Interestingly, the likelihood of correct structural recall among valid answers did not differ across cueing conditions, which indicates that the appearance of cues did not influence the structural bias (i.e., the frequency of both alternatives increased proportionally in the cue condition, while the frequency of other (non-valid) responses decreased). These findings suggest a possible interpretation of the lexical cueing effect on sentence retrieval from memory: lexical repetition reinforces a shared representation between sentence encoding and recall that is unspecified for structure (perhaps a semantic representation), so that a cue will facilitate the regeneration of the memorized sentence while the structural bias of the sentence recall remains unchanged. This assumption will be further tested in a study that compares the cueing effect between structural priming and sentence structure recall.

T2.3

Encoding vs. decoding. Why do language users make sentence structure explicit?

Dirk Pijpops

University of Leuven, Belgium

Language is shaped by processing pressures from production, or encoding, and reception, or decoding (Hawkins 2004). Evidence from psycholinguistic experiments indicates that when both pressures counteract one another, the latter generally takes precedence (see a.o. Ferreira and Dell (2000) and references cited therein). In this study, we aim to complement this work with corpus research. The case study employed concerns the Dutch verb zoeken ‘to search’, where language producers have the choice whether or not to explicitly mark the object using the preposition naar ‘to’, as in (1)-(2).

(1) We zoeken alternatieven. (Sonar corpus, Oostdijk et al. 2013, WR-P-P-G-0000254655.p.11.s.5)

‘We are looking for alternatives.’

(2) Wij zoeken dan wel naar alternatieven. (Sonar corpus, Oostdijk et al. 2013, WR-P-P-G-0000488037.p.6.s.3)

‘We, then, look for alternatives.’

Figure 1: As the object becomes more complex, naar is more likely to be expressed.1

Using data from the Sonar corpus, we find that the likelihood of naar increases as the object becomes more complex (Figure 1). There are at least three possible ways to explain this relation, however. The first is that the strictly unnecessary preposition helps the addressee decode the sentence, and expressing the preposition is therefore especially called for when the object is complex (cf. Rohdenburg's (1996) Complexity Principle). The second is that naar functions as a way to buy time for the producer to formulate a complex object. Finally, the third states that naar is preferred with more complex objects because it allows the producer to extrapose such objects to postfield position. This study will attempt to disentangle these three possible explanations.

References

Ferreira, Victor and Gary Dell. 2000. Effect of Ambiguity and Lexical Availability on Syntactic and Lexical Production. Cognitive Psychology 40(4). 296–340.

Hawkins, John. 2004. Efficiency and complexity in grammars. Oxford: Oxford University Press.

Oostdijk, Nelleke, Martin Reynaert, Véronique Hoste and Ineke Schuurman. 2013. The Construction of a 500-Million-Word Reference Corpus of Contemporary Written Dutch. Theory and Applications of Natural Language Processing. 219–247.

Rohdenburg, Günter. 1996. Cognitive Complexity and Increased Grammatical Explicitness in English. Cognitive Linguistics 7(2). 149–182.

1 Effect plot of Object Complexity in a logistic regression model controlling for country and corpus component (p < 0.0001, coefficient = 0.41). Object Complexity was measured as the natural logarithm of the number of words of the object. The grey area represents confidence intervals.
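A bare-bones version of the model described in this note could look as follows; the data are simulated, and the sketch omits the country and corpus-component predictors of the reported analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical corpus observations of 'zoeken': whether 'naar' was expressed
# and how many words the object contains (toy data-generating parameters).
rng = np.random.default_rng(1)
n = 400
obj_len = rng.integers(1, 12, size=n)
log_len = np.log(obj_len)
p_naar = 1 / (1 + np.exp(-(-1.0 + 0.4 * log_len)))
data = pd.DataFrame({
    "naar": rng.binomial(1, p_naar),
    "log_object_length": log_len,
})

# Logistic regression: probability of expressing 'naar' as a function of
# object complexity (log number of words), as in the effect plot above.
model = smf.logit("naar ~ log_object_length", data=data).fit(disp=False)
print(model.params)
```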

T2.4

ERP study on the effect of linear distance in gender agreement processing in Dutch

Aida Salčić, Srđan Popov and Roelien Bastiaanse

University of Groningen, The Netherlands

Several factors have been identified as playing an integral role in computing syntactic agreement: the retrieval and checking of morphological features (e.g., number, person, or gender) and establishing dependencies between elements of varying distance, which is the focus of the present study. While most theories of sentence processing focus on the effect of long-distance dependencies in sentence comprehension (e.g., Dependency Locality Theory, or DLT – Gibson, 2000), only a handful of studies have explored the impact of short linear distance on agreement computation (i.e., the number of intervening words between agreeing elements; e.g., Alemán Bañón, Fiorentino, & Gabriele, 2012). The present study aims to bridge this gap by exploring the influence of linear distance on establishing within-phrase (determiner phrase, or DP) gender agreement in Dutch. To this end, we used event-related potentials (ERPs) in order to obtain a unique insight into the temporal domain of sentence processing during reading. The stimuli consisted of grammatical and ungrammatical sentences with a gender violation in one of two conditions (article-noun or adjective-noun condition). The results of the study show that linear distance does influence the processing of gender agreement violations within the determiner phrase in Dutch, as evidenced by the elicitation of the P600 effect for gender violations – a component associated with syntactic reanalysis and repair. However, we also found that the larger the linear distance in a gender agreement relationship, the smaller the P600 effect, which runs against mainstream sentence processing theories, like the DLT (Gibson, 2000). These results are explained by positing that the parser is less sensitive or less inclined to detect violations when the distance between elements is increased or in the presence of an interfering element (e.g., Kaan, 2002). Finally, we will discuss how the results of the current study provide the foundation for prospective ERP research on gender agreement processing in adult native speakers of Dutch with dyslexia.
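As a rough illustration of how a P600 effect is quantified, the sketch below computes the violation-minus-grammatical mean amplitude in a late time window for two distance conditions; the sampling rate, window and simulated waveforms are assumptions for illustration only, not the study's actual preprocessing pipeline.

```python
import numpy as np

# Hypothetical single-channel ERP averages (microvolts), sampled at 250 Hz
# from -200 to 1000 ms, for grammatical and violation conditions at two
# levels of linear distance.
fs = 250
times = np.arange(-0.2, 1.0, 1 / fs)
rng = np.random.default_rng(2)

def toy_erp(p600_size):
    erp = rng.normal(0, 0.3, size=times.size)
    erp[(times >= 0.5) & (times <= 0.8)] += p600_size   # late positivity
    return erp

conditions = {
    ("short", "violation"): toy_erp(2.0),
    ("short", "grammatical"): toy_erp(0.0),
    ("long", "violation"): toy_erp(1.0),    # smaller P600 at longer distance
    ("long", "grammatical"): toy_erp(0.0),
}

# P600 quantified as mean amplitude in a 500-800 ms window; the effect is
# the violation-minus-grammatical difference per distance condition.
win = (times >= 0.5) & (times <= 0.8)
for dist in ("short", "long"):
    effect = (conditions[(dist, "violation")][win].mean()
              - conditions[(dist, "grammatical")][win].mean())
    print(f"{dist} distance: P600 effect = {effect:.2f} µV")
```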

References

Alemán Bañón, J., Fiorentino, R., & Gabriele, A. (2012). The processing of number and gender agreement in Spanish: An event related potential investigation of the effects of structural distance. Brain Research, 1456, 49-63.

Gibson, E. (2000). The dependency locality theory: A distance-based theory of linguistic complexity. In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, Language, Brain (pp. 95-126). Cambridge, MA: MIT Press.

Kaan, E. (2002). Investigating the effects of distance and number interference in agreement processing: An ERP study. Journal of Psycholinguistic Research, 31, 165-193.

Talk Session 3

T3.1 An eye movement analysis of L1 and L2 studying.

Nicolas Dirix, Heleen Vander Beken, Ellen De Bruyne, Marc Brysbaert, Martin Valcke and Wouter Duyck

Ghent University, Belgium

In academia, the importance of English as a medium of instruction is increasing. For example, many international, English editions of handbooks find their way to the (mandatory) book lists of university students. For Dutch universities, this obviously means that students will have to study from second language (L2) handbooks. Investigating the impact of L2 studying (for example on test scores, study behavior, motivation,…) has only recently found its way to the field of psycholinguistics (e.g., Vander Beken & Brysbaert, in press). Usually, L2 text is processed more slowly than native language (L1) text, which under certain circumstances also results in lower L2 test scores. In the current study, we wanted to investigate the eye movement patterns of students studying texts in L1 and L2, to gain a better understanding of the (differences in) reading processes involved in L1 and L2 studying.

A few studies have already investigated eye movements during L1 studying (e.g., in comparison to other reading purposes; Yeari et al., 2014), but this is the first eye movement study to investigate L1 and L2 differences in studying, as well as differences between L2 reading and studying.

The eye movements of eighty participants were recorded while they read four texts (two in L1 and two in L2). Half of the participants studied the texts; the other half were instructed to read them “for information”. All participants received a test about the texts afterwards, consisting of true/false statements.

The results showed that in general, and as expected, participants spent more time studying than reading, both for L1 and L2. However, the language x reading type interaction was significant for several dependent measures, pointing towards an additional slowing down for L2 studying compared to L1. Total reading times were 20% longer and about 15% more fixations were made. However, this time was well spent, as the test scores were identical for L1 and L2 studying. We also found influences of several covariates on reading times and test scores (e.g., sentence complexity, reading motivation).
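The language × reading type interaction reported above can be sketched with a simple linear model on per-participant totals; the toy data and plain OLS fit below stand in for the richer eye-movement measures and mixed-effects analyses with covariates used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant, per-language totals (seconds); reading task
# is between participants, language is within participants.
em = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "language":    ["L1", "L2"] * 4,
    "task":        ["read", "read", "read", "read", "study", "study", "study", "study"],
    "total_time":  [210, 235, 200, 228, 310, 385, 295, 360],
})

# A language x task interaction term captures the additional L2 slowdown
# for studying relative to reading.
m = smf.ols("total_time ~ language * task", data=em).fit()
print(m.params)
```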

T3.2

Memory for texts in a second language: Recall cost or production cost?

Heleen Vander Beken and Marc Brysbaert

Ghent University, Belgium

With academic internationalisation at full speed, English is increasingly used as a medium of instruction in higher education. The question arises whether unbalanced bilinguals remember study materials in a second language (L2) as well as in a first language (L1).

In previous studies, we found a disadvantage for students recalling short, expository texts in L2 compared to L1, but no such disadvantage for a true/false recognition test. To find out whether this recall cost was caused by a lack of L2 production skills or by a weaker mental model of the text in English, we decided to test recognition memory over the long term. If the memory traces are weaker, the rate of forgetting ought to be higher in English. In that study, we found no significant difference between languages in recognition memory after a day, a week, or a month. This suggested that the recall cost does not originate in the storage or retrieval process, but in the production process.

In this study, we proceeded with a cross-lingual examination of short handbook texts. In the first experiment, students (N = 68) received one text in English and one in Dutch, and were asked to answer open questions about the contents in Dutch, to test whether there was a comprehension difference. No significant difference was found. In the second experiment, students (N = 78) received both handbook texts in English, and were asked to answer open questions about the contents in Dutch for one text and in English for the other, to test whether there was a production difference. No significant difference was found either. In a third experiment, students (N = 145) received a longer academic text in English. They received a test in Dutch or English (between participants) containing open questions and multiple-choice questions. These data will increase our insight into whether the recall cost that we initially found was a test-specific effect, a text-specific effect, or due to another factor.

T3.3

“That's a spatelhouder!”: How source memory is influenced by speakers' social categories in a word-learning paradigm

Sara Iacozza 1,2, Antje S. Meyer 1,3 and Shiri Lev-Ari 1

1 Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands 2 IMPRS for Language sciences, Nijmegen, The Netherlands 3 Radboud University, Nijmegen, The Netherlands

Previous literature has repeatedly shown that we spontaneously encode people's social features when processing information they provide. This phenomenon leads to specific biases in source memory. That is, people are more likely to misattribute utterances to members of the same social category as the correct speaker than to individuals who belong to different categories. Further evidence suggests that this mechanism depends on how self-relevant the speaker's social category is for the perceiver: the more self-relevant, the more likely it will be encoded along with the information provided. Analogously, Sumner et al. (2014) have suggested that the social weight ascribed to particular speakers and contexts modulates how linguistic input is processed and stored.

In this study, we tested whether social information indeed influences word learning. Specifically, we tested whether the use of social information depends on (a) its relevance for the learner and (b) the in-group status of the speakers. Participants (N=124) learned competing novel labels for novel gadgets from speakers who supposedly attend different schools. Crucially, half of the participants learned labels from speakers attending their own school (in-group) or a neighboring school (out-group), and the other half learned from speakers attending self-irrelevant schools. Later, participants were tested on their memory regarding who produced which label (see Fig. 1). Results indicated that learners encode the speakers' school when learning the labels, as indicated by greater within-school confusion than between-school confusion (β=0.14, SE=0.04, z=9.88, p=.0001), but significantly more so when the schools are self-relevant than when they are not (β=-0.08, SE=0.02, z=-2.41, p=.016). Results also showed that participants were marginally more likely to rely on school affiliations when learning from in-group vs. out-group members, but only when individual differences were controlled for (β=-0.05, SE=0.03, z=-1.66, p=0.096).

Our results demonstrate that speakers’ social categories are indeed encoded along with the novel lexical items, but mostly when the social information is self-relevant for the learners. These results indicate that language learning and processing depend not only on the input we receive but also on who we receive it from.

Figure 1: Schematic illustration of the two tasks of the experiment.
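The source-memory confusion measure at the heart of these analyses can be illustrated as follows; the data frame and its column names are hypothetical, and the published analysis uses mixed-effects models rather than raw rates.

```python
import pandas as pd

# Hypothetical source-memory responses: for each trial, the school of the
# speaker who actually produced the label, the school of the speaker the
# participant chose, and whether the exact speaker was correct.
src = pd.DataFrame({
    "true_school":     ["A", "A", "A", "B", "B", "B", "A", "B"],
    "chosen_school":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "correct_speaker": [True, False, False, True, False, False, False, True],
})

# Source confusion is computed over incorrect speaker attributions only:
# was the wrongly chosen speaker from the same school as the true speaker?
errors = src[~src["correct_speaker"]]
within_school = (errors["chosen_school"] == errors["true_school"]).mean()
print(f"within-school misattribution rate:  {within_school:.2f}")
print(f"between-school misattribution rate: {1 - within_school:.2f}")
```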

References

Sumner, M., Kim, S. K., and King, K. B. M. (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology, 4, 1–13.

T3.4 Choosing is losing: Cognitive and affective processes in foreign language decision-making

Evy Woumans, Sofie Ameloot and Wouter Duyck

Ghent University, Belgium

Decision-making is part of everyday life, from choosing what to wear in the morning to deciding whether or not that new piece of furniture is worth the cost. We tend to believe that we are in charge of our own decisions and that nothing would subconsciously change them. Nevertheless, a recent study by Costa et al. (2014) shows that the language in which problems are presented may affect our judgement. The study observed that participants are more inclined to make utilitarian decisions when presented with the footbridge dilemma if it is framed in their foreign language (FL) and not their native language (NL). The reason for this is not yet entirely clear, but affective as well as cognitive processes are thought to underlie the effect, although the latter have been largely overlooked by previous FL research.

Our aim was to determine the role of both affective and cognitive processes in the FL effect. We employed three personal dilemmas with varying levels of emotionality (i.e. Footbridge, A Father’s Choice, and Sophie’s Choice), hypothesising the FL effect would be more robust in dilemmas with higher levels of emotionality, but disappear when the levels got excessively high. Furthermore, we explored the cognitive basis of the FL effect, including both a second (L2) and third (L3) language condition, supposing that the cognitive demands in the L3 condition would interfere with the FL effect.

Results showed that participants were more utilitarian in the FL as opposed to the NL condition, but only in the least emotional dilemma (Footbridge). Furthermore, the effect seemed to disappear in the more cognitively challenging L3 condition, indicating that cognitive resources were too caught up with FL processing to make a deliberative decision. Hence, we believe that the FL effect is dependent on both affective and cognitive interacting processes. Also, we propose that the FL effect follows the curve of an inverted U, only appearing when the level of emotionality in the dilemmas is sufficiently high, but disappearing again when it becomes overly high.

References

Costa, A., Foucart, A., Hayakawa, S., Aparici, M., Apesteguia, J., Heafner, J., & Keysar, B. (2014). Your morals depend on language. PLoS ONE, 9(4), e94842.

T3.5

Exploring the Relational Responding Task (RRT) as a new measure of language attitudes

Laura Rosseel, Dirk Geeraerts and Dirk Speelman

University of Leuven, Belgium

For decades, quantitative language attitude research has seen little methodological innovation (Speelman et al. 2013). Yet, in the last few years, linguists have started to overcome this deadlock and have turned towards social psychology for new attitude measures. The Implicit Association Test (IAT), in particular, has proven a successful new addition to the sociolinguist’s toolbox (e.g. Campbell-Kibler 2012; Rosseel et al. 2015). Despite its relative success, the IAT has a number of limitations, such as the fact that it measures the association between two concepts (e.g. ‘I’ and ‘skinny’) without controlling for the relationship between those two concepts (e.g. ‘I am skinny’ vs. ‘I want to be skinny’). The Relational Responding Task (RRT), a novel implicit attitude measure recently developed by social psychologists (De Houwer et al. 2015), addresses exactly that limitation by presenting participants with full propositions expressing beliefs rather than loose concepts.

In this paper, we will present research that explores the RRT as a novel measure of language attitudes. In our study, we investigate the social meaning of two varieties of Dutch: Standard Belgian Dutch (SBD) and tussentaal, a more colloquial variety which, according to some, is spreading and may be competing with SBD in certain contexts (Grondelaers & Speelman 2013). It has been hypothesized that the rise of tussentaal is enabled by a new, modern type of dynamic prestige which competes with the traditional prestige of SBD. We use the RRT to check whether speakers indeed associate the two varieties with different types of prestige. In addition to presenting the results of this study, our paper will reflect upon the usefulness of the RRT as a new measure for (socio)linguists to study the social meaning of language variation.

References

Campbell-Kibler, K. (2012). The Implicit Association Test and sociolinguistic meaning. Lingua, 122(7): 753–763.

De Houwer, J., Heider, N., Spruyt, A., Roets, A., & Hughes, S. (2015). The relational responding task: toward a new implicit measure of beliefs. Frontiers in Psychology, 6, Article 319.

Grondelaers, S., & Speelman, D. (2013). Can speaker evaluation return private attitudes towards stigmatised varieties? Evidence from emergent standardisation in Belgian Dutch. In T. Kristiansen & S. Grondelaers (Eds.), Language (De)standardisations in Late Modern Europe: Experimental Studies, 171–191. Oslo: Novus.

Rosseel, L., Speelman, D., & Geeraerts, D. (2015). Can social psychological attitude measures be used to study language attitudes? A case study exploring the Personalized Implicit Association Test. In Proceedings of the 6th Conference on Quantitative Investigations in Theoretical Linguistics.

Speelman, D., Spruyt, A., Impe, L., & Geeraerts, D. (2013). Language attitudes revisited: Auditory affective priming. Journal of Pragmatics, 52: 83–92.

Keywords: social meaning of language variation; language variation and change; Dutch; language attitudes; RRT

Talk Session 4

T4.1 Audiovisual recalibration of vowel categories

Matthias K. Franken, Frank Eisner, Jan-Mathijs Schoffelen, Daniel J. Acheson, Peter Hagoort and James M. McQueen

Radboud University Nijmegen, The Netherlands

One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested that listeners may use various sources of information, such as lexical knowledge or visual cues (e.g., lip-reading), to recalibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.

Participants were exposed to videos of a speaker pronouncing one out of two vowels (Dutch vowels /e/ and /ø/), paired with audio that was ambiguous between the two vowels. The most ambiguous vowel token was determined on an individual basis by a categorization task at the beginning of the experiment. In one group of participants, this auditory token was paired with a video of an /e/ articulation, in the other group with an /ø/ video. After exposure to these videos, it was found in an audio-only categorization task that participants had adapted their categorization behavior as a function of the video exposure. The group that was exposed to /e/ videos showed a reduction of /ø/ classifications, suggesting they had recalibrated their vowel categories based on the available visual information. These results show that listeners indeed use visual information to recalibrate vowel categories, which is in line with previous work on audiovisual recalibration in consonant categories, and lexically-guided recalibration in both vowels and consonants.
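The individual pre-test used to select the most ambiguous token can be sketched as a psychometric-function fit; the continuum steps and response proportions below are invented, and the logistic form is only one common choice for such fits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pre-test categorisation data: proportion of /ø/ responses
# for each step along an /e/-/ø/ audio continuum, for one participant.
steps = np.arange(1, 10)
p_oe = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

def psychometric(x, x0, k):
    """Logistic psychometric function: boundary x0, slope (sharpness) k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(psychometric, steps, p_oe, p0=[5.0, 1.0])

# The most ambiguous token (used in the exposure videos) is the step closest
# to the 50% point; k indexes boundary sharpness, the individual-difference
# measure discussed below.
most_ambiguous = steps[np.argmin(np.abs(steps - x0))]
print(f"boundary at step {x0:.2f}, slope {k:.2f}; most ambiguous token: step {most_ambiguous}")
```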

In addition, a secondary aim of the current study was to explore individual variability in audiovisual recalibration. Phoneme categories vary not only in terms of boundary location, but also in terms of boundary sharpness, or how strictly categories are distinguished. The present study explores whether this sharpness is associated with the amount of audiovisual recalibration. The results tentatively support the idea that a fuzzy boundary is associated with stronger recalibration, suggesting that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. If listeners with fuzzy boundaries assign more weight to visual cues, then, given that vowel categories have less sharp boundaries than consonant categories, there ought to be audiovisual recalibration for vowels as well. This is exactly what was found in the current study.

T4.2

The processing of Standard Dutch speech by Flemish students in the Netherlands

Cesko Voeten

Leiden University, The Netherlands

Standard Dutch and Flemish phonetics differ in terms of vowels (Van de Velde 1996; Adank, van Hout, & Smits 2004; Adank, van Hout, & Van de Velde 2007) and consonants (e.g. Sebregts 2015, focusing on rhotics). At the level of the individual, these differences have not received much attention in terms of their consequences for linguistic processing, whereas studies on accent processing in general suggest interesting effects. In perception, there are findings of attenuated N400 ERPs (Goslin, Duffy, & Floccia 2012) and longer RTs (Floccia, Goslin, Girard, & Konopczynski 2006). In production, adjustments are also observed (Pardo, Gibbons, Suppes, & Krauss 2012), but not due to imitation (Pardo 2012).

The present study investigates the processing of Standard Dutch versus Flemish realizations of the vowels (e,ø,o,ɛ,ɛi,œy,ɑu) and the rhotic (r). Perception and production data will be discussed from first-year Flemish students in the Netherlands, who are in the process of adapting to the Standard Dutch accent. The production task (simple word reading) shows differences already known from Adank, van Hout, & Smits (2004) and Sebregts (2015): less diphthongization for (e,ø,o,ɛi,œy,ɑu). A perceptual rhyme-decision task using stimuli ambiguous between Flemish and Dutch realizations, however, shows a more conservative placement of phonological category boundaries, suggesting adaptation in the form of overcompensation.

References

Adank, P., van Hout, R., & Smits, R. (2004). An acoustic description of the vowels of Northern and Southern Standard Dutch. The Journal of the Acoustical Society of America, 116(3), pp. 1729–1738.

Adank, P., van Hout, R., & Van de Velde, H. (2007). An acoustic description of the vowels of Northern and Southern Standard Dutch II: regional varieties. The Journal of the Acoustical Society of America, 121(2), pp. 1130–1141.

Floccia, C., Goslin, J., Girard, F., & Konopczynski, G. (2006). Does a regional accent perturb speech processing? Journal of Experimental Psychology: Human Perception and Performance, 32(5), 1276.

Goslin, J., Duffy, H., & Floccia, C. (2012). An ERP investigation of regional and foreign accent processing. Brain and language, 122(2), 92-102.

Pardo, J. S. (2012). Reflections on phonetic convergence: speech perception does not mirror speech production. Language and Linguistics Compass, 6(12), 753-767.

Pardo, J. S., Gibbons, R., Suppes, A., & Krauss, R. M. (2012). Phonetic convergence in college roommates. Journal of Phonetics, 40(1), 190-197.

Sebregts, K. (2015). The sociophonetics and phonology of Dutch r. Utrecht: LOT.

Van de Velde, H. (1996). Variatie en verandering in het gesproken Standaard-Nederlands (1935-1993). PhD dissertation, Radboud University Nijmegen.

T4.3

Spared learning of phonotactic regularities in aging: Evidence from speech errors

Merel Muylle 1, Eleonore H. M. Smalle 2 and Robert J. Hartsuiker 1

1 Ghent University, Belgium 2 Université Catholique de Louvain, Belgium

How does aging affect speech production? Older people often report experiencing more word-finding problems, and this may be related to failures in the retrieval of speech sounds. According to the Transmission Deficit Hypothesis (TDH; Burke et al., 1991; MacKay & James, 2004), the link between a word and its phonology becomes weaker as people get older. Whereas a word and its phonology are only linked by single connections, semantics are more widely interconnected, and this protects the meaning from impairment. In general, this transmission deficit is determined by three important factors: frequency, recency, and aging. The current study aims to investigate how age-related weaker connections between phonological nodes affect phonotactic learning processes. Learning about phonotactic regularities occurs implicitly through repeated exposure to phoneme combinations. This implies that learning can also be established in older people, because the connections involved are both recent and frequent. To test this hypothesis, we elicited speech slips in younger and older speakers using the Phonotactic Constraint Paradigm (Dell et al., 2000; Warker & Dell, 2006), which starts from the idea that speech errors can be seen as a measure of implicit learning; the more speech errors obey phonotactic constraints, the more evidence for learning. This paradigm involves non-words, which implies that top-down compensatory mechanisms cannot interfere with learning. Fifteen young students and fifteen healthy older adults repeated syllable sequences at a fast rate on four consecutive days. These sequences consisted of four consonant-vowel-consonant monosyllabic non-words. Crucially, some of the consonants were constrained so that they occurred only in onset or coda position depending on the medial vowel. The amount of learning each day was determined by the number of speech errors that followed the constraints relative to those that violated the constraints. Based on the predictions of the TDH, we expected that the older group would produce more speech errors (because of the weak links between phonological nodes), but would still be able to learn the phonotactic regularities. Indeed, the older group had an overall higher error rate than the younger group. In line with Warker and Dell’s (2006) findings, both age groups showed evidence of learning. These findings suggest that despite the large number of speech errors, elderly people are still able to detect new phonotactic regularities. In conclusion, the results of this study provide additional evidence for the TDH.
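A minimal sketch of the learning index described above, the proportion of speech errors that still obey the phonotactic constraint, computed per day and age group from invented error counts:

```python
import pandas as pd

# Hypothetical coded speech errors from the Phonotactic Constraint Paradigm:
# per group and day, how many consonant errors obeyed vs. violated the
# experiment-wide constraint.
errors = pd.DataFrame({
    "group":   ["young"] * 4 + ["older"] * 4,
    "day":     [1, 2, 3, 4] * 2,
    "legal":   [18, 24, 27, 30, 30, 41, 47, 52],   # errors obeying the constraint
    "illegal": [12, 9, 7, 5, 22, 17, 13, 10],      # errors violating it
})

# Learning index: the proportion of errors that obey the constraint; an
# increase over days indicates implicit phonotactic learning in both age
# groups, despite the higher overall error rate in older adults.
errors["prop_legal"] = errors["legal"] / (errors["legal"] + errors["illegal"])
print(errors.pivot(index="day", columns="group", values="prop_legal"))
```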

T4.4

Perception and production interactions in non-native speech category learning: Between neural and behavioural signatures

Jana Krutwig 1, Makiko Sadakata 1,2, Eliana Garcia-Cossio 1, Peter Desain 1 and James M. McQueen 1,3

1 Donders Institute, Radboud University, Nijmegen, The Netherlands 2 University of Amsterdam, The Netherlands 3 Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

Reaching a native-like level in a second language includes mastering phoneme contrasts that are not distinguished in one’s mother tongue – both in perception and production. This project explores how those two domains interact in the course of learning and how behavioural changes in both listening and speaking ability are related to traceable changes in the brain.

In order to disentangle the added value of production training in perceptual category learning, we systematically contrasted the combination of perceptual training with either related or unrelated production. Thirty-one native speakers of Dutch distributed between two groups participated in a 4-day high-variability training protocol on the British-English /æ/-/ε/ vowel contrast (multiple words spoken by multiple talkers). In the related production group (n=15) feedback on a perceptual categorisation task was combined with pronouncing the respective correct word on every trial, whereas it was combined with pronouncing a matched but phonologically unrelated set of words in the unrelated production group (n=16). Pre- and post-training measurements were taken of both production and perceptual abilities. All auditory stimulus words during the training were presented according to an adapted oddball paradigm, while the electrophysiological activity was recorded continuously. This, as well as a classical oddball test contrasting the trained English vowels and two Dutch ones, enabled us to track neural changes in auditory discrimination ability using the mismatch negativity response (MMN).

Behavioural results show that both participants' perceptual and production abilities improved significantly over the course of training, with no significant differences between the two groups in either perceptual or production learning. Despite this absence of a differential behavioural effect, preliminary analyses of the electrophysiological data collected after training indicate differences between the two training groups in MMN responses to the English vowel contrast. This implies that production training can enhance the neural perceptual response to non-native sounds. Further implications of these findings, as well as ongoing analyses tracing (neural) changes during training, will be discussed.


Talk Session 5

T5.1 Delayed picture naming in bilinguals

Wouter Broos

Ghent University, Belgium

Previous studies have shown that second language (L2) speakers are slower during speech production than first language (L1) speakers (Gollan, Montoya, Cera, & Sandoval, 2008; Lagrou, Hartsuiker, & Duyck, 2011). Some hypotheses claim that this is due to a delay in lexical retrieval (Gollan et al., 2008). However, more recent work suggests that the delay is situated at the post-phonological stage (Hanulová, Davidson, & Indefrey, 2011). The current study used the delayed picture naming paradigm to test whether articulation itself is slower in L2 than in L1. An additional question was whether the phonological complexity of the picture names would influence reaction times in either task. Dutch-English unbalanced bilinguals performed both a regular picture naming task and a delayed picture naming task in Dutch (L1) and English (L2). During the delayed picture naming task, participants were told to withhold pronunciation until a cue appeared on the screen. Lexical retrieval and speech planning are therefore finalized before pronunciation, meaning that only speed of articulation is measured. Speakers were slower when naming pictures in L2 during the regular picture naming task, but not in the delayed condition. Phonological complexity did not affect response latencies. Proficiency, however, did show an effect, in that more proficient speakers were faster in both the regular and the delayed picture naming task. We conclude that articulation in itself is not significantly slower when bilinguals name pictures in their L2.

References

Gollan, T. H., Montoya, R. I., Cera, C., & Sandoval, T. C. (2008). More use almost always means a smaller frequency effect: Aging, bilingualism, and the weaker links hypothesis. Journal of Memory and Language, 58(3), 787-814.

Hanulová, J., Davidson, D. J., & Indefrey, P. (2011). Where does the delay in L2 picture naming come from? Psycholinguistic and neurocognitive evidence on second language word production. Language and Cognitive Processes, 26(7), 902-934.

Lagrou, E., Hartsuiker, R. J., & Duyck, W. (2011). Knowledge of a second language influences auditory word recognition in the native language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(4), 952.

T5.2

Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel

Amie Fairs, Sara Bogels and Antje S. Meyer

Max Planck Institute for Psycholinguistics, The Netherlands

In conversation, people engage in the dual-task of comprehending and planning speech. However, dual-tasking research has rarely used paradigms with two linguistic tasks. Here, we tested whether there is greater interference in performance with two linguistic tasks, compared to a linguistic/non-linguistic dual-task.

Additionally, we investigated different task combinations to determine whether the amount of parallel processing depends on whether participants give an overt response in one of the tasks.

Three experiments with a similar design were carried out. Task one was syllable (linguistic) or tone (non-linguistic) identification, and task two was picture naming with written distractors, with SOAs of 0ms and 1000ms. In experiment 1, participants responded to both tasks (button press and speaking, respectively). In experiments 2 and 3, depending on the syllable/tone, participants named the picture or read the distractor aloud. We were interested in semantic interference in naming. At 0ms, if the two tasks are processed in parallel, we expect no semantic interference because it is absorbed into the slack period of task one processing. Conversely, the presence of semantic interference indicates serial processing.
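
As a toy illustration of this locus-of-slack logic (not the authors' model), the sketch below (Python) predicts naming latencies under serial versus parallel processing with made-up stage durations: a hypothetical 50 ms interference cost disappears at the 0 ms SOA only when processing is parallel.

    # All durations are invented for illustration only.
    TASK1_DURATION = 800   # ms to finish the syllable/tone identification task
    BASE_PLANNING = 400    # ms to plan the picture name with an unrelated distractor
    INTERFERENCE = 50      # extra planning time with a semantically related distractor

    def naming_rt(soa, related, mode):
        planning = BASE_PLANNING + (INTERFERENCE if related else 0)
        if mode == "parallel":
            # Planning runs alongside task one; articulation waits for whichever ends last,
            # so extra planning time can be absorbed into the slack before task one finishes.
            return max(planning, TASK1_DURATION - soa)
        # Serial: planning only starts once task one has finished.
        return max(0, TASK1_DURATION - soa) + planning

    for mode in ("parallel", "serial"):
        for soa in (0, 1000):
            effect = naming_rt(soa, True, mode) - naming_rt(soa, False, mode)
            print(f"{mode:8s} SOA {soa:4d} ms: predicted semantic interference = {effect} ms")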

In experiment 1, we found semantic interference in all conditions. Thus, when task one is responded to, identification and naming are carried out serially.

In experiment 2, we found semantic interference at both SOAs in the linguistic condition, indicating serial processing, but at neither SOA in the non-linguistic condition, suggesting some parallel processing. However, because we also found no interference at 1000ms in the non-linguistic condition (which would suggest parallel processing, something that is not possible at this SOA), it is difficult to draw conclusions about the null effect at 0ms (this also fails to replicate [1]).

In experiment 3 SOA was blocked (in line with [1]). We found semantic interference in all conditions. Thus, with a more predictable paradigm, participants engage in serial processing.

Our results show that naming with distractors is not processed in parallel with an identification or decision task, regardless of whether that task is linguistic. Experiment 2 demonstrates that with a subtly different paradigm, participants process tasks in a more parallel manner in the non-linguistic condition, suggesting that the processing system flexibly adjusts the seriality of processing. However, there are limitations to this flexibility, such that two linguistic tasks cannot be carried out in parallel.

[1] Piai, V., Roelofs, A., & Schriefers, H. (2015). Task choice and semantic interference in picture naming. Acta psychologica, 157, 13-22.

T5.3

Learning context impacts lexical interference in third language learning

Brendan Tomoschuk 1,2, Wouter Duyck 2, Victor S. Ferreira 1, Robert Hartsuiker 2 and Tamar H. Gollan 1

1 UC San Diego, USA 2 Ghent University, Belgium

Is learning a third language different from learning a second? Does the lexical system organize information from three languages separately, or does it organize items based on factors such as age-of-acquisition or language typology? Studies in applied linguistics suggest that second- and third-language information interfere with each other more than either one does with the native language (Williams & Hammarberg, 1998). Here, we investigate the cognitive mechanism underlying this interference (Experiment 1) and test a potential explanation for such interference (Experiment 2).

In Experiment 1, Dutch-English-French trilinguals performed a phoneme monitoring task (adapted from Colomé, 2001) in which pictures appeared on the screen followed quickly by a letter. Subjects decided whether or not the sound from the letter was present in the word. Critically, rejection trials either came from one of their other languages, or had no relation with the translations of the word. For example, in the English block, a no trial for
the item girl showed m (from the Dutch translation of girl, meisje), f (from the French translation, fille) or p (a phoneme not present in any translation). Phonemes from a trilingual’s other languages resulted in higher error rates, and while performing the task in French, English interfered significantly more than Dutch. One explanation for this is that English and French interfere more because they are both nonnative, lower proficiency languages. Another explanation is that because bilinguals learn French in a Dutch (rather than English) context, they learn to regulate the connection between Dutch and French, but not between English and French.

To test whether language of instruction impacts interference, in Experiment 2, subjects learned words from an artificial language. Critically, half of the subjects learned words via Dutch retrieval practice, while the other half learned via English retrieval practice. Both groups then performed the phoneme monitoring task. Instruction language interacted with language of interfering phoneme such that the interference difference between Dutch and English in the third language differs based on language of instruction. These results suggest that language of instruction impacts language interference in a low proficiency third language, and that lexical interference is contingent on learning context.

References

Colomé, À. (2001). Lexical activation in bilinguals' speech production: Language-specific or language-independent? Journal of Memory and Language, 45(4), 721-736.

Williams, S., & Hammarberg, B. (1998). Language switches in L3 production: Implications for a polyglot speaking model. Applied Linguistics, 19(3), 295-333.

T5.4

Revealing the scope of planning sentence production in L1 and L2, and influential factors

Toru Hitomi and Robert J. Hartsuiker

Ghent University, Belgium

The cognitive mechanisms behind disadvantages in second language (L2) speech production have been addressed by several accounts, but are still under debate. We focus on how far ahead speakers plan their sentences before initiating articulation, known as the planning scope. The planning scope is operationalized as the number of lexical representations activated before articulation and has been claimed to vary depending on the cognitive load of the task at hand. In the present study, assuming a higher cognitive demand for L2 production, we examined whether the planning scope is smaller in L2 than in L1 in two experiments adopting the cross-modal semantic interference paradigm. Furthermore, we attempted to identify factors modulating the interference effect by running mixed-effects multiple regression models, because such factors may indicate the possible locus of L2 disadvantages. Proficient yet unbalanced Dutch-English bilinguals were required to produce sentences in a simple fixed syntactic structure using the names of two visually presented line drawings (e.g., "The lion is next to the umbrella." / "De leeuw is naast de paraplu." in Experiment 1; "A lion is next to an umbrella." / "Een leeuw is naast een paraplu." in Experiment 2). Naming language was a within-subject blocked factor. Semantic interference was introduced by presenting an auditory distractor word that was semantically related to the first noun, the second noun, or neither of them, at an SOA of -100 ms. In both experiments, the second noun did not seem to be activated (i.e., there was no interference for the second noun) and there was no significant interaction between naming language and semantic relatedness. Our hypothesis that the planning scope in L2 is smaller than in L1 was therefore not supported.

Separate analyses for each language investigating the effects of lexical factors on RT, however, showed that the lexical frequency of the second noun consistently interacted with the semantic relatedness condition. Higher frequency led to faster RTs only when the distractor was semantically related to the second noun, but to slower RTs in the other two conditions. This modulatory impact of frequency may indicate that the second noun was covered by the planning scope, but that its activation was not strong enough to trigger interference. These results suggest that there is no disadvantage in L2 sentence production with regard to the size of the planning scope, and that frequency is a key factor modulating the planning scope regardless of language.

Talk Session 6

T6.1 The acquisition of morphophonology: Effects of phonology and type frequency

Tiffany Boersma 1, Judith Rispens 1, Anne Baker 1,2 and Fred Weerman 1

1 Universiteit van Amsterdam, The Netherlands 2 University of Stellenbosch, South Africa

Previous studies have shown differences in production accuracies of allomorphic forms. The /Id/ allomorph of the English past tense appears to be more difficult for children to accurately use than the other two, /d/ and /t/, allomorphs (Marchman, 1997; Marchman, Saccuman, & Wulfeck, 2004; Matthews & Theakston, 2006; Oetting & Horohov, 1997). Also, the syllabic allomorph of the third person singular –s in English is shown to be more challenging to produce correctly than the segmental one (Song, Sundara, & Demuth, 2009; Tomas, Demuth, Smith-Lock, & Petocz, 2015). In Dutch, differences in production accuracies have been found for the two past tense allomorphs and the allomorphs involved in plural formation (de Bree & Kerkhoff, 2010; Rispens & de Bree, 2013; Zamuner, Kerkhoff, & Fikkert, 2012). In all these studies, phonological complexity, phonotactic frequency and/or type frequency had an effect. These studies thus provide support for the role of phonology and frequency in learning morphophonological processes. However most of these studies focus on production accuracies and very few studies have also looked at perception. The objective of this study was to replicate and expand on the previous findings by testing both the Dutch past tense and the diminutive in not only a production but also a grammaticality judgement task. The Dutch past tense has two allomorphs based on stem final voicing properties with the /də/ allomorph having a higher type frequency than /tə/. The Dutch diminutive has five allomorphs with very different type frequencies (/jə/ > /tjə/ > /ətjə/ > /pjə/ > /kjə/) and phonological complexities. Both should inform on the role of frequency and phonological complexity and possibly confirm previous findings.

Typically developing children (N = 115, 5;1 – 10;3) were tested on a production and a grammaticality judgement task with both real and nonce verbs and nouns. Linear mixed effects modelling was used to analyse the data, taking age and nonverbal IQ into account. The results indicated that the voiced past tense allomorph /də/ was easier to produce and judge than the voiceless allomorph, which points towards an effect of type frequency. In the case of the diminutive, the phonologically most complex allomorph /ətjə/ and the allomorph with the lowest type frequency /kjə/ were hardest to produce and judge correctly. Taken together, these results replicate and extend important insights into the factors affecting the acquisition of morphophonological processes and suggest that multiple factors need to be considered.


T6.2 Capturing systematic language differences through distributional semantics: How to explain cross-linguistic dissociations in priming effects for complex words

Fritz Günther 1, Eva Smolka 2 and Marco Marelli 3

1 University of Tübingen, Germany 2 University of Konstanz, Germany 3 University of Milano-Bicocca, Italy

In studies using English item material, it has been shown that base words can be primed by semantically transparent derivations (word pairs such as DISTRUST - TRUST), but not by semantically opaque derivations (word pairs such as SUCCESSOR - SUCCESS) in an overt priming paradigm with long SOAs. However, recent studies employing German item material have found such purely morphological effects without a semantic relation between prime and target in the same paradigm. Hence, contrary to the English case, priming effects are observed for both transparent derivations (such as AUFSTEHEN – STEHEN (stand up – stand)) and opaque derivations (such as VERSTEHEN – STEHEN (understand – stand)). It has been assumed that these results are caused by structural differences between the two languages, with German being more systematic with regard to morphology. In our study, we tested this hypothesis using distributional semantics models, in which word meanings are represented as high-dimensional numerical vectors derived from the words' distributional patterns in large corpora of natural language. To this end, we employed a compositional model for affixation in distributional semantics, the FRACSS model, to obtain compositional vector representations for transparent and opaque complex words, in both English and German. Additionally, we collected actual whole-word distributional vectors for the same complex words from large corpora of natural text. We then computed the similarities between the compositional vectors and the vectors for their base words, as well as the similarities between the whole-word vectors and the vectors for their base words. Here, the similarity between the compositional vectors and the base word vectors serves as a baseline indicating the degree of similarity that can be expected from the morphological systems themselves. In both English and German, the compositional representations for opaque and transparent complex words show the same pattern with regard to their similarity to their base words. For the whole-word representations, however, another pattern emerges: relative to transparent words, German opaque words are more similar to their base words than English opaque words are. These computational simulations speak in favor of a higher systematicity of German morphology vis-à-vis English morphology, and offer a data-driven explanation for the dissociation observed in previous behavioral studies.
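
The core comparison can be illustrated in a few lines of Python; the five-dimensional vectors below are invented stand-ins for real FRACSS-composed and corpus-derived vectors, so only the structure of the computation (cosine similarity between a derived form and its base word) should be taken from this sketch.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical vectors for one opaque derivation per language:
    # base word, compositionally derived (FRACSS-style) vector, and observed whole-word vector.
    items = {
        "German: verstehen -> stehen": {
            "base": np.array([0.9, 0.1, 0.3, 0.0, 0.2]),
            "compositional": np.array([0.8, 0.2, 0.4, 0.1, 0.2]),
            "whole_word": np.array([0.6, 0.3, 0.4, 0.3, 0.2]),
        },
        "English: successor -> success": {
            "base": np.array([0.9, 0.1, 0.3, 0.0, 0.2]),
            "compositional": np.array([0.8, 0.2, 0.4, 0.1, 0.2]),
            "whole_word": np.array([0.1, 0.7, 0.2, 0.6, 0.1]),
        },
    }

    for name, vecs in items.items():
        comp = cosine(vecs["compositional"], vecs["base"])   # what the morphology alone predicts
        whole = cosine(vecs["whole_word"], vecs["base"])     # what actual usage shows
        print(f"{name}: compositional = {comp:.2f}, whole-word = {whole:.2f}")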

T6.3

Evaluation of reading and spelling processes in Arabic literacy acquisition

Carole El Akiki, Alain Content and Philippe Mousty

Université Libre de Bruxelles, Belgium

Although Arabic is the fourth most spoken language in the world, only a few studies target literacy acquisition in Arabic, and even fewer focus on spelling.

Arabic orthography involves interesting linguistic and orthographic features which can influence the acquisition of literacy and spelling skills. For instance, priming studies suggest an important role of morphology in the organization of the adult mental lexicon in both spoken and written Arabic (Boudelaa, 2014).

The present study attempts to characterize the processes involved in reading and spelling in Arabic. Particularly, we explored the impact of the word roots and patterns on reading and spelling among native Lebanese children from Grades 1 to 4.

We designed a battery of tests evaluating letter, word and pseudoword reading and spelling, word reading comprehension, phonemic awareness, phonological and morphological derivation skills, rapid automatized naming, auditory discrimination abilities and auditory memory span. One hundred and forty Lebanese bilingual children (Arabic/French) in Grades 3 and 4 were tested this year, among whom 120 were also tested last year. The results show that, besides phonology, morphology plays an important role in word reading, already at an early level of literacy. The frequency and familiarity of both roots and patterns influence reading, with the effect being more pronounced for patterns, especially in pseudoword reading. In addition, we noted a significant difference between derivational and inflectional morphology skills.

References

Boudelaa, S. (2014). Is the Arabic mental lexicon morpheme-based or stem-based? Implications for spoken and written word recognition. In E. Saiegh-Haddad & R. M. Joshi (Eds.), Handbook of Arabic literacy: Insights and perspectives (pp. 31-54).

T6.4

Storage and decomposition in the processing of Dutch nouns: An electrophysiological approach using the mismatch negativity response

Hernán Labbé Grünberg

University of Amsterdam, The Netherlands

The notion of morphological complexity is central to the theoretical study of language. While linguistic productivity based on the combination of morphemes is obviously present in agglutinative languages like Finnish, evidence for morphology-based productivity has been harder to find in languages like English or Dutch. Psycholinguistic models have thus far proposed different combinations of storage and decomposition, but while some form of lexical storage is uncontroversial (Baayen et al. 2011), evidence for morphology-based decomposition of lexical items has been harder to obtain. Psycholinguistic research has used a myriad of psychometric tests (most notably, masked priming (Silva & Clahsen, 2008)) to look for evidence that speakers are able to manipulate linguistic elements smaller than words when processing language. However, during language processing, morphology cannot be separated from its casing of sound and meaning, making it difficult to attest its ontological status independently of phonology and semantics.

The mismatch negativity (MMN) event-related potential (Näätänen et al. 1988) offers an opportunity to probe the processing of language as it unfolds over time. The MMN response is known to be sensitive to the morphological complexity of words and can be used to probe the neural activation of lexical memory traces (Pulvermüller & Shtyrov, 2006). In our experiment, we used this electrophysiological response to assess the decomposability of Dutch plural nouns.

Using linear mixed effects models to compare singular and plural nouns to acoustically matched non-existing words, we have obtained different patterns of cortical responses:

While monomorphemic words show a lexical MMN, meaning they produce bigger responses on account of them having stronger memory traces than pseudowords (t=3.2), polymorphemic words have produced a syntactic MMN, meaning they produced smaller responses than their controls (t=1.19), most likely due to the retrieval of the stem priming the activation of the memory trace for the inflectional suffix. Our results, therefore, suggest that the memory traces of morphologically complex nouns and verbs are activated via their constituent morphemes, and not as whole-word lexical forms. Moreover, the early time range of this response (between 100 and 200 milliseconds) suggests that morphological processing occurs before semantic activation, and in parallel to the retrieval of lexical memory traces. These results invite us to think about the place of morphological processing in language comprehension and its relation to lexical representations, their semantics and their phonological realizations.
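
As an indication of how such a comparison can be set up, the sketch below (Python/statsmodels) fits a linear mixed-effects model of single-trial MMN-window amplitudes; the file name, column names and the exact fixed-effect structure are assumptions for illustration and are not the analysis actually reported above.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per trial, with the mean amplitude in the
    # 100-200 ms window, the stimulus type (word vs. pseudoword), the morphological
    # status (monomorphemic vs. polymorphemic) and the participant identifier.
    df = pd.read_csv("mmn_amplitudes.csv")

    model = smf.mixedlm(
        "amplitude ~ C(stimulus_type) * C(morphology)",  # fixed effects
        data=df,
        groups=df["subject"],                            # random intercept per participant
    )
    print(model.fit().summary())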

ABSTRACTS POSTERS

Poster Session 1

P1.1 Relationship between extensions and intensions in categorization: A match made in heaven?

Farah Mutiasari Djalal 1, James A. Hampton 2, Gert Storms 1 and Tom Heyman 1

1 University of Leuven, Belgium 2 City, University of London, UK

The present study investigated the relationship between category extension and intension for eleven different semantic categories. It is often tacitly assumed that there is a (strong) extension-intension link. However, a recent study by Hampton and Passanisi (2016) called this hypothesis into question. To conceptually replicate their findings, two studies were conducted. We employed a category judgment task to measure category extensions, whereas a property generation (in Study 1) and property judgment task (Study 2) were used to measure intensions. Using their method, that is, correlating extension and intension similarity matrices, we found non-significant correlations in both studies, supporting their conclusion that similarity between individuals for extensional judgments does not map onto similarity between individuals for intensional judgments. However, multi-level logistic regression analyses showed that the properties a person generated (Study 1) or endorsed (Study 2) better predicted her own category judgments compared to other people’s category judgments. This result provides evidence in favor of a link between extension and intension at the subject level. The conflicting findings, resulting from two different approaches, and their theoretical repercussions are discussed.

Keywords: category extension, category intension, categorization.
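
A minimal version of the matrix-correlation approach can be sketched as follows (Python); the simulated binary judgments, matrix sizes and the simple agreement measure are illustrative assumptions and do not reproduce the materials or statistics of the study.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_subj, n_items, n_props = 20, 30, 40
    extension = rng.integers(0, 2, size=(n_subj, n_items))   # 1 = "belongs to the category"
    intension = rng.integers(0, 2, size=(n_subj, n_props))   # 1 = "property endorsed"

    def agreement_matrix(judgments):
        """Proportion of matching binary judgments for every pair of participants."""
        n = judgments.shape[0]
        sim = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                sim[i, j] = np.mean(judgments[i] == judgments[j])
        return sim

    ext_sim = agreement_matrix(extension)
    int_sim = agreement_matrix(intension)

    mask = ~np.eye(n_subj, dtype=bool)                 # drop the trivial diagonal
    r, p = pearsonr(ext_sim[mask], int_sim[mask])
    print(f"extension-intension matrix correlation: r = {r:.2f}, p = {p:.3f}")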


P1.2 The (un)reliability of semantic priming

Tom Heyman 1, Keith Hutchison 2 and Gert Storms 1

1 University of Leuven, Belgium 2 Montana State University, USA

Many researchers have tried to predict semantic priming effects using a myriad of variables (e.g., prime-target associative strength, feature overlap, co-occurrence frequency, ...). The idea is that relatedness varies across prime-target pairs: cat is, for instance, more strongly connected to dog than to animal. This should, in turn, be reflected in the priming effect, such that some word pairs show a larger priming effect than others. However, it only makes sense to predict these item-level priming effects if they can be measured reliably. In other words, researchers try to predict why cat primes dog more than it primes animal without first establishing that cat indeed primes dog consistently more. If these item-level priming effects are not reliable across subjects, then there is in fact nothing to predict. The goal of the present study was precisely to investigate the psychometric properties of semantic priming. More specifically, we estimated the split-half and test-retest reliability of item-level priming effects under conditions that should discourage the use of strategies (i.e., a short 200 ms SOA and a low .25 relatedness proportion). The resulting, presumably automatic, priming effects proved to be extremely unreliable. A re-analysis of several published priming datasets from different labs revealed similar cases of low reliability. These results imply that previous attempts to predict semantic priming were unlikely to be successful. However, semantic priming is not by definition unreliable. Several factors play a role, but the number of participants over which the average priming effect is calculated seems particularly important in this respect. One study with an unusually high sample size (for a priming experiment) yielded much more favorable reliability estimates, suggesting that "big data", in terms of items and participants, should be the future of semantic priming research.
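
The split-half logic can be made explicit with a small simulation (Python); every number below (effect sizes, noise level, sample sizes) is made up, and only the procedure, correlating item-level priming effects across two random halves of the participants and applying the Spearman-Brown correction, is the point.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n_subj, n_items = 60, 100
    true_item_effect = rng.normal(20, 5, n_items)          # "real" priming effect per item (ms)
    noise = rng.normal(0, 150, (n_subj, n_items))          # participant/trial noise
    effects = true_item_effect + noise                     # observed item-level effects per subject

    order = rng.permutation(n_subj)
    half_a, half_b = order[: n_subj // 2], order[n_subj // 2 :]
    item_a = effects[half_a].mean(axis=0)                  # item effects estimated from half A
    item_b = effects[half_b].mean(axis=0)                  # item effects estimated from half B

    r, _ = pearsonr(item_a, item_b)
    spearman_brown = 2 * r / (1 + r)
    print(f"split-half r = {r:.2f}, Spearman-Brown corrected = {spearman_brown:.2f}")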

P1.3

Neutral context presentation in grammatical priming tasks: Processing of inflected forms in Serbian

Tamara Popović, Marko Perić and Aleksandar Kostić

University of Philosophy in Belgrade, Serbia

In language priming tasks, the effect of various context words (primes) on the processing of a target word is determined relative to the effect of a neutral context condition. Traditionally, this neutral context is presented as a set of crosses or stars (***), which is assumed to have no priming effect on the target word. Recent studies on grammatical priming in Serbian have shown that reaction times are slowest in the neutral context condition (Popović, Perić & Kostić, 2016). This makes the analysis of the relative effects of inhibition and facilitation in grammatical priming unreliable. In Serbian, the grammatical congruency of two words is clearly defined by inflectional suffixes, making the effects of inhibition and facilitation in grammatically incongruent and congruent situations calculable when a theoretically neutral grammatical context is included in the analysis. In this study, three experiments were conducted to test different experimental procedures for presenting a neutral context in a language priming task. The first experiment used the standard priming procedure, in which the screen with the prime was followed by a screen with the target word. A set of three stars (***) was used as the neutral context. In the second experiment, we presented the prime and the target sequentially on one screen, next to each other, in order to simulate a situation similar to text reading.

The neutral context was an empty placeholder, presented to the left of the target word, which disappeared before the target word was shown. The third experiment used a similar technique to the second, but instead of an empty placeholder, a placeholder containing a set of three stars (***) was used. Reaction times in the first experiment were longest for the neutral context and shortest for the congruent context, which is consistent with previous findings. In the second experiment, reaction times for the neutral and incongruent conditions were equal, while in the third experiment reaction times were longest in the incongruent condition and shortest in the congruent condition, leaving the neutral condition in the middle. These results suggest that, for grammatical priming studies, the experimental technique used in the third experiment is more adequate than the traditional one.

P1.4

The production of metonymic expressions: Evidence from priming in Japanese

Mikihiro Tanaka

Konan Women's University, Japan

Figurative language such as metonymy (e.g., I bought Dickens instead of I bought Dickens' book) is one of the unique expressions of human language. Although a considerable number of psycholinguistic studies have investigated how we 'comprehend' such metonymic expressions (e.g., Frisson & Pickering, 1999), little is known about how we 'construct' them.

The present study reports an experiment that investigated whether speakers can be primed to produce metonymic expressions in Japanese sentence production. In a recall-based sentence production task (Ferreira, 2003), speakers encoded two sentences (the Prime and a filler) into memory and were then asked to produce the Prime. Next, speakers were presented with the Target and a filler sentence, and were prompted to produce the Target. I examined the

form of participants' subsequent Target descriptions. There were three prime conditions: metonymic, non-metonymic or literal expressions (1). At recall, Primes and Targets were cued with a subject NP only, allowing a metonymic or non-metonymic completion (2).

Results showed that participants were more likely to produce metonymic Target completions after recalling metonymic Primes (100%) than after recalling non-metonymic (75%) or literal Primes (85%; ps < .005). In addition, the difference between the proportions following non-metonymic and literal Primes was not statistically significant (p = .022). These results suggest a priming effect for metonymy.

In sum, this finding is not compatible with the 'indirect' model of figurative language, in which non-metonymic expressions are accessed prior to metonymic expressions (e.g., Grice, 1975). Instead, it supports the 'direct' model, in which metonymic and non-metonymic expressions can be accessed directly (e.g., Frisson & Pickering, 1999).

(1) Prime: Shousetsuka-ga Toshokan-de, [Dickens-o yonda / Dickens no hon-o yon-da / Dickens-ni atta].

The novelist, in the library, [read Dickens / read Dickens's book / met Dickens]. (metonymic / non-metonymic / literal)

(2) Target: Kashu-ga Concert-de, [Madonna-o utta / Madonna no CD-o utta].

The singer in the concert … [sold Madonna /sold Madonna’s CD]. (metonymic /non-metonymic)


References

Ferreira, V. S. (2003). The persistence of optional complementizer production: Why saying “that” is not saying “that” at all. Journal of Memory and Language, 48, 379–398.

Frisson, S., & Pickering, M. J. (1999). The processing of metonymy: Evidence from eye movements. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1366 - 1383.

Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics (Vol. 3, pp. 41–58). New York, NY: Academic Press.

P1.5

How are letters with diacritics represented? A study in French

Emeline Boursain and Fabienne Chetail

Université Libre de Bruxelles, Belgium

According to current theories of letter recognition, the identification of letters operates through abstract representations, independent of the font or size of the stimulus. However, most theoretical models rely heavily on studies carried out in English and thus limit the description of the mechanisms of letter recognition to the 26 letters of the Latin alphabet. Yet a large number of scripts using Latin letters include letters with diacritic marks, as in French for example (à, â, é, è, ç, ...). It is therefore necessary to examine how letters with diacritics are processed. Some studies, especially in Turkish and Spanish, support the idea that letters with diacritics activate the same abstract representation as letters without diacritics (i.e., a = à). Given that there is no established consensus on how letters with diacritics are perceived in French, we investigated whether they activate the same abstract identity as the analogue letter (i.e., a = à) or whether they activate two different representations (i.e., a ≠ à, just as a ≠ z). In Experiment 1, the participants (n = 57) performed an alphabetical decision task: they had to indicate whether the target letter was a letter of the French alphabet or not. Three primes were devised for each target: a repeated prime (e.g., a-A), a diacritic prime (e.g., à-A), and a control prime (e.g., z-A). In Experiment 2, the participants (n = 48) performed a lexical decision task: they had to indicate whether the target word – primed in the same three conditions (i.e., repeated: melon - MELON, diacritic: mélon - MELON, or control: mulon - MELON) – was a French word or not. In both tasks, the data showed shorter reaction times in the repeated prime condition than in the diacritic and control prime conditions, but no significant difference between the diacritic and control prime conditions. These data are in favor of distinct abstract representations for letters with diacritics and their analogue letters.

P1.6

Dual-tasking in language: Concurrent production and comprehension interfere at the phonological level

Amie Fairs, Sara Bogels and Antje S. Meyer

Max Planck Institute for Psycholinguistics, The Netherlands

Conversation often involves simultaneous production and comprehension, yet little research has investigated whether these two processes interfere with one another. We tested participants' ability to dual-task with production and comprehension tasks, and compared their performance on a dual-task with two linguistic tasks to a dual-task with one linguistic and one non-linguistic task.

Task one (production task) was picture naming. Task two ('comprehension' task) was either syllable identification (linguistic condition) or tone identification (non-linguistic condition). The two identification tasks were matched for difficulty. Three SOAs (50 ms, 300 ms, and 1800 ms) resulted in different amounts of overlap between the two tasks. We hypothesized that, as production and comprehension use similar resources [1], there would be greater interference with concurrent linguistic than non-linguistic tasks.

At the 50ms SOA, picture naming latencies (task one) were slower in the linguistic compared to the non-linguistic condition, suggesting that the resources required for production and comprehension overlap more in the linguistic condition. At the later SOAs, latencies did not differ between linguistic and non-linguistic conditions. As the syllables were non-words without lexical representations, this interference likely occurs primarily at the phonological level. Latencies in both the linguistic and non-linguistic conditions were longer at the 50ms and 300ms SOAs than the 1800ms SOA, showing that participants engaged in parallel processing of the two tasks.

Identification RTs (task two) were longer in the linguistic condition across all SOAs, indicating that linguistic interference percolates through from the production to the comprehension task, regardless of SOA.

In a follow-up study, the task two stimuli were replaced with sine-wave speech versions of the syllables. Participants were told either that the sounds were computer-generated or were distorted syllables. We thus kept the auditory input constant and manipulated whether the sounds were heard as language or non-language. The results were inconclusive. Naming latencies (task one) and identification RTs (task two) did not differ between conditions at any SOA. However, latencies and RTs in this experiment were significantly longer than in experiment 1, potentially indicating that task difficulty masked any linguistic interference effects.

In sum, these results demonstrate that concurrent access to the phonological level in language production and comprehension results in measurable interference in both speaking and comprehending.

[1] Menenti, L., Gierhan, S. M., Segaert, K., & Hagoort, P. (2011). Shared language overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22(9), 1173-1182.

P1.7

The similarities between the target and the intruder in naturally occurring person naming errors: A comparison between repeated and occasional naming confusions.

Dupont Manuel

Université de Liège, Belgium

This study investigated the phenomenon of personal name confusions, i.e. calling a familiar person by someone else's name. Two types of name confusions were considered: occasional confusions (i.e. confusions which occurred only once) and repeated confusions (i.e. confusions which occurred repeatedly). The main purpose of the present study was to compare these two types of personal name confusions and to specify the similarities and differences between them. Participants were asked to fill in two questionnaires (one for each type of confusion) in order to collect information about the properties of the confusions.

Results indicated that occasional and repeated confusions showed similarities on some dimensions (the correspondence in gender and age of the bearers of the confused names, the phonological similarity between the confused names, the valence of the relationship between the participant and the bearers of the names, the frequency of encounters, and the state of stress when the confusions were committed) but differences on others (the context of encounters, how long the two bearers had been known, the presence of the inverse confusion, the facial resemblance, the kind of relationship shared by the participant and the bearers of the names, and the state of tiredness when the confusions were committed). In addition, regression analyses indicated that the age difference between the bearers of the names, their facial resemblance and the phonological similarity of the names were significant predictors of the frequency of the repeated confusions.

P1.8

Professional translators as bilinguals: The use of cognates and non-cognates

Lore Vandevoorde

Ghent University, Belgium

The field of Cognitive Translation Studies (CTS) shares a number of research interests with bilingualism research (Shreve, 2012). Cognates and non-cognates are a case in point. In Translation Studies, Shlesinger & Malkiel (2005) and Malkiel (2009) concluded that translators tend to choose a non-cognate translation over a cognate translation when both are (presumably) equally translationally equivalent (Kussmaul 1995, Kussmaul & Tirkkonen-Condit 1995). Evidence from psycholinguistic studies, however – based on the overwhelming evidence for the cognate facilitation effect (Schepens 2012: 157-158) – suggests that translators, when confronted with an L1 non-cognate form and an L1 cognate form that are both possible translations of the L2 word, would be more likely to choose the cognate form over the non-cognate equivalent (if both forms are equally frequent).

These diverging hypotheses raise a number of questions. Firstly, with respect to the generalizability of psycholinguistic results to all types of bilinguals, they cast some doubt on whether the (psycholinguistic) results for cognates also apply to professional translators. Professional translators have had specific training and have built up a certain amount of translation expertise, of which metacognition is a key element (Alves 2015). Natural bilinguals and novice/student translators (typically the types of bilinguals who participate in psycholinguistic experiments) have usually not had such training. Secondly, this divergence also calls into question the limited evidence from experiments with cognates in Translation Studies. To date, it is not clear whether translated texts (as a genre) contain more or fewer cognates than non-translated texts, than texts produced by second language learners, or than texts produced by novice translators. The present study therefore tests the hypothesis that translators tend to use fewer cognates than native language users do, and examines how their usage of cognates compares to that of student translators. To this end, we calculate the cognate ratios of different text corpora (we use MUST, a corpus containing translations produced by learners of French with Dutch as L1, and two sub-corpora of the Dutch Parallel Corpus, one containing texts translated into Dutch by professional translators and the other containing texts originally written in Dutch). We use a random sample of 30% of the documents in the Dutch Parallel Corpus as a development corpus to establish a cognate list for the language pair Dutch-French (Dutch-English will follow). We calculate the normalized Levenshtein distance between all Dutch and French content words and retain only those cognate pairs with a normalized Levenshtein distance of 0.5 or higher (Schepens et al., 2013). For the moment, we only take into account orthographic overlap. Once we have established this 'gold standard' list, we will be able to calculate, for any Dutch text, the cognate ratio with respect to French, and to compare the cognate ratios of various text types (translations, original texts, texts produced by L2 learners).

We take the Dutch-French cognate ratio calculated on the Dutch Reference Corpus Sonar as a baseline.
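
A sketch of the cognate-list step is given below (Python). Because a normalized Levenshtein distance of 0.5 or higher would in fact select dissimilar pairs, the threshold is read here as a similarity criterion (1 minus the distance divided by the longer word's length, at least 0.5), which is presumably what is intended; the word pairs and the toy text are illustrative only.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            cur = [i]
            for j, cb in enumerate(b, start=1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def norm_lev_similarity(a: str, b: str) -> float:
        return 1.0 - levenshtein(a, b) / max(len(a), len(b))

    # Toy Dutch-French translation pairs; a real list would be built from the development corpus.
    pairs = [("appel", "pomme"), ("balkon", "balcon"), ("nationaal", "national"), ("tafel", "table")]
    cognates = {nl for nl, fr in pairs if norm_lev_similarity(nl, fr) >= 0.5}

    # Cognate ratio of a (toy) Dutch text: proportion of content-word tokens on the cognate list.
    tokens = ["balkon", "appel", "nationaal", "tafel"]
    ratio = sum(token in cognates for token in tokens) / len(tokens)
    print(f"cognates: {sorted(cognates)}; cognate ratio = {ratio:.2f}")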

References

Alves, Fabio. "Translation Process Research at the Interface. Paradigmatic, Theoretical, and Methodological Issues in Dialogue with Cognitive Science, Expertise Studies, and Psycholinguistics." In Psycholinguistics and Cognitive Inquiries into Translation and Interpreting, edited by Aline Ferreira and John W. Schwieter, 17–40. Amsterdam & Philadelphia: John Benjamins, 2015.

Kussmaul, Paul. Training the Translator. Amsterdam & Philadelphia: John Benjamins, 1995.

Kussmaul, Paul, and Sonja Tirkkonen-Condit. “Think-Aloud Protocol Analysis in Translation Studies.” TTR 8, no. 1 (1995): 177–99.

Malkiel, Brenda. “Translation as a Decision Process. Evidence from Cognates.” Babel 55, no. 3 (2009): 228–43.

Schepens, Job, Ton Dijkstra, Franc Grootjen, and Walter J. B. van Heuven. "Cross-Language Distributions of High Frequency and Phonetically Similar Cognates." Edited by Kevin Paterson. PLoS ONE 8, no. 5 (May 10, 2013): e63006. doi:10.1371/journal.pone.0063006.

P1.9

Investigating the relationship between L1 and L2 collocational processing in the bilingual mental lexicon from a cross-linguistic perspective

Hakan Cangir

Ankara University, Turkey

Some researchers have attempted to model the mental lexicon of monolinguals and have proposed various accounts; others have been intrigued by the mental lexicon of bilinguals and multilinguals and have put forward several suggestions. Each has focused on different aspects of lexical acquisition, either in a native language or in a second language. These theories and models have been tested in many ways; however, no study, to the author's knowledge, has tried to shed light on the cross-linguistic processing of collocations in the bilingual mental lexicon. In addition, the issue of cross-linguistic lexical priming is investigated here from a syntagmatic perspective with the help of Turkish, a typologically different language that previous research has largely neglected. It is assumed that frequency, congruence, and typological variety are likely to have an impact on lexical processing, and on collocations in particular.

The main tools used during the item development process were the Corpus of Contemporary American English (COCA) and the Turkish National Corpus (TNC), which are claimed to be the largest corpora of their kind. The priming experiment itself was run in DMDX, which was used to record response times and error rates in a cross-linguistic lexical decision task.

Building on lexical priming theory, which suggests that every word is primed to occur with the particular words it collocates with, the study takes the Spreading Activation Model as the underlying account of lexical activation and examines the cross-linguistic aspect of collocational priming in bilinguals. Furthermore, the Psycholinguistic Model of Vocabulary Acquisition in L2 is employed as the core framework for cross-linguistic collocational priming, given the two different language acquisition settings reflected in the study, i.e. English as a Second Language (ESL) and English as a Foreign Language (EFL).


The initial phase of the study will be conducted in early April 2017 and the researcher is aiming to present the preliminary results of the experiment together with a comprehensive literature survey of lexical priming enhanced by corpora.

The findings could provide evidence for cross-linguistic spreading of lexical access at the collocational level and may clarify whether, or to what extent, collocation frequency, congruence and typological variety play a role in this process. Furthermore, the results of the current study could strengthen the language non-selective hypothesis of bilingual lexical access by providing evidence for spreading cross-language collocation access in an understudied language (Turkish).

Keywords: collocational priming, mental lexicon, bilingual, corpora, cross-linguistic

P1.10

Incidental learning of morphosyntax in a second language during conversation: The acquisition of stem allomorphy in German strong verbs by native speakers of Dutch

Eva Koch 1, Johanna de Vos 2 and Kristin Lemhöfer 2

1 Vrije Universiteit Brussel, Belgium 2 Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands

While SLA through immersion is very common, our knowledge about the underlying mechanisms of uninstructed SLA is restricted. Although there has been a growing body of psycholinguistic research investigating explicit and implicit L2 knowledge, learning and training (DeKeyser, 2003), comparative studies have often been biased toward advantages for explicit instruction and learning (Morgan-Short et al., 2012), and many studies tend to use a (semi-)artificial language paradigm (Rebuschat & Williams, 2012), limiting the generalizability of findings to SLA in natural contexts.

Our study investigates the acquisition of a morphosyntactic aspect in a natural language (stem-vowel change in German strong verbs) in a communicative, yet experimentally controlled context (De Vos et al., in prep.). We used a meaning-based conversational task to measure learning from native-speaker input and compared learning outcomes of advanced L2 German learners (L1 Dutch) in an implicit (n=10; a cover story concealed the study’s intentions) and explicit (n=10; learners were instructed to focus on strong-verb inflection) instruction condition.

In both conditions, the participant and the experimenter (L1 German) engaged in a scripted dialogue and produced, in turn, picture-based sentences containing either a stem-vowel-changing or a non-stem-vowel-changing verb. Learning was measured in terms of participants' improvement in accuracy after exposure, compared to accuracy on verbs for which no input had been provided. Comparable amounts of learning were found for both groups; explicit instruction did not have an apparent added value. A retrospective interview revealed that participants in the implicit group had noticed the strong verbs but were unaware of the study's learning purpose, suggesting that learning was incidental. Our findings illustrate that morphosyntactic learning can occur during conversation.

References

De Vos, J., Schriefers, H., & Lemhöfer, K. (in preparation). Naturalistic incidental spoken L2 word learning and retention: An experimental study.

DeKeyser, R. M. (2003). Implicit and explicit learning. In C. J. Doughty & M. H. Long (Eds.), The handbook of second language acquisition (pp. 313–348). Oxford: Blackwell.


Morgan-Short, K., Steinhauer, K., Sanz, C., & Ullman, M. T. (2012). Explicit and implicit second language training differentially affect the achievement of native-like brain activation patterns. Journal of Cognitive Neuroscience, 24(4), 933–947.

Rebuschat, P., & Williams, J. N. (2012). Implicit and explicit knowledge in second language acquisition. Applied Psycholinguistics, 33(4), 829–856.

P1.11

How second language skills depend on cognitive functions

Evy Woumans, Sofie Ameloot, Emmanuel Keuleers and Eva Van Assche

Ghent University, Belgium

Lately, the psycholinguistic research field is buzzing with the idea that bilingualism may lead to enhanced cognitive functioning, but what about the other way around? Can enhanced cognitive functioning also lead to bilingualism? Naturally, the only way to acquire a language is being exposed to it. Still, Kapa and Colombo (2014) demonstrated that cognitive ability may actually predict the success of learning an artificial language. Regarding this study as an interesting precursor, we set out to determine whether natural second language (L2) learning also depends on certain cognitive control skills.

Hence, we monitored the progress of 40 children in an L2 immersion context as they first started to acquire their new language. Employing a longitudinal design, we obtained measures of both cognitive control and working memory before any L2 acquisition took place (T0) and after one school year of L2 exposure through immersion (T1). This also gave us the opportunity to take into account the children's first language (L1) proficiency at baseline and its development in an immersion environment. Task administration included measures of fluid intelligence, attentional shifting, inhibitory control, working memory, L1 vocabulary, and socioeconomic status.

Our results largely mirrored those of Kapa and Colombo. Initial scores on measures of inhibitory control, attentional shifting, and working memory were predictive of L2 vocabulary acquisition. At the same time, progress on IQ, inhibitory control, attentional shifting, and L1 vocabulary was also identified as a contributing factor. It thus seems that not only initial cognitive abilities but also the rate of cognitive development determines the pace of L2 learning. Furthermore, we found that variables influencing L2 learning are not necessarily involved in further L1 development. Here, socioeconomic status seems to play the bigger role.
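
The kind of model behind these statements can be sketched as a simple regression (Python/statsmodels); the file and column names below are hypothetical, since the abstract does not specify the exact predictors or modelling choices.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical wide-format data: one row per child, with baseline (T0) cognitive scores,
    # socioeconomic status, and the L2 vocabulary score after one year of immersion (T1).
    df = pd.read_csv("immersion_children.csv")

    model = smf.ols(
        "l2_vocab_t1 ~ inhibition_t0 + shifting_t0 + wm_t0 + iq_t0 + l1_vocab_t0 + ses",
        data=df,
    ).fit()
    print(model.summary())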

References

Kapa, L. L. & Colombo, J. (2014). Executive function predicts artificial language learning. Journal of Memory and Language, 76, 237-252. doi:10.1016/j.jml.2014.07.004


P1.12 The influence of extreme language control on cognitive control: Neural and behavioral changes after interpreting training

Eowyn Van de Putte, Wouter De Baene and Wouter Duyck

Ghent University, Belgium

In this neuroimaging study, we investigated the influence of extreme language control on cognitive control. More specifically, we asked whether there are anatomical or functional changes in the brain after interpreting training as compared to translation training. We used a longitudinal design with three fMRI tasks tapping executive functioning and a DTI scan to investigate structural connectivity changes in the brain, administered before and after one year of translation or interpreting training. If the bilingual advantage exists and is a consequence of the simultaneous activation of two languages, then this advantage should certainly appear in interpreters, because interpreting involves extreme between-language control: interpreters have to reformulate a spoken message from one language into another, so both language systems need to be activated simultaneously for comprehension and production. We therefore expected that the competition for selection between languages would lead to enhanced domain-general cognitive control and to structural or functional changes in the brain areas that support cognitive control.

Students who followed translation training were used as a control group, in order to minimize differences between the two groups other than the interpreting experience itself. Previous education and the number of languages known were comparable across groups.

We did find significant but small neural differences between the interpreters and the translators after one year of training. We used a very conservative approach by comparing interpreters with translators, rather than with monolinguals; given this conservative comparison, the differences, although small, are still meaningful.

P1.13

BIA+: a rational reconstruction

Stéphan Tulkens, Dominiek Sandra and Walter Daelemans

University of Antwerp, Belgium

The Bilingual Interactive Activation Plus model (BIA+; Dijkstra & Van Heuven, 2002) is one of the most successful computational models of bilingual word recognition. Despite its success, no full implementation of the BIA+ model currently exists. There is a partial implementation based on the original BIA model (Van Heuven & Dijkstra, 1998) and an implementation called SOPHIA, but neither of these implements semantics or the task decision system.

We report on our progress on a rational reconstruction of the BIA+ model. Rational reconstruction is a methodology from the field of Artificial Intelligence in which the natural language description of a model is used to completely implement the model as a computer program.

Through rational reconstruction, we show that the BIA+ model is not fully described in its original publication and subsequent publications. While the model's components have been fully enumerated, there is no description of exactly how these components function.
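
As an illustration of the level of detail such a reconstruction has to make explicit, the sketch below shows an interactive-activation update step in the style of the original BIA simulations; it is not taken from BIA+ or from the authors' reconstruction, and the parameters and node structure are assumptions chosen for illustration only.

    # Illustrative sketch in Python (not the authors' code): the kind of node-update
    # rule a full implementation of an interactive-activation model must specify.
    # Parameter values and the weight structure are assumptions, not BIA+ as published.
    def net_input(node, weights, activations):
        """Weighted sum over sending nodes; only active senders (a > 0) contribute."""
        return sum(w * max(0.0, activations[sender])
                   for sender, w in weights[node].items())

    def update_activation(a, net, rest=-0.1, a_min=-0.2, a_max=1.0, decay=0.07):
        """One McClelland & Rumelhart-style activation update for a single node."""
        if net > 0:
            delta = net * (a_max - a) - decay * (a - rest)
        else:
            delta = net * (a - a_min) - decay * (a - rest)
        return min(a_max, max(a_min, a + delta))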


Additionally, the reconstruction shows that BIA+ perhaps allots too much explanatory weight to the mechanism of language nodes. In BIA+, interlingual homographs, e.g., Dutch-English ROOM, have multiple orthographic representations in the lexicon, one for each language. The reasoning behind this is that participants show inhibition when presented with such a word in a language decision task (Dijkstra & Van Heuven, 1998; Dijkstra, Grainger & Van Heuven, 1999). For intralingual homographs, e.g., English LEAD, similar effects have been observed (Kawamoto & Zemblidge, 1992). If the BIA+ model can account for inhibitory effects for intralingual homographs, then there is no need to posit separate representations based on language nodes, as phonology by itself is enough to differentiate between interlingual homographs. If the BIA+ model cannot account for intralingual homographs, the organisation of the model needs to be revised or clarified.

References

Dijkstra, T., & Van Heuven, W. J. B. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition, 5(3), 175-197.

Van Heuven, W. J. B., Dijkstra, T., & Grainger, J. (1998). Orthographic neighborhood effects in bilingual word recognition. Journal of Memory and Language, 39(3), 458-483.

Kawamoto, A. H., & Zemblidge, J. H. (1992). Pronunciation of homographs. Journal of Memory and Language, 31(3), 349-374.

Dijkstra, T., Grainger, J., & Van Heuven, W. J. B. (1999). Recognition of cognates and interlingual homographs: The neglected role of phonology. Journal of Memory and Language, 41(4), 496-518.

Poster Session 2

P2.1 Phonological bootstrapping with naïve discriminative learning

Giovanni Cassani

University of Antwerp, Belgium

This work primarily aims to investigate the hypothesis of phonological bootstrapping (Cassidy & Kelly, 1991) further, using a psychologically motivated learning framework. Phonological bootstrapping states that children can start to categorize words into coarse lexical categories (mainly nouns and verbs) by attending to the segmental phonological information contained in words. Preliminary research using behavioral and computational methods has provided evidence that this is a viable strategy and could provide a useful bootstrap to lexical category acquisition.

We extend previous research in three ways. First, we make use of a psychologically motivated learning algorithm, namely the Naïve Discriminative Learning (NDL) model (Baayen, Milin, Durdević, Hendrix, & Marelli, 2011), a perceptron that adjusts cue-outcome connections using the Rescorla-Wagner update rule (Rescorla & Wagner, 1972). Second, we only consider surface phonetic features, encoding utterances from child-directed speech as sets of tri-phones or syllables, thus dispensing with hand-crafted features. Third, we train on phrases spanning multiple words, coming closer to the real challenge faced by children than previous work that trained on individual words.
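
To make the learning mechanism concrete, here is a minimal sketch of a Rescorla-Wagner update over triphone cues and lexical-category outcomes; it illustrates the general NDL approach rather than the authors' implementation, and the cues, outcomes, and parameter values are assumptions.

    # Minimal Rescorla-Wagner sketch (illustrative only, not the authors' code):
    # triphone cues predict lexical-category outcomes; weights grow or shrink
    # depending on whether the summed prediction over- or undershoots the outcome.
    from collections import defaultdict

    weights = defaultdict(lambda: defaultdict(float))   # weights[cue][outcome]
    ALPHA_BETA = 0.01                                   # learning rate (alpha * beta)
    LAMBDA = 1.0                                        # maximum associative strength

    def rw_update(cues, present_outcomes, all_outcomes):
        for outcome in all_outcomes:
            v_total = sum(weights[c][outcome] for c in cues)        # current prediction
            target = LAMBDA if outcome in present_outcomes else 0.0
            delta = ALPHA_BETA * (target - v_total)
            for c in cues:
                weights[c][outcome] += delta

    def activation(cues, outcome):
        """Activation of an outcome given the cues of a (possibly unseen) word or phrase."""
        return sum(weights[c][outcome] for c in cues)

    # One hypothetical learning event for the phrase "the dog":
    rw_update({"#th", "the", "he#", "#do", "dog", "og#"}, {"DET", "NOUN"},
              all_outcomes={"DET", "NOUN", "VERB"})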


Preliminary results suggest that training on phrases makes the task much harder. Close inspection of the matrix of associations resulting from training reveals a strong effect of frequency on the activations of lexical outcomes given an unseen word to categorize: regardless of the phonetic features of the unseen word, the same highly frequent lexical outcomes come out as most active. Little category structure emerges from training, with very few phonetic cues discriminating one lexical category from the others. These results are discussed in relation to the specification of the NDL model and how it bears on the task of learning a categorical structure from surface phonetic features.

References

Baayen, R. H., Milin, P., Durdević, D. F., Hendrix, P., & Marelli, M. (2011). An amorphous model for morphological processing in visual comprehension based on naive discriminative learning. Psychological Review, 118(3), 438-481. doi:10.1037/a0023851

Cassidy, K. W., & Kelly, M. H. (1991). Phonological information for grammatical category assignments. Journal of Memory and Language, 30(3), 348-369. doi:10.1016/0749-596X(91)90041-H

Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 497). New York, NY: Appleton-Century-Crofts.

P2.2

Phonology-to-semantics mapping in visual word recognition

Simona Amenta, Marco Marelli and Simone Sulpizio

University of Ghent, Belgium

In the present study, the role of phonological information in visual word recognition is investigated by adopting a large-scale data-driven approach that exploits a new consistency measure based on distributional semantics methods.

Recently, it was shown that the consistency between an orthographic string and the meanings with which it is associated in a large corpus is a relevant predictor in lexical decision experiments (Marelli et al., 2015, QJEP, 68(8), 1571-1583). Orthography-Semantic Consistency (OSC) is a measure of how well the meaning of a given word can be predicted from its form, and is operationally defined as the degree of relatedness between a word’s meaning and the meanings of all the members of its orthographic family, that is, all the words that “contain” the target word. Mathematically, OSC is computed as the frequency-weighted average semantic similarity between the vector of a given word and the vectors of all its orthographic relatives. Marelli et al. showed that OSC has a general effect in visual word recognition by testing it on 1821 words randomly selected from the stimuli included in the BLP database (Keuleers et al., 2012, BRM, 44(1), 287-304).

Exploiting irregular mappings between orthography and phonology in English (e.g., ough is pronounced differently in rough, dough, ought, through, thought, although), we were able to compute a Phonology-to-Semantics Consistency measure (PSC) that dissociates from OSC. Analogously to OSC, PSC is computed as the degree of relatedness between a word’s meaning and the meanings of all the members of its phonological family. We defined as a phonological relative each word that, in its phonological form, contains the phonological sequence of the target word (e.g., “cognac” /ˈkɒnjæk/ for “yak” /ˈjæk/).
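
As a worked illustration of how such a consistency measure can be computed, the sketch below implements a frequency-weighted average cosine similarity between a target word's semantic vector and the vectors of its family members; the vectors, frequencies, and family-finding step are assumptions, not the authors' actual pipeline.

    # Illustrative sketch (not the authors' code): an OSC/PSC-style consistency score
    # as the frequency-weighted mean cosine similarity between a word's vector and
    # the vectors of its orthographic or phonological relatives.
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def consistency(target, relatives, vectors, frequencies):
        """vectors: dict word -> semantic vector; frequencies: dict word -> corpus frequency."""
        weights = np.array([frequencies[r] for r in relatives], dtype=float)
        sims = np.array([cosine(vectors[target], vectors[r]) for r in relatives])
        return float(np.sum(weights * sims) / np.sum(weights))

    # OSC would pass orthographic relatives (words whose spelling contains the target word),
    # PSC would pass phonological relatives (words whose pronunciation contains the target word).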


We tested both OSC and PSC on lexical decision response latencies to 533 words, extracted from the BLP, containing grapheme sequences that are associated with multiple phonological forms. This criterion was aimed at maximizing the difference between the semantic information associated with the orthographic form vis-à-vis the phonological form, as captured by OSC and PSC.

Results showed that both orthography and phonology are activated during visual word recognition. However, their contribution is crucially determined by the extent to which they are informative about the word’s semantics, with phonology playing a crucial role in accessing word meaning. This is direct evidence that readers use phonological information to activate semantics, even when the task does not apparently require it (visual word recognition).

P2.3

Serial order effects in short-term memory: New Perspectives from Nonword Repetition

Kirsten Schraeyen 1,2, Astrid Geudens 1, Pol Ghesquière 3 and Dominiek Sandra 2

1 Thomas More UC, University of Leuven, Belgium 2 University of Antwerp, Belgium 3 University of Leuven, Belgium

Background & rationale

Researchers widely accept that disabled readers exhibit problems with phonological skills in tasks involving short-term memory (STM) (Ramus, 2003). One suggestion is that disabled readers have qualitatively inferior phoneme representations. The central question of our studies was whether the quality of phoneme representations is the only crucial factor for adequate encoding, storage, and/or retrieval of phonological representations, or whether other phoneme-related factors also affect literacy skills.

Studies & procedure

As our focus was on phonological representations in STM, we used the Nonword Repetition Task (NRT). Asking participants to repeat nonwords obliges them to use their STM for storing and retrieving a phonological representation. The correct repetition of a nonword requires (i) retention of the phonemes’ identity and (ii) retention of the phonemes’ serial order.

Focusing on these variables makes it possible to reach a precise understanding of NRT performance by using only a single set of items (nonwords). This allowed us to follow the rationale that has been used in recent studies (for a review, see Majerus & Cowan, 2016): making a distinction between participants’ skill in the retention of items’ identity and serial order in STM.

In a first study (Schraeyen et al., under review), we compared 45 adults with dyslexia and 46 controls. In a second study (Schraeyen et al., in press), we tested a group of 89 Dutch-speaking third-graders with different literacy skills. Retention performance was scored on binary variables.
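
The abstract does not spell out how the binary retention scores were derived; purely as a hypothetical illustration, one common way to separate item-identity from serial-order scoring in nonword repetition is sketched below (the scoring rule and phoneme strings are assumptions, not the authors' procedure).

    # Hypothetical scoring sketch (not the authors' scoring rule): for each target
    # phoneme, score whether it was recalled at all (identity) and whether it was
    # recalled in its target serial position (order).
    def score_repetition(target, response):
        """target, response: lists of phonemes, e.g. list('bakifo')."""
        scores = []
        for pos, phoneme in enumerate(target):
            identity = phoneme in response
            in_order = identity and pos < len(response) and response[pos] == phoneme
            scores.append({"phoneme": phoneme, "identity": int(identity), "order": int(in_order)})
        return scores

    # Transposition example: all phonemes recalled, but two of them out of order.
    print(score_repetition(list("bakifo"), list("bakfio")))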

Results & Discussion

We used generalized linear mixed-effects models to analyze Phoneme Identity performance and Serial Order performance, treating Group (dyslexics, control) or Literacy (Studies 1 and 2, respectively), Syllable Length, Literacy x Syllable Length, and one of the phoneme-related dependent variables as fixed effects. The latter predictor was included to remove its correlation with the outcome variable. Participants and items were included as random effects.
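
As a purely hypothetical sketch of the kind of model described here, the snippet below fits a logistic mixed-effects model with crossed random effects for participants and items using the pymer4 package (which wraps R's lme4); the variable names and data frame are assumptions, not the authors' actual code or data.

    # Hypothetical sketch (assumed variable names, not the authors' analysis script):
    # binary serial-order retention modeled with crossed random effects.
    import pandas as pd
    from pymer4.models import Lmer

    df = pd.read_csv("nrt_scores.csv")   # assumed file: one row per scored response

    model = Lmer(
        "order_correct ~ literacy * syllable_length + identity_correct "
        "+ (1 | participant) + (1 | item)",
        data=df,
        family="binomial",               # binary outcome
    )
    model.fit()
    print(model.coefs)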


The results from Study 1 showed that dyslexic adults did not differ from matched controls at the level of retention of phoneme identity but performed significantly worse on the retention of serial order. The results of Study 2 converged with these findings. Children’s literacy level predicted their performance in the retention of phonemes’ serial order but not performance in the retention of phoneme identity.

More skilled readers’ superior retention of phonemes’ serial order might be the result of a better developed domain-independent STM mechanism. However, as serially ordered phonemes form higher-order phonological units, governed by language-specific phonotactic constraints, phonotactic knowledge in long-term memory might offer an alternative account.

P2.4

Classification of phonological and orthographic errors in children with and without familial risk for dyslexia playing a serious computer game.

Toivo Glatz, Francien van Dijk, Marieke Van de Merlen, Wim Tops and Ben Maassen

University of Groningen, The Netherlands

Purpose:

GraphoGame (GG) is a serious game designed to train grapheme-phoneme correspondences in children. GG can boost reading abilities in transparent writing systems like Finnish. In more opaque orthographies, like English and Dutch, there is no conclusive evidence on whether such an approach supports long-term reading development. In addition, evaluating the effectiveness of GG with reading accuracy and timed reading alone may not be sensitive enough to track improvements. We therefore also explore the in-game data created by school-aged children playing this educational computer game during school hours or at home.

Method:

During a playing phase of 5-7 weeks, 88 first-graders (aged 6;3) played a Dutch adaptation of GG. Eighteen of these children had a familial risk for dyslexia (FR), of whom about half are expected to become dyslexic, while the remaining 70 had no familial risk (NFR). In the first and the last playing session, children were exposed to a letter-speech-sound association task containing highly confusable distractors (e.g., visually presented <b>, <p>, <d>, <q> after hearing /d/). Subsequently, we isolated all wrong responses and classified the errors as either phonological, visual-orthographic, a combination of both, or unclassifiable.
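
To illustrate what such an error classification might look like in code, here is a hypothetical sketch; the confusability sets are assumptions built around the /d/ example above, not the classification scheme actually used by the authors.

    # Hypothetical error-classification sketch (confusability sets are assumptions,
    # not the authors' criteria): classify a wrong letter response to a spoken /d/.
    PHONOLOGICAL_NEIGHBOURS = {"d": {"t", "b"}}     # sounds assumed confusable with /d/
    VISUAL_NEIGHBOURS = {"d": {"b", "p", "q"}}      # letter shapes assumed confusable with <d>

    def classify_error(target, response):
        phonological = response in PHONOLOGICAL_NEIGHBOURS.get(target, set())
        visual = response in VISUAL_NEIGHBOURS.get(target, set())
        if phonological and visual:
            return "phonological + visual-orthographic"
        if phonological:
            return "phonological"
        if visual:
            return "visual-orthographic"
        return "unclassifiable"

    print(classify_error("d", "q"))   # visual-orthographic under these assumptions
    print(classify_error("d", "t"))   # phonological under these assumptions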

Results:

NFR children made significantly fewer errors after several weeks of playing GG, and this effect applied to all types of errors (all p < .01). FR children, on the other hand, only made marginally fewer errors of the visual-orthographic category (p = .064), while the other error types remained unchanged (p > .3).

Conclusion:

GG has been played by over 200,000 children in a variety of languages. By experimental psycholinguistic research standards, we are dealing with a very large behavioural dataset, and there is little previous work on how to make sense of individual playing data. For this research, we manually explored a small fraction of the available data and found that children with a familial risk for dyslexia can be distinguished from their peers based on changes in their error pattern over the gameplay, as indicated by confusability in a letter-speech-sound association task. Possible applications of our work include detecting populations at risk for reading impairments earlier than is currently possible with standardized pencil-and-paper tests. Furthermore, there is potential for further optimization of computerized training and for dynamic assessment of reading skills by means of adaptive modeling of individual knowledge.

P2.5

Visual statistical learning in school-aged children

Merel van Witteloostuijn, Imme Lammertink, Paul Boersma, Frank Wijnen and Judith Rispens

University of Amsterdam, The Netherlands

Statistical learning refers to the human ability to extract statistical properties from auditory and visual patterns in the world around us. In the lab, it is usually tested through exposure to a continuous stream of stimuli, followed by an offline test phase that consists of two-alternative forced choice (2-AFC) questions. However, this research method has yielded mixed results in children. We assessed children’s visual statistical learning (VSL) abilities through an online reaction time (RT) measure and two distinct offline question types. In doing so, we set out to investigate whether these novel methods of evaluating VSL can track learning over time and more carefully measure outcome accuracy.

Fifty-three Dutch-speaking children aged between 5;9 and 8;7 (M = 7;3) participated in the present study. They performed a VSL task that consisted of a familiarization phase and a subsequent test phase. During familiarization, children were exposed to a continuous stream of 12 stimuli, which were organized into four groups of three stimuli (triplets) presented 24 times each. Importantly, participants determined the presentation speed by pressing a button to proceed to the next stimulus (Siegelman, Bogaerts, & Frost, submitted). We expected slower RTs for unpredictable stimuli (stimulus 1) than for predictable stimuli (stimuli 2 and 3 within triplets). Following familiarization, participants completed 24 2-AFC questions (choose the correct triplet, chance = 50%) and 16 3-AFC questions (fill the blank to complete the triplet, chance = 33%) in the test phase of the experiment.

Results revealed that children’s performance did not exceed chance level on the traditional 2-AFC questions (51%, p = .37). Based on these results, one might conclude that no learning took place during this task. However, the children performed significantly above chance on the 3-AFC questions (38%, p = .048). The online measure provided additional evidence that participants were sensitive to the triplet structure, as RTs were slower for stimulus 1 than for stimuli 2 (p < .001) and 3 (p ≤ .01). These results thus underline the importance of combining an online measure with several offline measures when assessing VSL in children. In our ongoing project, we will use these methods to measure statistical learning abilities in children with dyslexia, in order to investigate the hypothesis that literacy deficits in dyslexia stem from an underlying problem with statistical learning.
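
For readers unfamiliar with the chance levels involved, the sketch below shows how a single child's offline accuracy could be tested against chance for each question type; the counts are hypothetical and the authors' actual group-level analysis may well differ.

    # Hypothetical illustration (not the authors' analysis): per-child binomial tests
    # against the chance levels of the two offline question types.
    from scipy.stats import binomtest

    two_afc = binomtest(k=14, n=24, p=0.50, alternative="greater")    # assumed 14/24 correct
    three_afc = binomtest(k=9, n=16, p=1/3, alternative="greater")    # assumed 9/16 correct

    print(f"2-AFC: {14/24:.0%} correct, p = {two_afc.pvalue:.3f}")
    print(f"3-AFC: {9/16:.0%} correct, p = {three_afc.pvalue:.3f}")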

References

Siegelman, Bogaerts & Frost (submitted). The dynamics of statistical learning: What does an online measure reveal about the learning of quasi-regularities? Cognitive Science.


P2.6 Surprisal as an Index of Semantic Predictability in Naturalistic Language Processing

Ye Zhang 1, Diego Frassinelli 2, Jyrki Tuomainen 1 and Gabriella Vigliocco 1

1 University College London, London 2 University of Stuttgart, Stuttgart, Germany

Contexts can make some upcoming words more predictable, which reduces the N400 effect (Kutas & Federmeier, 2000). However, current N400 studies typically rely on experimentally manipulated congruent and incongruent conditions. Can word predictability be related to N400 amplitude in naturalistic sentences? Frank, Otten, Galli and Vigliocco (2015) found that word surprisal (a measure of predictability), computed by an n-gram model over the linguistic context, has a linear relationship with N400 amplitude. In the current study, we test whether this result is replicable.
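
For readers unfamiliar with the measure, surprisal is simply the negative log probability of a word given its context under a language model; the toy bigram sketch below illustrates the computation and is not the n-gram model actually used in the study.

    # Toy illustration of surprisal under a bigram model (counts are invented):
    # surprisal(w_t) = -log2 P(w_t | w_{t-1}); rarer continuations carry more bits.
    import math
    from collections import Counter

    bigram_counts = Counter({("the", "student"): 8, ("the", "centre"): 2})
    context_counts = Counter({"the": 10})

    def surprisal(prev_word, word):
        p = bigram_counts[(prev_word, word)] / context_counts[prev_word]
        return -math.log2(p)

    print(surprisal("the", "student"))   # ~0.32 bits (highly predictable)
    print(surprisal("the", "centre"))    # ~2.32 bits (less predictable)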

Our materials were sentences selected from a corpus of natural language, and 9 participants performed a word-by-word reading task. In the first stage of data processing, we focused on the verbs in our sentences and found an N400 effect. We then split the words into high- and low-surprisal groups (surprisal differed significantly between groups; t-test, p < 0.001). In the ERP plots, the high-surprisal group showed a more negative N400 signal at Fz, Cz, and Pz. In the next stage, we are analysing data from all words and fitting a linear mixed-effects regression model, following the practice of Frank et al.

This replication will provide new insight. So far, we have shown that an N400 effect can be obtained in a naturalistic setting, with corpus sentences instead of manipulated ones. The surprisal effect visible in the ERP plots suggests that surprisal may be related to N400 amplitude. If the replication succeeds, it will support the hypothesis that predictive processing is related to the N400 effect and to language processing.

If the linear relationship between the computational model and the N400 can be replicated with visual stimuli, we will test other modalities, including audio and audio-visual. In the audio condition, participants will listen to recorded sentences, to test whether the effect of surprisal (if it exists) is consistent across modalities. In the audio-visual condition, participants will watch videos of actors making iconic gestures or sitting still while speaking. We want to test whether the additional gestural information makes the N400 less negative at a given surprisal level.

References

Frank, S. L., Otten, L. J., Galli, G., & Vigliocco, G. (2015). The ERP response to the amount of information conveyed by words in sentences. Brain and Language, 140, 1-11.

Kutas, M., & Federmeier, K. D. (2000). Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences, 4, 463-470.

P2.7

Managing expectations in conversations: The role of eigenlijk and inderdaad

Marlou Rasenberg 1,2 and Geertje van Bergen 2

1 Radboud University Nijmegen, The Netherlands 2 Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

The Dutch discourse particles (DPs) eigenlijk and inderdaad are highly frequent in everyday conversations and have been theoretically defined as devices that a speaker can use to signal (non)alignment with the hearer’s expectations (e.g., van Bergen et al., 2011), which is assumed to facilitate communication. In this research project we investigate experimentally whether DPs have a facilitating effect in predicting and processing (un)expected information during discourse comprehension.

We report the results of a cloze test in which participants completed mini-conversations. These consisted of a high-constraining context and question (aiming at a specific target word coherent with the discourse situation), followed by an incomplete answer. The results show that participants are more likely to complete the answer with the discourse-coherent target word if the answer contained inderdaad or a neutral adverb than when it contained eigenlijk – the latter DP induced the participants to consider alternative responses to the question.

The results thus show that the DPs modulate discourse-based predictions about upcoming words. Subsequently we hypothesize that eigenlijk will help the hearer to integrate unexpected information during language understanding. This will be investigated by means of an ERP experiment, of which the general outline will be presented on the poster.

van Bergen, G., van Gijn, R., Hogeweg, L., & Lestrade, S. (2011). Discourse marking and the subtle art of mind-reading: The case of Dutch eigenlijk. Journal of Pragmatics, 43(15), 3877- 892.

P2.8

Shedding light on inferential processing

Jonas Diekmann and Dietmar Roehm

University of Salzburg, Austria

When confronted with a complex set of interrelated sentences, the human brain needs a set of strategies to extract the most relevant information from the continuously unfolding stream of events. Just and Carpenter [1] were the first to investigate the influence of inferential processing on readers’ eye movements. They reported increased reading times for less explicit verb-agent relations. In an ERP (event-related potentials) study, Burkhardt [2] reported that inferential processing elicited a posteriorly distributed P600 effect, which is currently being discussed as a correlate of the updating of a mental discourse representation [3]. To examine the relation between electrophysiological correlates of inferential processing and eye-movement data, we adapted Burkhardt’s design in an eye-tracking study. Further, to extend previous research, we investigated the influence of discourse prominence on inferential processing. We used a 3x2 design with three context types (CONTEXT) that introduced either a necessary (HIGH), plausible (MED) or inducible (LOW) inferential relation between a described event and an implicit actor or instrument, via a manipulation of the context verb. Afterwards, the implicit actor or instrument (PROMINENCE: ACTOR/INSTRUMENT) was explicitly mentioned in the target sentence.

Context sentence:

HIGH/MED/LOW: On Friday, a student was shot/killed/found dead at the centre.

Target sentence:

ACTOR: The press reported that the shooter was already arrested.

INSTRUMENT: The press reported that the pistol was from army stocks.

Summarizing, we observed increased total reading times for discourses involving more difficult inferences (HIGH vs. LOW). This effect was already observable in early regions of the presented discourses. For the critical actor/instrument in the target sentence, we found that less plausible inferential relations resulted in increased first-pass reading times as well as regression path durations. Thus, as readers engaged in the processing of entities that were less plausibly related to the preceding context, they spent more time on regressive eye movements while trying to maintain discourse coherence. In contrast, the prominence manipulation resulted in a spillover effect for the instrument conditions; inferential linking processes might thus be delayed for a less prominent entity. Further, we observed increased total reading times and numbers of fixations in the instrument conditions as compared to the actor conditions. These observations might be interpreted in terms of the increased cognitive effort needed to integrate a less prominent entity into a comprehender’s mental representation.

References

[1] Just & Carpenter, 1978, Eye Movements and the higher psychological functions.

[2] Burkhardt, 2007, Neuroreport.

[3] Brouwer & Hoeks, 2012, Brain Research.

P2.9

How similar are word representations derived from different data sources?

Maria Montefinese and David Vinson

University College London, London, UK

Semantic representation – the knowledge that we have of the world – is an essential component of our mind. Many research efforts have been made to understand its nature, proposing that it could be based upon similarity measures – the extent to which two concepts have similar meanings – arising either from our sensorimotor experience or from regularities in spoken and/or written language. In particular, within experiential/embodied views, similarity measures can be inferred from the overlap of feature vectors for each concept pair derived from feature listing tasks (McRae et al., 2005; Montefinese et al., 2013; Vinson & Vigliocco, 2008) or property ratings (Binder et al., 2016). Similarity measures proposed in the distributional view, instead, can be inferred from the distribution of lexical co-occurrences of each concept pair across word corpora or texts (Maki et al., 2004). Similarity may also be estimated from tasks like priming, naming, word association, and other rating tasks (De Deyne & Storms, 2008; Hutchison et al., 2008). Here, we tested the extent to which similarity representations based on different measures converge. Firstly, using 1209 words (169 adjectives, 878 nouns, 162 verbs) from Buchanan et al.’s norms (2013), we computed similarity matrices based on 31 semantic-lexical dimensions of various types. Secondly, we computed Pearson’s correlations between each pair of similarity matrices (representational similarity analysis) and performed a multidimensional scaling analysis to explore the distribution of similarity measures in a 2-dimensional space. Results showed that similarity in age of acquisition and word length was related to similarity in naming and priming. The co-occurrence-based similarity was related to the affective-based one and, interestingly, also to similarity in concreteness. More importantly, representational resemblance was found among embodied, distributional, and association-based representations, demonstrating that different data sources are employed in a similar way in building meaningful conceptual representations. Together these results allow a better characterization of different similarity measures, providing new knowledge on the nature of semantic representation.
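
As a minimal sketch of the representational similarity analysis step described above (correlating pairs of word-by-word similarity matrices), consider the following; the random vectors merely stand in for the real feature-based and co-occurrence-based representations, so this illustrates the method rather than the authors' pipeline.

    # Minimal RSA sketch (illustrative only): correlate the pairwise similarity
    # structures derived from two different word representations.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_words = 50
    feature_vectors = rng.random((n_words, 20))         # stand-in for feature-listing vectors
    cooccurrence_vectors = rng.random((n_words, 300))   # stand-in for corpus co-occurrence vectors

    # pdist returns the condensed upper triangle: one value per word pair,
    # which is exactly what representational similarity analysis correlates.
    sim_features = 1 - pdist(feature_vectors, metric="cosine")
    sim_cooccurrence = 1 - pdist(cooccurrence_vectors, metric="cosine")

    r, p = pearsonr(sim_features, sim_cooccurrence)
    print(f"representational similarity: r = {r:.2f}, p = {p:.3f}")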


References

Binder, JR et al. (2016). Cognitive Neuropsychology, 33(3-4):130-174.

Buchanan, EM et al. (2013). Behavior Research Methods, 45(3):746-757.

De Deyne, S, & Storms, G. (2008). Behavior Research Methods, 40(1), 198-205.

Hutchison, KA. et al. (2013) Behavior Research Methods, 45(4): 1099-1114.

Maki, WS et al. (2004). Behavior Research Methods, 36(3):421-431.

McRae, K et al. (2005). Behavior Research Methods, 37(4):547-559.

Montefinese, M et al. (2013). Behavior Research Methods, 45(2):440-461.

Vinson, DP & Vigliocco, G. (2008). Behavior Research Methods, 40(1):183-190.

P2.10

Predicting affective word variables

Hendrik Vankrunkelsven 1, Simon De Deyne 1,2 and Gert Storms 1

1 University of Leuven, Belgium 2 University of Adelaide, Australia

We compared two semantic models on the quality of their predictions of human ratings for a number of word variables, with an emphasis on affective variables. We used an external language model, in which the meaning of words is derived from the contexts in which they occur in a text corpus, and an internal language model based on data from a continuous free word association task. Using these models, we predicted valence, arousal, and dominance (three affective variables argued to be important in semantics), as well as concreteness and age of acquisition (AoA). Furthermore, we also tested how well these models can account for the theory that abstract words make more use of affective information. We used all Dutch words that were available in the different datasets involved. For these 2,831 words, we predicted our variables of interest using two methods: (1) the projected location on a direction identified as the variable of interest (using property fitting) in multidimensional representations, obtained by applying multidimensional scaling (MDS) to pairwise similarities, and (2) the mean of the k nearest neighbors of the target item based on pairwise similarities. We used these two methods for both semantic models, varying the same set of parameters, and analyses were cross-validated using a leave-one-out approach. For all affective variables and parameter values (dimensionality of the MDS solution, and values of k), we found higher agreement between predictions and human ratings for the association model. Using the best method and parameters, we obtained correlations of .92, .85, and .85 for valence, arousal, and dominance, respectively. The corresponding correlations based on the text corpus were .80, .74, and .67. For concreteness and AoA, the highest correlations obtained from the two types of data were similar, .88 and .73, respectively. After a median split on concreteness, separating the data into relatively concrete and relatively abstract words, the predictions of all affective variables were better for abstract than for concrete words when using the association model. For the text corpus model this was not the case: only the prediction of valence was better for abstract words, predictions of arousal were considerably worse for abstract words, and for dominance there was no difference between abstract and concrete words. All in all, we showed that the word association model is better at capturing affective word variables than the text corpus model we used, and that predictions based on word associations align with the theory that abstract words make more use of affective information.
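
As a minimal illustration of the second prediction method (the mean over k nearest neighbors in a pairwise-similarity matrix, with leave-one-out cross-validation), here is a hedged sketch; the data structures and names are assumptions, not the authors' code.

    # Illustrative sketch (not the authors' code): predict a word's valence as the
    # mean rating of its k nearest neighbors, leaving the target word itself out.
    import numpy as np

    def knn_predict(similarities, ratings, k=10):
        """similarities: (n, n) word-by-word similarity matrix; ratings: (n,) human ratings."""
        n = len(ratings)
        predictions = np.empty(n)
        for i in range(n):
            sims = similarities[i].copy()
            sims[i] = -np.inf                       # leave-one-out: exclude the target itself
            neighbours = np.argsort(sims)[-k:]      # indices of the k most similar words
            predictions[i] = ratings[neighbours].mean()
        return predictions

    # Agreement with human ratings, as reported in the abstract, could then be assessed as:
    # r = np.corrcoef(knn_predict(sim_matrix, valence_ratings), valence_ratings)[0, 1]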

P2.11

Homophones and their representations in the mental lexicon

Frauke Hellwig and Peter Indefrey

Heinrich-Heine-Universität Düsseldorf, Germany

Homographic homophones are argued to differ from non-homophonous words in that they share a so-called word-form layer in which their common phonological representation is stored (e.g., Jescheniak, Meyer & Levelt, 2003). Evidence for this mental representation comes from a translation task showing that low-frequency (LF) homophones are named faster than lemma-frequency-matched non-homophonous words. Their naming latencies instead correspond to those of non-homophonous words that are as frequent as the sum of the LF homophone and its high-frequency (HF) twin.

On the other hand, Gahl (2008) found a systematic difference in pronunciation length between LF and HF heterographic homophonic word forms when analyzing a corpus of American English natural speech, showing that HF homophones are pronounced faster than their LF counterparts. She argues that this is an indication against partly overlapping mental representations of homophones.

To shed more light on this debate, we tried to replicate Gahl’s findings experimentally, making use of the German stimuli of Jescheniak, Meyer & Levelt (2003). We added 44 homophones to the stimulus list, whose frequencies were estimated on the basis of the corpus of the Berliner Zeitung (www.dwds.de) and of SUBTLEX-DE (Brysbaert et al., 2011). Our homophones were all nouns, which enabled us to embed them into sentences that were virtually identical up to the homophone and could only be disambiguated after it. We employed a reading-retelling paradigm in which subjects had to read the sentences first silently and then aloud, after which they had to repeat the sentences from memory. Data analysis is still under way, but we hope to present the results of this study at PIF.

References

Brysbaert, M., Buchmeier, M., Conrad, M., Jacobs, A.M., Bölte, J., & Böhl, A. (2011). The word frequency effect: A review of recent developments and implications for the choice of frequency estimates in German. Experimental Psychology, 58, 412-424.

Gahl, S. (2008). Time and thyme are not homophones: The effect of lemma frequency on word durations in spontaneous speech. Language, 84, 474-496.

Jescheniak, J. D., Meyer, A. S., & Levelt, W. J. M. (2003). Specific-word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 432–38.


P2.12 Reading stories in the classroom to improve the language of children from a low socioeconomic background

Nathalie Thomas, Jacqueline Leybaert and Philippe Mousty (†)

Université Libre de Bruxelles, Belgium

Context. There are huge disparities between children in the acquisition of basic written language skills, with important consequences for literacy development: in 2015, only 25% of students in the fourth year of primary school in French-speaking Belgium had good reading skills (FW-B portal). Several risk factors can explain these reading difficulties: a language delay (Catts, Fey, Tomblin, & Zhang, 2002), a low socioeconomic background, and/or the language of schooling being a second language for the child (Baye, 2010). Oral language and pre-literacy skills predict later written language abilities and can be stimulated through storytelling and dialogic reading techniques (Justice, 2007). Language targets (such as vocabulary, understanding of the story, phonological awareness and print knowledge) are presented in an explicit way during dialogic reading sessions and can improve early written language acquisition (Justice and Kaderavek, 2004).

Our study aims to examine whether dialogic reading sessions can improve the language and pre-literacy skills of children from a low socioeconomic background.

Method. We recruited 8 schools from Brussels’ city center; these schools benefited from “positive discrimination”. Children were randomly assigned to a control group (N = 84, from 9 kindergarten classes) or an experimental group (N = 177, from 10 kindergarten classes). The teachers of the experimental group carried out 30 dialogic reading sessions with their preschoolers over a period of 3 months (3 sessions a week). These sessions aimed at stimulating vocabulary, story understanding, phonological awareness and print references (as specific language targets). The control group received no specific intervention. All children were individually evaluated pre- and post-intervention with standardized language tests.

Results. Children from the experimental group showed significant post-intervention differences for vocabulary, morphosyntax, phonological awareness and print knowledge. The improvement in vocabulary was particularly and significantly beneficial for children presenting a language delay. We observed the same tendency for morphosyntax.

Conclusion. We tested a new way of using dialogic reading techniques in “positive discrimination” classrooms in Brussels’ city center. We demonstrated that this intervention can positively enhance language development, including for children presenting a language delay. Dialogic reading allows them to improve their language skills and to “catch up” with peers who do not present the same language delay. In this way, we can hope for better literacy development for these children.

Keywords: preschool, dialogic reading, language delay, low-income background.

P2.13

Textual quality in young university adults with and without written language difficulties

Pierre-André Patout and Alain Content

Université Libre de Bruxelles, Belgium

Research on written language impairments has focused mainly on childhood and on reception-comprehension, and much less on written production. In addition, most of the literature deals with spelling, capitalisation and punctuation rather than with textual characteristics.


Hence, what about other linguistic features? What about the textual level – above the more usual word and sentence levels? Can we predict textual quality in production from the usual reading assessment tools? Is there a difference between young adults with and without written language difficulties in the cohesion and coherence of their productions (as criteria of textual quality)? Answering these questions is the aim of our research. We used three tasks to assess word recognition and spelling (lexical decision, spelling choice and dictation). The participants also had to produce either (a) a narrative based on a comic strip, (b) an argumentation based on a video-taped debate, or (c) a summary of an informative text. Textual cohesion and coherence are estimated, on the one hand, through 15 criteria assessed by naive judges on Likert scales, and on the other hand with quantitative data. After measuring inter-rater agreement, we will carry out principal component analyses in order to identify the main factors and use multiple regression to determine which measures from the reading assessment tools are relevant for predicting textual quality.
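
As a purely hypothetical sketch of this analysis plan (file names, column structure and the number of components are assumptions, not the authors' materials), the steps could look as follows.

    # Hypothetical analysis sketch (assumed inputs, not the authors' data or script):
    # PCA over the 15 judge-rated criteria, then regression of the first textual-quality
    # component on the reading/spelling assessment scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    ratings = np.loadtxt("judge_ratings.csv", delimiter=",")     # (participants, 15 criteria), assumed file
    reading = np.loadtxt("reading_scores.csv", delimiter=",")    # lexical decision, spelling choice, dictation

    pca = PCA(n_components=3)
    components = pca.fit_transform(ratings)                      # candidate textual-quality factors
    print("explained variance:", pca.explained_variance_ratio_)

    model = LinearRegression().fit(reading, components[:, 0])
    print("R^2 of reading measures predicting textual quality:",
          model.score(reading, components[:, 0]))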

Keywords: Textual Quality, Written Language Difficulties, Written Production

P2.14

Eliciting connectivity: Productions in editing tasks, in three conditions

Robert Maier

University of Augsburg, Germany

The empirical study of discourse places researchers in a peculiar conundrum: while observational field methods may be challenged to provide statistically reliable insights into infrequent phenomena, targeted experimental methods promise better focus but must be considered doubtful in terms of ecological validity (cf. criticisms of non-corpus methods in Jucker, 2009).

This talk reports on a study of editing-based tasks (EBTs) as a methodological tool to facilitate the elicitation of discourse connectivity, in particular the realization of discourse relations (DRs) as explicit (overt), implicit, or a combination of both (see Speyer & Fetzer, 2014; Maier, Hofmockel & Fetzer, 2016). Native speakers of English were provided with a set sequence of discourse units without overt DR markers and expanded the linguistic material to arrive at a well-formed text of a given genre.

For the study reported here, a comparison is made between results from three different implementations of the EBT. Participants carried out the task in one of three ways:

(1) silently on their own;

(2) on their own as above, but following a think-aloud protocol; or

(3) in pairs, interacting to co-construct a solution by aligning their individual representations and integrating them into one.

These approaches differ not only with regard to ease of implementation, but also with regard to the data they provide, as each brings a different combination of familiarity and artificiality of task. Depending on the implementation, individual DRs are realized overtly to different degrees. This presentation reports pilot data from three instances of each implementation, and discusses methodological issues and the further potential of the findings reported.


References

Jucker, A H (2009). Speech act research between armchair, field and laboratory: The case of compliments. Journal of Pragmatics, 41(8): 1611-35.

Speyer, A; Fetzer, A (2014). The coding of discourse relations in English and German argumentative discourse. In: H Gruber & G Redeker (eds), The pragmatics of discourse coherence: Theories and applications, pp.87-119. Amsterdam: Benjamins.


LIST OF PARTICIPANTS

Fatemeh ABDOLLAHI, Pennsylvania State University, USA
Simona AMENTA, Ghent University, Belgium
Sarah BERNOLET, University of Antwerp, Belgium
Tiffany BOERSMA, University of Amsterdam, The Netherlands
Nicolas BOURGUIGNON, Ghent University, Belgium
Emeline BOURSIN, Université Libre de Bruxelles, Belgium
Wouter BROOS, Ghent University, Belgium
Marc BRYSBAERT, Ghent University, Belgium
Hakan CANGIR, Ankara University, Turkey
Giovanni CASSANI, University of Antwerp, Belgium
Fabienne CHETAIL, Université Libre de Bruxelles, Belgium
Simon DE DEYNE, University of Adelaide, Australia / University of Leuven, Belgium
Jonas DIEKMANN, University of Salzburg, Austria
Aster DIJKGRAAF, Ghent University, Belgium
Nicolas DIRIX, Ghent University, Belgium
Farah DJALAL, University of Leuven, Belgium
Manuel DUPONT, Université de Liège, Belgium
Wouter DUYCK, Ghent University, Belgium
Carole EL AKIKI, Université Libre de Bruxelles, Belgium
Amie FAIRS, Max Planck Institute for Psycholinguistics, The Netherlands
Alice FOUCART, Ghent University, Belgium
Matthias FRANKEN, Radboud University Nijmegen, The Netherlands
Kristina GEERAERT, University of Leuven, Belgium
Toivo GLATZ, University of Groningen, The Netherlands
Fritz GÜNTHER, University of Tübingen, Germany
Robert HARTSUIKER, Ghent University, Belgium
Frauke HELLWIG, Heinrich-Heine-Universität Düsseldorf, Germany
Tom HEYMAN, University of Leuven, Belgium
Toru HITOMI, Ghent University, Belgium
Sara IACOZZA, Max Planck Institute for Psycholinguistics, The Netherlands
Eva KOCH, Vrije Universiteit Brussel, Belgium
Jana KRUTWIG, Donders Institute Nijmegen, The Netherlands
Hernán LABBÉ GRÜNBERG, Universiteit van Amsterdam, The Netherlands
Jacqueline LEYBAERT, Université Libre de Bruxelles, Belgium
Andreas LIND, Ghent University, Belgium
Max LOUWERSE, Tilburg University, The Netherlands
Robert M. MAIER, Universität Augsburg, Germany
Marco MARELLI, University of Milano-Bicocca, Italy
Alexander McALLISTER, Pennsylvania State University, USA
Karen MEERSMANS, University of Leuven, Belgium
Antje MEYER, Max Planck Institute for Psycholinguistics, The Netherlands
Maria MONTEFINESE, University College London, UK
Merel MUYLLE, Ghent University, Belgium
Pierre-André PATOUT, Université Libre de Bruxelles, Belgium
Marko PERIC, University of Belgrade, Serbia
Dirk PIJPOPS, University of Leuven, Belgium
Lewis POLLOCK, University College London, UK
Tamara POPOVIC, University of Belgrade, Serbia
Marlou RASENBERG, Max Planck Institute for Psycholinguistics & Radboud University Nijmegen, The Netherlands
Laura ROSSEEL, University of Leuven, Belgium
Armand ROTARU, University College London, UK
Aida SALCIC, University of Groningen, The Netherlands
Dominiek SANDRA, University of Antwerp, Belgium
Karinne SAUVAL, Université Libre de Bruxelles, Belgium
Eleonore SMALLE, Université Catholique de Louvain, Belgium
Tessa SPÄTGENS, University of Amsterdam, The Netherlands
Gert STORMS, University of Leuven, Belgium
Arnaud SZMALEC, Université Catholique de Louvain, Belgium
Miki TANAKA, Konan W. University, Japan
Nathalie THOMAS, Université Libre de Bruxelles, Belgium
Brendan TOMOSCHUK, Ghent University, Belgium
Wim TOPS, University of Groningen, The Netherlands
Antony Scott TROTTER, Lancaster University, UK
Stephan TULKENS, University of Antwerp, Belgium
Bert VANDENBERGHE, University of Leuven, Belgium
Eowyn VAN DE PUTTE, Ghent University, Belgium
Heleen VANDER BEKEN, Ghent University, Belgium
Lise VAN DER HAEGEN, Ghent University, Belgium
Lore VANDEVOORDE, Ghent University, Belgium
Hendrik VANKRUNKELSVEN, University of Leuven, Belgium
Merel VAN WITTELOOSTUIJN, University of Amsterdam, The Netherlands
Camille VIDAL, Université Libre de Bruxelles, Belgium
Gabriella VIGLIOCCO, University College London, UK
Cesko VOETEN, Leiden University, The Netherlands
Wouter VOORSPOELS, University of Leuven, Belgium
Anne WHITE, University of Leuven, Belgium
Evy WOUMANS, Ghent University, Belgium
Chi ZHANG, Ghent University, Belgium
Ye ZHANG, University College London, UK