
The Psychological Record, 2013, 63, 409–426

Correspondence concerning this article should be addressed to Ian Stewart, School of Psychology, St. Anthony's Building, National University of Ireland, Galway; E-mail: [email protected]

    DOI:10.11133/j.tpr.2013.63.3.001

Auditory Stimulus Equivalence and Non-Arbitrary Relations

Ian Stewart and Niamh Lavelle
National University of Ireland, Galway

This study extended previous research on stimulus equivalence with all auditory stimuli by using a methodology more similar to conventional match-to-sample training and testing for three 3-member equivalence relations. In addition, it examined the effect of conflicting non-arbitrary relations on auditory equivalence. Three conditions (n = 11 participants each) were trained and tested for formation of equivalence using recorded auditory nonsense syllable stimuli. In the Same Voice (SV) condition, participants were exposed to stimuli pronounced by the same voice in training and testing. For the Different Voice Test (DVT) condition, in training, stimuli were all pronounced by the same voice, while in testing, they were pronounced by three different voices, with the sample always in a different voice from the equivalent comparison. This established potentially competing sources of stimulus control, since participants might respond either in accordance with non-arbitrary auditory relations or with equivalence. In the third condition (Different Voice; DV), participants were given testing identical to the DVT condition but were trained with stimuli pronounced by different voices, such that voice was unrelated to the programmed contingencies. As predicted, the DVT condition produced less equivalence responding and more non-arbitrary matching than the DV condition. These data are broadly consistent with previous findings with visual stimuli.

Key words: stimulus equivalence, auditory stimuli, non-arbitrary relations, interference, humans

Stimulus equivalence is perhaps the best-known example of the phenomenon of derived or emergent stimulus relations. In a typical stimulus equivalence preparation, match-to-sample (MTS) training in a series of appropriately related conditional discriminations results in a set of derived performances characterized by reflexivity, symmetry, and transitivity (Sidman & Tailby, 1982). To pass tests of reflexivity, the participant is required to conditionally relate each stimulus to itself (e.g., by selecting comparison A in the presence of sample A). Symmetry refers to the functional reversibility of conditional discriminations (e.g., if the selection of comparison B, given A as sample, is taught, then the selection of A as a comparison, given B as sample, is derived). Transitivity refers to the combination of taught relations (e.g., if the selection of B, given A, and of C, given B, are taught, then the selection of C, given A, is derived). A performance that combines all three patterns is held to indicate a relation of equivalence among the three stimuli.


Stimulus equivalence and derived relations more generally have been extensively studied by behavior analytic researchers. One of the main reasons for this is that they appear to be closely linked with human language. For instance, while verbally able humans readily show derived relations, studies with non-humans have failed to produce robust and unequivocal demonstrations of this phenomenon (e.g., Dugdale & Lowe, 2000; Sidman, Rauzin, Lazar, Cunningham, Tailby, & Carrigan, 1982; see also Schusterman & Kastak, 1998). In human participants, the ability to show derived relations has been shown to correlate with language ability (e.g., Barnes, McCullagh, & Keenan, 1990; Devany, Hayes, & Nelson, 1986; O'Hora, Pelaez, & Barnes-Holmes, 2005). In addition, empirical effects produced by language-based tasks (e.g., semantic priming) are paralleled by tasks involving derived relations (e.g., Barnes-Holmes et al., 2005; Bissett & Hayes, 1998).

    Given the close link between equivalence and human language, it might seem important to investigate this phenomenon using auditory stimuli. In fact, however, there has been much less research into stimulus equivalence with auditory stimuli than with visual stimuli. A number of studies have included both auditory and visual stimuli in the conditional discrimination training used to establish equivalence, including the very first empirical demonstration of the latter by Sidman (1971), amongst others (e.g., Almeida-Verdu et al., 2008; De Rose, De Souza, & Hanna, 1996; Gast, VanBiervliet, & Spradlin, 1979; Green, 1990; Groskreutz, Karsina, Miguel, & Groskreutz, 2010; Kelly, Green, & Sidman, 1998; Sidman & Cresson, 1973; Smeets & Barnes-Holmes, 2005; Ward & Yu, 2000). However, with respect to the investigation of equivalence using solely auditory stimuli, there is only one example thus far.

The study in question is Dube, Green, and Serna (1993). In this study, participants were trained in A-B and A-C conditional discriminations using a two-comparison MTS preparation and subsequently tested for the derivation of B-C and C-B relations. The computer-based protocol for assessing trained and derived relations was slightly non-standard, which the authors argued was necessary due to the use of auditory stimuli (recordings of spoken nonsense syllables, e.g., cug, zid). On each trial, prior to the presentation of each auditory stimulus, a round white spot appeared at the center of the screen; when the participant touched it, it disappeared and the stimulus was presented. The sample was presented first, then one of the two comparisons, followed by a second presentation of the sample and then the other comparison. The order of presentation of the two comparisons was counterbalanced across trials. In addition, as each comparison was presented, a grey rectangle appeared briefly in either the upper left or upper right of the screen, and after the second comparison had been presented, both rectangles appeared on screen together and remained there until the participant responded by touching one. The results were that six out of seven participants acquired the conditional discrimination baseline and four demonstrated the formation of two 3-member (A-B-C) equivalence relations. Four participants received additional training and subsequently demonstrated extension of the relations from three to four members.

As the first study to demonstrate stimulus equivalence with exclusively auditory stimuli, Dube et al. (1993) was an important step forward. One of the aims of the current study was to extend this work in accordance with the recommendations of Dube et al. by showing auditory stimulus equivalence with more than two comparisons. In order to achieve this aim, this study employed an alternative format for training and testing derived equivalence that was arguably closer to conventional MTS procedures in certain respects. Most notably, in the current procedure, each of the auditory stimuli on a given trial, including the sample and each of the three comparisons, was associated with an on-screen button that appeared in a visual array similar to the array of stimuli seen in a conventional all-visual MTS task. A second aim of the present study was to extend the auditory equivalence paradigm in another potentially useful direction by investigating the effect of a potentially competing source of stimulus control, based on non-arbitrary (physical) relations, on derived equivalence in the auditory domain.


Previous research demonstrated this effect (referred to as non-arbitrary interference) with equivalence relations using visual stimuli (Stewart, Barnes-Holmes, Roche, & Smeets, 2002). The concept underlying this work was based on the relational frame theory (RFT; Hayes, Barnes-Holmes, & Roche, 2001) distinction between non-arbitrary relational responding, in which stimuli are related based on their physical properties (e.g., identity matching), and arbitrarily applicable relational responding, in which stimuli are related in a particular way based on control by stimuli that lie outside the relata. An example of the latter is equivalence, which RFT argues is made more likely by certain aspects of the training and testing context to which participants have previously been exposed in their pre-experimental histories. Arbitrarily applicable relational responding, the nature and origins of which are described in detail elsewhere (see, e.g., Stewart & McElwee, 2009), is proposed by RFT advocates to be a form of responding that only humans appear to learn to an advanced degree. Furthermore, it is argued that this uniquely human repertoire is what enables humans alone to readily demonstrate not simply stimulus equivalence but in fact all forms of complex language, and indeed this is how RFT explains the link between these two phenomena.

    Relational frame theorists suggest that although arbitrarily applicable relational responding is more abstract and complex than non-arbitrary relational responding, learning the former is probably based to an important extent on learning the latter. For example, humans almost certainly learn to match physically similar things before they become able to respond in accordance with an abstract relation of sameness between physically dissimilar stimuli, such as happens in stimulus equivalence, and the former may well facilitate the development of the latter. In fact, RFT-based research procedures in which non-arbitrary relational training is used to establish contextual cues that later control abstract arbitrarily applicable relations between stimuli (see, e.g., Steele & Hayes, 1991) are based on this theoretical relationship. Given the latter, however, it is also possible that under certain circumstances, non-arbitrary relational stimulus control might compete with and make arbitrarily applicable relational stimulus control less likely. This was the rationale underlying Stewart et al. (2002).

Stewart et al. (2002) investigated whether conflicting non-arbitrary color relations could interfere with equivalence relations. This study involved three conditions (n = 8 participants per condition), in each of which participants were trained and tested for the formation of three 3-member equivalence relations using three-letter (CVC) nonsense syllables as stimuli. In the No Color condition, all stimuli were in black lettering (the background was white for all three conditions). The other two conditions (i.e., All Color and Color Test) were identical to the No Color condition in all respects except that some or all of the nonsense syllable stimuli appeared in a variety of colors (red, blue, and green). Furthermore, during the testing phase for both these conditions, the sample stimulus was always a different color from the experimenter-designated correct (i.e., equivalent) comparison and always the same color as one of the incorrect (non-equivalent) comparisons; hence, for both these conditions, testing trials represented a conflict between non-arbitrary color relations and equivalence relations as potential sources of stimulus control. However, in the All Color condition, the stimuli during the training phase were also in color. The difference was that during training there was no consistent relationship (either congruency or lack of it) between the trained arbitrary relations and the color match relations. As a result, it was predicted that participants would learn to ignore color as an aspect of the format and thus would show roughly comparable levels of equivalence to the No Color group. Participants in the Color Test condition, in contrast, were trained with black-lettered nonsense syllables, and thus it was predicted that there would be maximal conflict between non-arbitrary and arbitrary relational stimulus control for this group.

The results of the study were consistent with these predictions in that, while the No Color and All Color conditions showed roughly comparable mean levels of equivalence responding and equal numbers of participants who passed the equivalence test (i.e., three per condition), the mean levels of equivalence in the Color Test condition were significantly lower, and there were no equivalence passes. At the same time, mean levels of color matching were significantly higher for the Color Test condition than for the other two conditions, indicating that the lower levels of equivalence in the Color Test condition arose because its participants tended to color match more than those in the other conditions. These results supported the RFT predictions of non-arbitrary relational interference with equivalence responding.

Apart from testing for stimulus equivalence with auditory stimuli with an alternative protocol and using three comparisons in the training/testing format, the current study also examined whether the non-arbitrary interference with equivalence effect shown by Stewart et al. (2002) in the context of visual stimuli might also occur in the context of auditory stimuli. As in the Stewart et al. study, three conditions were trained and tested for the formation of three 3-member equivalence relations. In this study, however, the stimuli were all auditory; specifically, they were recordings of spoken nonsense syllables of the kind used by Dube et al. (1993). In one condition (Same Voice), all nonsense syllables in both training and testing were spoken by the same voice, and thus no conflict was expected. The other two conditions, however, were exposed to testing in which the nonsense syllables were spoken by varying types of voice and in which the correct (i.e., equivalent) comparison stimulus was always produced by a different voice from that producing the sample stimulus, while one of the incorrect (i.e., non-equivalent) comparison stimuli was produced by the same voice as the sample. One of these conditions (Different Voice) also received training with varying voices, and thus participants in this condition were trained to ignore the non-arbitrary relation of voice; hence, during testing, the voices of the stimuli were predicted to have little or no impact on equivalence performance. In the other condition (Different Voice Test), participants were trained with stimuli all spoken in the same voice and thus were not exposed to reinforcement for ignoring voice during training. It was therefore predicted that interference with equivalence responding would be greatest for participants in this condition, and that voice matching would also be highest for these participants. This study aimed to examine interference with auditory stimulus equivalence analogous to the way in which Stewart et al. (2002) examined interference with visual stimulus equivalence, and thus the foregoing predictions were based on the patterns observed in that study. However, given the difference in the modality of the stimuli employed, it was expected that there might also be some differences between the two studies.

Method

Participants

    Participants were 33 undergraduate students attending the National University of Ireland, Galway, with a mean age of 21.3 years (SD = 5.6) who volunteered to take part in the study in exchange for course credit. Informed consent was appropriately obtained from each participant (see the Procedure section). Participants were randomly assigned to one of three experimental conditions (i.e., 11 per condition).

Apparatus

Stimulus presentation and recording of responses were controlled by custom software programmed in Visual Basic and presented on an Advent K200 laptop. Participants were provided with headphones to hear the auditory experimental stimuli, which included the following nine spoken nonsense syllables: ZID (A1), MAU (B1), JOM (C1), VEK (A2), WUG (B2), BIF (C2), YIM (A3), DAX (B3), and PUK (C3). The alphanumeric labels accompanying each nonsense syllable are used here for ease of communication. Participants were unaware of these labels.

The Visual Basic program used for the experiment drew on pre-recorded sound files to produce the auditory stimuli employed during the protocol. The sound files were pre-recordings of each of the nine nonsense syllables listed above spoken in each of three different voices: an adult male voice, an adult female voice, and a female child's voice. Whenever a sample or comparison button was clicked (within pre-designated parameters; see the Procedure section), a particular pre-recording was played by the program. In the Same Voice condition, all nonsense syllable stimuli were presented in the child's voice. In the Different Voice condition, nonsense syllables were presented in all three pre-recorded voices during both training and testing. In the Different Voice Test condition, all nonsense syllable stimuli were presented in the child's voice during training and in all three pre-recorded voices during testing.
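To make the stimulus organization concrete, the following sketch shows one way the 27 pre-recorded sound files (9 syllables × 3 voices) could be indexed. This is illustrative Python rather than the authors' Visual Basic, and the file-naming scheme is an assumption.

```python
# Hypothetical index of the 27 pre-recorded sound files (9 syllables x 3 voices).
# Syllable labels and voice types follow the article; file names are assumed.
SYLLABLES = {
    "A1": "ZID", "B1": "MAU", "C1": "JOM",
    "A2": "VEK", "B2": "WUG", "C2": "BIF",
    "A3": "YIM", "B3": "DAX", "C3": "PUK",
}
VOICES = ("man", "woman", "child")

def sound_file(label: str, voice: str) -> str:
    """Return the (assumed) path of the recording for one syllable/voice pair."""
    assert label in SYLLABLES and voice in VOICES
    return f"sounds/{SYLLABLES[label].lower()}_{voice}.wav"

# e.g., sound_file("A1", "child") -> "sounds/zid_child.wav"
```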

Procedure

General. At the beginning of the procedure, participants were seated at a desk in a small experimental room facing the laptop computer and were provided with a typed information sheet and a typed consent form to sign. After they signed the consent form, the experimental procedure proper could begin.

    Each participant was exposed to two separate sessions of training and testing (see Table 1). At the start of the procedure, the following instructions appeared on the computer screen:

    In the following trial, you will see a button at the top of the screen. When you click this button using the mouse, you will hear a nonsense word and you will see three further buttons appear at the bottom of the screen, and hear three further nonsense words. You need to choose one of the three nonsense words by clicking on one of the three buttons at the bottom. In the first part of the experiment, the computer will always tell you whether your choice was correct or wrong. In the latter part of the experiment, however, the computer will no longer provide you with feedback. Click START to begin.

Figure 1 shows the auditory successive conditional discrimination protocol for the MTS trials used in both training and testing. On each trial, a red button first appeared in the top center of the screen. When the participant clicked on it, a black box appeared around it for 0.5 s and an auditory stimulus, namely, a recording of a spoken nonsense syllable, was presented. After this, three additional buttons appeared in order from left to right along the lower half of the screen. The first button appeared 1.5 s after the sample was presented, a black box surrounded it for 0.5 s, and an auditory stimulus (spoken nonsense syllable) was presented. A second button appeared 1 s later, a black box surrounded this button for 0.5 s, and an auditory stimulus (spoken nonsense syllable) was presented. Finally, after a further 1 s, a third button appeared, a black box surrounded it for 0.5 s, and an auditory stimulus (spoken nonsense syllable) was presented.

    Figure 1. The auditory successive conditional discrimination protocol.


Until all three comparison buttons had appeared on the screen and the auditory stimuli had been presented, a click on any of the on-screen buttons produced no further effect. From that point on, a click on one of the three comparisons produced further effects. First, a black box appeared around that particular button again, but this time no auditory stimulus was presented. In addition, if this was a training trial, then appropriate feedback was provided. If a correct response was made, the stimulus display cleared and the word "correct" appeared in the center of the screen for 0.5 s, accompanied by an auditory tone (i.e., a beep). If an incorrect response was made, the display cleared and the word "wrong" appeared in the center of the screen for 0.5 s, accompanied by a different auditory tone (i.e., a buzzing sound). Then, after an intertrial interval (ITI) of 1 s, the next trial began. During testing, no feedback was provided and the ITI began immediately.
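As a minimal sketch of this trial sequence, the timing just described might be scheduled as follows. This is illustrative Python, not the original Visual Basic; the trial dictionary layout and the `io` helper object (with `await_click`, `flash`, `play`, and `feedback` methods) are assumptions.

```python
import time

def run_trial(trial, io, training):
    """One successive auditory MTS trial with the timings described above.
    `trial` is a dict with 'sample' (a recording), 'comparisons' (three
    recordings, left to right), and 'correct_pos' (0, 1, or 2); `io` is a
    hypothetical helper object bundling the screen and audio operations."""
    io.await_click("sample")            # participant clicks the red top button
    io.flash("sample", 0.5)             # black box surrounds it for 0.5 s
    io.play(trial["sample"])            # sample syllable is spoken
    time.sleep(1.5)                     # first comparison 1.5 s after the sample
    for i, comp in enumerate(trial["comparisons"]):
        if i:
            time.sleep(1.0)             # second and third follow at 1-s intervals
        io.flash(i, 0.5)                # button appears with a 0.5-s black box
        io.play(comp)
    pos = io.await_click("comparison")  # clicks count only once all three played
    io.flash(pos, 0.5)                  # box reappears; no audio this time
    correct = pos == trial["correct_pos"]
    if training:                        # "correct"/beep or "wrong"/buzz for 0.5 s
        io.feedback("correct" if correct else "wrong", 0.5)
    time.sleep(1.0)                     # 1-s intertrial interval
    return correct
```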

Training. During the training phase, the format for the experimental stimuli was as follows:

Same Voice group: Same voice for all stimuli
Different Voice group: Different voices across stimuli
Different Voice Test group: Same voice for all stimuli

    During this phase of the experiment, participants were trained on three A-B and three B-C MTS tasks. For the three A-B tasks, participants were presented with A1, A2, or A3 as the sample stimulus and B1, B2, and B3 as auditory comparisons. A correct response was B1 given A1, B2 given A2, and B3 given A3. For the three B-C tasks, participants were presented with B1, B2, or B3 as the sample and C1, C2, and C3 as comparisons. A correct response was C1 given B1, C2 given B2, and C3 given B3.

Tasks were presented in a repeating cycle of 36 trials, the order of which was the same for every participant (see Appendix A). First, the three A-B tasks were presented six times each in a quasi-randomly ordered block of 18 trials; the three B-C tasks were then presented six times each in another quasi-randomly ordered block of 18 trials. Across both of these blocks, each of the following elements was counterbalanced: (a) the order of presentation of the three A-B, and then the three B-C, MTS tasks; (b) the spatial positioning of the buttons whose appearance accompanied particular comparison auditory stimuli (left, middle, or right); and (c) the spatial positioning of the button that accompanied the experimenter-designated correct match (left, middle, or right). In the case of the Different Voice condition, one extra element of counterbalancing was included: the spatial positioning of the buttons whose appearance accompanied particular voice types, so that no one particular voice would be associated with any one particular position. In addition, across one third of the trials, as would be expected by chance, the correct match was presented in the same voice as the sample stimulus, while on the remaining trials the correct comparison was presented in a different voice from the sample.
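One way to make these counterbalancing constraints concrete is to express them as checks over a candidate 36-trial training block. The sketch below assumes the same simple dictionary layout for each trial as above; the layout and the checks are illustrative rather than the authors' code.

```python
from collections import Counter

def check_training_block(block):
    """Verify the counterbalancing constraints described in the text for a
    36-trial training block. Each trial is assumed to be a dict with keys
    'sample' (e.g., 'A1'), 'sample_voice', 'correct_pos' (0, 1, or 2), and
    'voices' (voices of the three comparisons, left to right)."""
    assert len(block) == 36
    # Each of the six tasks (three A-B, three B-C) appears six times.
    assert all(n == 6 for n in Counter(t["sample"] for t in block).values())
    # The correct comparison occupies each position (left/middle/right) equally.
    assert all(n == 12 for n in Counter(t["correct_pos"] for t in block).values())
    # Different Voice condition only: no voice is tied to a position
    # (each of the 3 x 3 position/voice pairs occurs 108 / 9 = 12 times) ...
    pos_voice = Counter((i, v) for t in block for i, v in enumerate(t["voices"]))
    assert all(n == 12 for n in pos_voice.values())
    # ... and the correct match shares the sample's voice on one third of trials.
    same = sum(t["voices"][t["correct_pos"]] == t["sample_voice"] for t in block)
    assert same == 12
```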

    When participants had responded correctly on 36 consecutive MTS training trials (which could be achieved at any point during a training block) the testing phase for that session began.
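A minimal sketch of this mastery rule (reusing the hypothetical `run_trial` from above; again, not the original Visual Basic) might track the streak as follows.

```python
def training_phase(block, io, run_trial):
    """Cycle the 36-trial training block until the participant has responded
    correctly on 36 consecutive trials; the criterion can be met at any point
    within a block. Returns the total number of training trials required,
    the quantity reported in Table 1."""
    streak = total = 0
    while streak < 36:
        for trial in block:
            total += 1
            streak = streak + 1 if run_trial(trial, io, training=True) else 0
            if streak == 36:
                break
    return total
```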

Testing. Participants first read the following instructions:

    During this phase the computer will no longer give you feedback.

    The testing phase then began. The format for the experimental stimuli during the testing phase was as follows.

Same Voice group: Same voice for all stimuli
Different Voice group: Different voices across stimuli
Different Voice Test group: Different voices across stimuli

In this phase of the experiment, participants were tested on the three C-A MTS tasks. In these tasks, participants were presented with C1, C2, or C3 as the sample stimulus and had to choose from among the three comparison stimuli A1, A2, and A3. Responding in accordance with an equivalence relation was defined as choosing A1 given C1, A2 given C2, and A3 given C3. A score of 32/36 (89%, or approximately 9/10) was used as the criterion for passing equivalence.

Each of the three C-A MTS tasks was presented 12 times in one quasi-randomly ordered block of 36 trials (see Appendix B). The predetermined quasi-random order of presentation was the same for every participant. Each of the following elements was counterbalanced: (a) the order of presentation of the three MTS tasks; (b) the spatial positioning of the buttons whose appearance accompanied particular comparison auditory stimuli (left, middle, or right); and (c) the spatial positioning of the button that accompanied the experimenter-designated correct match (left, middle, or right). In the case of the Different Voice and Different Voice Test groups, which presented different voices during testing, one extra element of counterbalancing was included: the spatial positioning of the buttons whose appearance accompanied particular voice types, so that no one particular voice would be associated with any one particular position. In addition, in both conditions, the correct comparison stimulus choice, in terms of the equivalence relation, was never in the same voice as the sample stimulus.
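Given these constraints, each test-trial response can be scored unambiguously as equivalence-consistent or voice-matching. A small sketch, assuming the same trial layout as in the earlier examples:

```python
def classify_response(trial, choice):
    """Score one test-trial response. 'correct_pos' marks the equivalent
    comparison; in the Different Voice and Different Voice Test tests the
    equivalent comparison is never in the sample's voice, so the two
    categories below cannot both apply to the same choice."""
    if choice == trial["correct_pos"]:
        return "equivalence"
    if trial["voices"][choice] == trial["sample_voice"]:
        return "matching"  # non-arbitrary (same-voice) responding
    return "other"

def passed_equivalence(scores):
    """Apply the 32/36 (89%) pass criterion to one 36-trial test exposure."""
    return scores.count("equivalence") >= 32
```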

    At the end of each experimental session, the following message appeared on screen:

Thank you! That's the end of that part of the experiment. Please call the experimenter.

If this was the participant's first experimental session, he or she was exposed to a second session, either immediately or within 48 hr of the first exposure. During the second session, the participant was exposed to exactly the same training and testing procedures. After the second session, the participant was thanked and debriefed.

Results

Training

Table 1 provides an overview of individual and average results by both condition and exposure for both training (number of trials required to meet criterion) and testing (numbers of equivalence and matching responses, respectively). The mean number of training trials required during the first exposure was 108.73 (SD = 46.95) for the Same Voice group, 155.45 (SD = 67.50) for the Different Voice group, and 138.73 (SD = 69.90) for the Different Voice Test group. The corresponding figures for the second exposure were 51.10 (SD = 23.31), 53.18 (SD = 14.93), and 52.10 (SD = 27.97), respectively (see Figure 2). A 3 × 2 repeated measures analysis of variance (ANOVA) revealed a highly significant main effect for exposure, F(1, 30) = 47.74, p < 0.0001, partial η² = 0.61. However, there were no significant differences between the conditions, and there was no interaction effect. A Pearson's product–moment correlation test was conducted to check for a possible correlation between the number of training trials required in the first exposure and the number of equivalence responses in either the first or the second exposure to testing. There were no significant correlations in either case (Exposure 1: r = .079, p = .663; Exposure 2: r = .049, p = .786). Overall, then, these results indicate the absence of significant differences between the conditions with respect to number of training trials required to reach criterion, and that the quantity of training trials received did not systematically affect equivalence performance.
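For readers wanting to reproduce this style of analysis, a mixed-design ANOVA of the kind reported here can be run as in the sketch below, which uses the pingouin library; the long-format data file and its column names are assumptions, not materials from the study.

```python
import pandas as pd
import pingouin as pg

# Long-format data assumed: one row per participant x exposure, with the
# number of training trials required to reach criterion in that exposure.
df = pd.read_csv("training_trials.csv")  # columns: subject, condition, exposure, trials

# 3 (condition, between-subjects) x 2 (exposure, within-subjects) mixed ANOVA,
# analogous to the training analysis reported in the text.
aov = pg.mixed_anova(data=df, dv="trials", within="exposure",
                     subject="subject", between="condition")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared
```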

Testing

Equivalence responding. The test data were first analyzed in terms of the number of responses emitted by participants in each condition that were in accordance with the designated equivalence relations (defined hereafter as correct responses; see Table 1). In the Same Voice condition, four individuals passed equivalence by showing 32/36 (89%) or more correct responses in Exposure 1, and seven did so in Exposure 2.


Table 1
Numbers of Training Trials and Numbers and Percentages of Correct (Equivalent) Testing Trials for Individual Participants and Means and Standard Deviations for Each of the Three Conditions Across Both Exposures

Training, Exposure 1
SV:  65, 111, 80, 224, 101, 91, 148, 141, 72, 92, 71; M = 108.7 (SD = 46.9)
DV:  75, 254, 170, 117, 255, 150, 240, 116, 95, 156, 82; M = 155.5 (SD = 67.5)
DVT: 76, 206, 253, 109, 204, 224, 60, 85, 138, 95, 76; M = 138.7 (SD = 69.9)

Training, Exposure 2
SV:  97, 63, 36, 36, 36, 36, 90, 36, 60, 36, 36; M = 51.1 (SD = 23.3)
DV:  57, 67, 52, 36, 36, 56, 68, 73, 68, 36, 36; M = 53.2 (SD = 14.9)
DVT: 114, 36, 36, 36, 89, 36, 36, 40, 36, 78, 36; M = 52.1 (SD = 28.0)

Testing (Equivalence), Exposure 1
SV:  26 (72%), 29 (81%), 18 (50%), 7 (19%), 12 (33%), 34 (94%), 34 (94%), 11 (31%), 32 (89%), 33 (92%), 29 (81%); M = 24.1 (SD = 10.2)
DV:  26 (72%), 12 (33%), 13 (36%), 4 (11%), 32 (89%), 13 (36%), 10 (28%), 34 (94%), 6 (17%), 14 (39%), 28 (78%); M = 17.5 (SD = 10.6)
DVT: 13 (36%), 7 (19%), 6 (17%), 5 (14%), 2 (6%), 11 (31%), 9 (25%), 11 (31%), 15 (42%), 4 (11%), 10 (28%); M = 8.5 (SD = 4.0)

Testing (Equivalence), Exposure 2
SV:  29 (81%), 35 (97%), 36 (100%), 1 (3%), 12 (33%), 35 (97%), 34 (94%), 24 (67%), 33 (92%), 36 (100%), 33 (92%); M = 28.0 (SD = 11.5)
DV:  25 (69%), 23 (64%), 12 (33%), 1 (3%), 34 (94%), 13 (36%), 10 (28%), 36 (100%), 6 (17%), 14 (39%), 32 (89%); M = 18.7 (SD = 11.9)
DVT: 13 (36%), 0 (0%), 13 (36%), 0 (0%), 2 (6%), 10 (28%), 10 (28%), 12 (33%), 18 (50%), 4 (11%), 6 (17%); M = 8.0 (SD = 6.0)

Testing (Matching), Exposure 1
DV:  6 (17%), 17 (47%), 11 (31%), 19 (53%), 1 (3%), 9 (25%), 9 (25%), 2 (6%), 18 (50%), 9 (25%), 3 (8%); M = 9.5 (SD = 6.2)
DVT: 15 (42%), 14 (39%), 22 (61%), 14 (39%), 3 (8%), 8 (22%), 22 (61%), 13 (36%), 19 (53%), 29 (81%), 23 (64%); M = 16.5 (SD = 7.4)

Testing (Matching), Exposure 2
DV:  5 (14%), 5 (14%), 12 (33%), 19 (53%), 2 (6%), 11 (31%), 9 (25%), 0 (0%), 14 (39%), 10 (28%), 0 (0%); M = 7.8 (SD = 6.2)
DVT: 12 (33%), 16 (44%), 22 (61%), 18 (50%), 1 (3%), 14 (39%), 20 (56%), 11 (31%), 16 (44%), 30 (83%), 23 (64%); M = 16.6 (SD = 7.5)

Note. Values are listed for Participants P1–P11 in order. SV = Same Voice condition; DV = Different Voice condition; DVT = Different Voice Test condition.


In the Different Voice condition, the corresponding figures were two and three, whereas in the Different Voice Test condition, no participant showed this level of equivalence responding. In fact, in the Different Voice Test condition, the highest score achieved in either exposure was 50%, and even in the second exposure, two people recorded a score of 0 and a third recorded a score of just 2.

The mean numbers of correct (equivalence) responses in the first exposure were 24.10 (SD = 10.2) for the Same Voice group, 17.5 (SD = 10.6) for the Different Voice group, and 8.5 (SD = 4.0) for the Different Voice Test group. The corresponding figures for the second exposure were 28.0 (SD = 11.5) for the Same Voice group, 18.7 (SD = 11.9) for the Different Voice group, and 8.0 (SD = 5.98) for the Different Voice Test group (see Figure 3). A 3 × 2 mixed repeated measures ANOVA, with the three voice conditions as Factor 1 and the two test exposures as Factor 2, showed a highly significant main effect for condition, F(2, 30) = 10.49, p < 0.0001, partial η² = 0.41, but not for exposures. No significant interaction effect was identified. Post-hoc analyses (Fisher PLSD) revealed significant differences in mean equivalence responses between the Different Voice Test condition and both the Same Voice (p < 0.0001) and Different Voice (p = .017) conditions. The difference in mean equivalence responses between the Same Voice and Different Voice conditions was nonsignificant, but only marginally so (p = .051).

Matching. Another area for analysis was the extent of non-arbitrary relational (matching) type responding by participants in the Different Voice and Different Voice Test conditions (see Table 1 and Figure 4). For the first exposure, mean levels of matching-type responses were 9.5 (SD = 6.2) for the Different Voice condition and 16.6 (SD = 7.4) for the Different Voice Test condition. The corresponding means for the second exposure were 7.8 (SD = 6.2) for the Different Voice condition and 16.6 (SD = 7.5) for the Different Voice Test condition. Levels of matching responses for the Different Voice and Different Voice Test conditions were analyzed using a 2 × 2 mixed repeated measures ANOVA, with voice groups as Factor A and exposures as Factor B. This analysis showed a significant effect for condition, F(1, 20) = 7.792, p = .0113, partial η² = 0.25, with no significant effect for exposures (Factor B) and no interaction effect.

Discussion

One key aim of the current study was to extend previous work by Dube et al. (1993) on stimulus equivalence with auditory stimuli. As discussed earlier, research into stimulus equivalence has typically used visual stimuli exclusively, with relatively less work that has also included auditory stimuli, and only one study thus far (i.e., Dube et al., 1993) that has used solely auditory stimuli.

    Figure 2. Means and standard deviations for number of training trials required for Exposures 1 and 2 across the three conditions, Different Voice (dv), Different Voice Test (dvt), and Same Voice (sv).



However, derived relations such as equivalence are seen by many behavior analytic researchers as providing a model of language, and everyday language often involves exclusively or nearly exclusively auditory stimuli. Hence, if the link between derived relations and language is to be comprehensively explored, it would appear important that more attention be paid to the generation of derived relations based on auditory stimuli alone. Dube et al. (1993) provided an initial demonstration of stimulus equivalence using auditory stimuli, and the current study replicates that effect by providing a similar demonstration.

Figure 3. Means and standard deviations for number of responses in accordance with equivalence for Exposures 1 and 2 across the Different Voice (dv), Different Voice Test (dvt), and Same Voice (sv) conditions. The dashed line indicates the maximum score possible on the equivalence test.

Figure 4. Means and standard deviations for number of responses in accordance with the non-arbitrary relation of voice per individual exposure for the Different Voice (dv) and Different Voice Test (dvt) conditions.



The results of this study were that a total of 11 participants demonstrated equivalence responding with all auditory stimuli. However, whereas Dube et al. used a two-comparison format to produce two 3-member equivalence relations, the current study used a three-comparison format to produce three 3-member equivalence relations.

With regard to the number of participants showing equivalence, in Dube et al. (1993), three of six participants passed the first equivalence test (including one person whose equivalence relations were argued to be based on reject relations), and five of six passed the second test. At first blush, the results of the procedure used in the current study do not appear as strong. In the SV condition, in which all stimuli were presented in the same voice and which thus allows the most direct comparison with Dube et al., only 4 out of 11 participants passed the first equivalence test, while 7 out of 11 passed the second. However, a number of factors need to be considered. First, the current study tested for three 3-member relations, whereas Dube et al. tested for two 3-member relations. Previous research suggests that equivalence formation with visual stimuli is a function of class or relation size (Arntzen & Holth, 2000), and this may be the same, or perhaps even more pronounced, in the case of auditory stimuli. Second, because the present study sought to explore non-arbitrary interference and to compare findings with Stewart et al. (2002), it provided an equivalence test only, whereas Dube et al. tested for a range of derived relations, including symmetry and transitivity. Further research into the effect of these variables on equivalence formation with auditory stimuli is needed. For the present, however, it is certainly possible that their influence affected the number of participants demonstrating equivalence in this study.

A major aim of this study was to test for non-arbitrary relational interference with equivalence relational responding. A previous study (Stewart et al., 2002) reported non-arbitrary interference with equivalence responding in the context of visual stimuli. The present study attempted to extend this work by examining this effect in the context of auditory stimuli. In one condition (Same Voice), auditory stimuli were presented in the same pre-recorded voice throughout equivalence training and testing. In a second condition (Different Voice Test), stimuli in training were presented in the same pre-recorded voice, while those in testing were presented in three different voices. On every testing trial, there was a potential conflict of stimulus control: the correct (equivalent) comparison stimulus was in a different voice from the sample, while one of the two incorrect comparisons was in the same voice as the sample and thus presented an opportunity for non-arbitrary matching. In a third condition (Different Voice), participants received the same testing as in the second condition but were given a history of training in which different voices were employed but voice was irrelevant to the contingencies for correct responding.

Findings were similar to those of Stewart et al. (2002) in a number of ways. First, there was a higher incidence of individuals passing the criterion for equivalence, and also significantly higher average levels of equivalence responding, in the Same Voice and Different Voice conditions relative to the Different Voice Test condition. This is analogous to the Stewart et al. finding of higher levels of equivalence in the No Color and All Color conditions compared with the Color Test condition. Furthermore, in the current study, there was a significantly higher level of non-arbitrary relational responding, or matching, in the Different Voice Test condition than in the Different Voice condition. These results are also analogous to those of Stewart et al. (the relationship between the conditions is similar to that between the Color Test and All Color conditions) and are consistent with the suggestion that the lower levels of equivalence in the Different Voice Test condition were the result of interference by non-arbitrary relations.

Despite broad similarities, there were also some differences in findings between this study and that of Stewart et al. (2002). First, the mean number of training trials required to meet criterion for equivalence testing in Exposure 1 was considerably lower in the current study than in Stewart et al.'s for all three conditions. More specifically, in the current study, the numbers of trials required for the Same Voice, Different Voice, and Different Voice Test conditions were, respectively, 108.7, 155.5, and 138.7, while in the Stewart et al. study, the numbers of trials required for the No Color, All Color, and Color Test conditions were, respectively, 347.4, 499.1, and 443.8. This suggests that the use of auditory stimuli resulted in more efficient, albeit slower (i.e., more time was required per trial), learning of the conditional discriminations than the use of visual stimuli. Both the greater efficiency in terms of number of trials required and the slower pace of the training might be functions of the successive nature of the conditional discrimination training involved in the case of auditory stimuli. Future research might explore this issue further by, for example, examining efficiency and time needed when training analogous successive visual conditional discriminations.

A second difference between the findings of this study and those of Stewart et al. (2002) has to do with levels of equivalence. Table 2 shows the proportions (and percentages) of individuals passing equivalence in each equivalence test exposure in each of the three conditions in both Stewart et al. and the current study. In Stewart et al., the No Color group had just 3/8 (37.5%) individuals passing equivalence across both exposures, whereas in the current study, the analogous Same Voice condition had a total of 7/11 (64%) individuals passing equivalence across both exposures. This greater facilitation of the derivation of equivalence relations might, again, be a function of the use of auditory stimuli or, perhaps more specifically, of successive discrimination procedures. However, this facilitation did not seem to apply in the case of the other two conditions, for which levels of equivalence were similar to those of their analogous conditions in Stewart et al. For the condition in which non-arbitrary interference was expected to be maximal (i.e., the Different Voice Test condition), and in which there were no instances of passing equivalence, this is perhaps not that surprising. However, the Different Voice condition yielded less than half the equivalence passes of the Same Voice condition (i.e., 3/11 versus 7/11), even though its analogue in Stewart et al. (i.e., All Color) did as well as the analogue of the Same Voice condition (i.e., 3/8 versus 3/8); thus, there seems to be something additional to be explained. In this case, it may be that the presence of different voices was proportionately more disruptive in the auditory context than the presence of different colors was in the visual context.

Table 2
Proportions (and Percentages) of Equivalence Passes in Equivalence Test Exposures 1 and 2 for the Three Conditions in Stewart et al. (2002) and Their Counterparts in the Current Study

           Stewart et al. (2002) conditions      Current study conditions
Exposure   No Color    All Color   Color Test    Same Voice   Different Voice   Different Voice Test
1          2/8 (25%)   2/8 (25%)   0/8 (0%)      4/11 (36%)   2/11 (18%)        0/11 (0%)
2          3/8 (38%)   3/8 (38%)   0/8 (0%)      7/11 (64%)   3/11 (27%)        0/11 (0%)

Note. Criterion for equivalence passes was 32 out of 36 (89%) correct.

The reason that the presence of different voices might be more disruptive in the auditory than in the visual case is unknown. Again, some aspect of the use of auditory stimuli themselves or the use of successive conditional discrimination procedures, or both, may be responsible. At the present time, given the relative lack of empirical work in the domain of auditory conditional discriminations, especially as a basis for equivalence, attempted explanations of these effects can only be speculative. More work will be needed to investigate this area. Such research might systematically compare auditory conditional discrimination procedures with both simultaneous and successive visual conditional discrimination procedures as methods of producing equivalence and/or other derived relations. Furthermore, one dimension of this research might involve investigation of possible differences between the auditory and visual domains with respect to the influence of non-arbitrary dimensions. Stimulus control topography (McIlvane & Dube, 2003; Ray & Sidman, 1970) analyses of the influence of topographical aspects of visual and auditory conditional discrimination formats might be particularly useful in this regard.


In regard to the outcomes of the present study, if the mere presence of different voices is more disruptive in the auditory than in the visual domain, then the disruption of equivalence in the current study might have been relatively more influenced by the mere presence of different formats (e.g., voice types) than was the case in Stewart et al. (2002). In other words, perhaps the disruption of equivalence in the current study was proportionately more based on the mere presence of different formats throughout training and testing and proportionately less based on non-arbitrary relations as an alternative to equivalence responding. The evidence concerning patterns of non-arbitrary matching in testing bears this out. Though there certainly is evidence in the current study of interference with equivalence by non-arbitrary matching, since there was a significantly higher average level of matching in the Different Voice Test condition than in the Different Voice condition, there seemed to be relatively less such interference in this study than in Stewart et al.'s. This can be seen by comparing, across the two studies, the average level of matching across both exposures in the two conditions in which matching would have been predicted to be maximal, namely, the Different Voice Test and Color Test conditions. Levels of matching for these two conditions were 16.6 out of 36 (46%) and 25.2 out of 36 (70%), respectively, suggesting that much more matching was occurring in the Color Test than in the Different Voice Test condition and, thus, that the lower level of equivalence in the current study was proportionately more a result of disruption caused by the mere presence of different formats than of non-arbitrary relational interference. As previously suggested, further research analyzing stimulus control aspects of auditory conditional discrimination procedures, especially as the basis for equivalence or other derived relations, and in particular as influenced by non-arbitrary relations, is warranted. Such research may allow a better understanding of these effects.

The current findings also suggest a number of other directions for future work. For example, the current study adopted only one of many different training protocols that have been reported in the literature on derived stimulus relations. It has been shown that testing for symmetry before testing for transitivity or equivalence may facilitate the emergence of derived relations (see, e.g., Adams, Fields, & Verhave, 1993). Accordingly, Stewart et al. (2002) suggested that testing for symmetry before testing for equivalence relations in a context such as the one they employed might affect testing outcomes substantially. Further exploration of this and related variables (e.g., prior transitivity testing) might also be conducted with respect to the current protocol. In addition, as interference has now been demonstrated in both the visual and auditory domains, another possible extension of the current research might be to examine whether and to what extent interference occurs cross-modally. For example, to what extent might potentially conflicting non-arbitrary relations in the visual realm affect equivalence with auditory stimuli, and to what extent might potentially conflicting non-arbitrary relations in the auditory realm affect equivalence with visual stimuli?

In conclusion, the current research has extended previous work by Dube et al. (1993) by demonstrating derived equivalence with auditory stimuli. This study employed a protocol more similar to conventional MTS procedures than that of Dube et al. and used it to facilitate the derivation of three 3-member equivalence relations. It also investigated non-arbitrary interference with auditory equivalence using a design similar to that used previously by Stewart et al. (2002) in the visual domain and provided evidence that non-arbitrary relations can interfere with equivalence relations in the auditory domain, just as Stewart et al. had shown in the visual domain. These findings cohere with the RFT account of the historical importance of non-arbitrary relations and extend previous work on non-arbitrary interference into the auditory domain. Given the link between derived relations and language, and the fact that auditory stimuli play such an important role in the latter, the investigation of derived relations between auditory stimuli is a potentially important area of research to pursue, and it is hoped that the methods and findings described here will guide further work in this domain.


References

ADAMS, B. J., FIELDS, L., & VERHAVE, T. (1993). Effects of test order on intersubject variability during equivalence class formation. The Psychological Record, 43, 133–152.

ALMEIDA-VERDU, A. C., HUZIWARA, E. M., DE SOUZA, D. G., DE ROSE, J. C., BEVILACQUA, M. C., LOPES, J., JR., & MCILVANE, W. J. (2008). Relational learning in children with deafness and cochlear implants. Journal of the Experimental Analysis of Behavior, 89(3), 407–424. doi:10.1901/jeab.2008.89-407

ARNTZEN, E., & HOLTH, P. (2000). Probability of stimulus equivalence as a function of class size versus number of classes. The Psychological Record, 50, 79–104.

BARNES, D., MCCULLAGH, P. D., & KEENAN, M. (1990). Equivalence class formation in non-hearing-impaired children and hearing-impaired children. The Analysis of Verbal Behavior, 8, 19–30.

BARNES-HOLMES, D., STAUNTON, C., WHELAN, R., BARNES-HOLMES, Y., COMMINS, S., WALSH, D., & DYMOND, S. (2005). Derived stimulus relations, semantic priming, and event-related potentials: Testing a behavioral theory of semantic networks. Journal of the Experimental Analysis of Behavior, 84(3), 417–433. doi:10.1901/jeab.2005.78-04

BISSETT, R. T., & HAYES, S. C. (1998). Derived stimulus relations produce mediated and episodic priming. The Psychological Record, 48, 617–630.

DEVANY, J. M., HAYES, S. C., & NELSON, R. O. (1986). Equivalence class formation in language-able and language-disabled children. Journal of the Experimental Analysis of Behavior, 46, 243–257. doi:10.1901/jeab.1986.46-243

DE ROSE, J., DE SOUZA, D. G., & HANNA, E. S. (1996). Teaching reading and spelling: Exclusion and stimulus equivalence. Journal of Applied Behavior Analysis, 29, 451–469. doi:10.1901/jaba.1996.29-451

DUBE, W. V., GREEN, G., & SERNA, R. W. (1993). Auditory successive conditional discrimination and auditory stimulus equivalence classes. Journal of the Experimental Analysis of Behavior, 59, 103–114. doi:10.1901/jeab.1993.59-103

DUGDALE, N., & LOWE, C. F. (2000). Testing for symmetry in the conditional discriminations of language-trained chimpanzees. Journal of the Experimental Analysis of Behavior, 73, 5–22. doi:10.1901/jeab.2000.73-5

GAST, D. L., VANBIERVLIET, A., & SPRADLIN, J. E. (1979). Teaching number-word equivalences: A study of transfer. American Journal of Mental Deficiency, 83, 524–527.

GREEN, G. (1990). Differences in development of visual and auditory-visual equivalence relations. American Journal on Mental Retardation, 95, 260–270.

GROSKREUTZ, N. C., KARSINA, A. J., MIGUEL, C. F., & GROSKREUTZ, M. P. (2010). Using conditional discrimination training with complex auditory-visual samples to produce emergent relations in children with autism. Journal of Applied Behavior Analysis, 43, 131–136.

HAYES, S. C., BARNES-HOLMES, D., & ROCHE, B. (2001). Relational frame theory: A post-Skinnerian account of human language and cognition. New York, NY: Plenum.

KELLY, S., GREEN, G., & SIDMAN, M. (1998). Visual identity matching and auditory-visual matching: A procedural note. Journal of Applied Behavior Analysis, 31, 237–243. doi:10.1901/jaba.1998.31-237

MCILVANE, W. J., & DUBE, W. V. (2003). Stimulus control topography coherence theory: Foundations and extensions. The Behavior Analyst, 26(2), 195–213.

O'HORA, D., PELAEZ, M., & BARNES-HOLMES, D. (2005). Derived relational responding and performance on verbal subtests of the WAIS-III. The Psychological Record, 55(1), 155–175.

RAY, B. A., & SIDMAN, M. (1970). Reinforcement schedules and stimulus control. In W. N. Schoenfeld (Ed.), The theory of reinforcement schedules (pp. 187–214). New York, NY: Appleton-Century-Crofts.

SCHUSTERMAN, R. J., & KASTAK, D. (1998). Functional equivalence in a California sea lion: Relevance to animal social and communicative interactions. Animal Behaviour, 55, 1087–1095.

SIDMAN, M. (1971). Reading and auditory-visual equivalences. Journal of Speech and Hearing Research, 14, 5–13.

SIDMAN, M., & CRESSON, O., JR. (1973). Reading and crossmodal transfer of stimulus equivalences in severe retardation. American Journal of Mental Deficiency, 77, 515–523.

SIDMAN, M., RAUZIN, R., LAZAR, R., CUNNINGHAM, S., TAILBY, W., & CARRIGAN, P. (1982). A search for symmetry in the conditional discriminations of rhesus monkeys, baboons and children. Journal of the Experimental Analysis of Behavior, 37, 23–44. doi:10.1901/jeab.1982.37-23

SIDMAN, M., & TAILBY, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5–22. doi:10.1901/jeab.1982.37-5

SMEETS, P. M., & BARNES-HOLMES, D. (2005). Auditory-visual and visual-visual equivalence relations in children. The Psychological Record, 55(3), 483–503.

STEELE, D., & HAYES, S. C. (1991). Stimulus equivalence and arbitrarily applicable relational responding. Journal of the Experimental Analysis of Behavior, 56(3), 519–555. doi:10.1901/jeab.1991.56-519

STEWART, I., BARNES-HOLMES, D., ROCHE, B., & SMEETS, P. (2002). Stimulus equivalence and non-arbitrary relations. The Psychological Record, 52, 77–88.

STEWART, I., & MCELWEE, J. (2009). Relational responding and conditional discrimination procedures: An apparent inconsistency and clarification. The Behavior Analyst, 32(2), 309–317.

WARD, R., & YU, D. C. T. (2000). Bridging the gap between visual and auditory discrimination learning in children with autism and severe developmental disabilities. Journal on Developmental Disabilities, 7, 142–155.


Appendix A

This table shows the 36-trial block of trials that was cycled during training for each of the three conditions. Alphanumerics (e.g., A1) represent the particular nonsense syllables that were presented in auditory mode. For participants in two of the conditions (Same Voice and Different Voice Test), all nonsense syllables were pronounced in the same voice (i.e., a child's voice). For the third condition (Different Voice), the nonsense syllables were pronounced in three different voices. The letters M, W, and C indicate the type of voice in which particular nonsense syllables were pronounced on particular trials and stand for man, woman, and child, respectively.

Trial  Sample  Comparisons
1      A2(C)   B2(C) B3(W) B1(M)
2      A1(W)   B2(W) B1(C) B3(M)
3      A3(M)   B1(W) B3(C) B2(M)
4      A1(M)   B3(W) B2(C) B1(M)
5      A3(C)   B3(M) B1(C) B2(W)
6      A2(W)   B1(W) B2(M) B3(C)
7      A3(W)   B1(M) B2(C) B3(W)
8      A2(M)   B3(M) B2(W) B1(C)
9      A1(C)   B1(C) B3(W) B2(M)
10     A3(C)   B2(C) B3(W) B1(M)
11     A1(M)   B3(M) B1(C) B2(W)
12     A2(M)   B1(W) B3(C) B2(M)
13     A3(W)   B2(W) B1(M) B3(C)
14     A1(C)   B1(W) B2(C) B3(M)
15     A2(W)   B3(C) B1(W) B2(M)
16     A1(W)   B2(M) B3(W) B1(C)
17     A2(C)   B2(W) B1(M) B3(C)
18     A3(M)   B3(M) B2(W) B1(C)
19     B1(C)   C3(W) C1(M) C2(C)
20     B2(W)   C3(C) C2(M) C1(W)
21     B3(M)   C3(C) C1(W) C2(M)
22     B2(C)   C1(M) C2(C) C3(W)
23     B1(M)   C1(W) C3(M) C2(C)
24     B3(W)   C1(M) C3(W) C2(C)
25     B2(C)   C2(M) C3(C) C1(W)
26     B1(C)   C2(M) C3(W) C1(C)
27     B3(W)   C3(M) C2(W) C1(C)
28     B1(W)   C1(C) C2(M) C3(W)
29     B3(M)   C2(W) C1(C) C3(M)
30     B2(M)   C3(C) C1(W) C2(M)
31     B1(W)   C2(W) C1(C) C3(M)
32     B2(M)   C1(C) C3(M) C2(W)
33     B3(C)   C2(M) C3(W) C1(C)
34     B2(W)   C2(M) C1(W) C3(C)
35     B3(C)   C1(M) C2(C) C3(W)
36     B1(M)   C3(W) C2(C) C1(M)


Appendix B

This table shows the 36-trial block of testing trials presented to participants in each of the three conditions. Alphanumerics (e.g., A1) represent the particular nonsense syllables that were presented in auditory mode. For participants in one of the conditions (Same Voice), all nonsense syllables were pronounced in the same voice (i.e., a child's voice), as during training. For the other two conditions (Different Voice and Different Voice Test), the nonsense syllables were pronounced in three different voices. The letters M, W, and C indicate the type of voice in which particular nonsense syllables were pronounced on particular trials and stand for man, woman, and child, respectively.

Trial  Sample  Comparisons
1      C2(M)   A1(C) A3(M) A2(W)
2      C3(W)   A3(M) A2(C) A1(W)
3      C2(C)   A1(M) A2(W) A3(C)
4      C1(C)   A3(C) A1(W) A2(M)
5      C3(C)   A2(M) A3(W) A1(C)
6      C1(W)   A1(M) A2(C) A3(W)
7      C2(M)   A3(W) A1(M) A2(C)
8      C3(M)   A2(M) A3(C) A1(W)
9      C1(M)   A1(W) A2(M) A3(C)
10     C2(W)   A1(W) A3(C) A2(M)
11     C1(M)   A3(C) A2(M) A1(W)
12     C3(W)   A1(M) A3(C) A2(W)
13     C2(C)   A2(M) A3(W) A1(C)
14     C1(W)   A2(W) A1(M) A3(C)
15     C3(C)   A1(C) A2(M) A3(W)
16     C2(W)   A2(C) A1(M) A3(W)
17     C3(M)   A3(C) A1(W) A2(M)
18     C1(C)   A2(W) A3(C) A1(M)
19     C2(M)   A1(C) A3(M) A2(W)
20     C3(W)   A3(M) A2(C) A1(W)
21     C2(C)   A1(M) A2(W) A3(C)
22     C1(C)   A3(C) A1(W) A2(M)
23     C3(C)   A2(M) A3(W) A1(C)
24     C1(W)   A1(M) A3(C) A2(W)
25     C2(M)   A3(W) A1(M) A2(C)
26     C3(M)   A2(M) A3(C) A1(W)
27     C1(M)   A1(W) A2(M) A3(C)
28     C2(W)   A1(W) A3(C) A2(M)
29     C1(M)   A3(C) A2(M) A1(W)
30     C3(W)   A1(M) A3(C) A2(W)
31     C2(C)   A2(M) A3(W) A1(C)
32     C1(W)   A2(M) A1(W) A3(C)
33     C3(C)   A1(C) A2(M) A3(W)
34     C2(W)   A2(C) A1(M) A3(W)
35     C3(M)   A3(C) A1(W) A2(M)
36     C1(C)   A2(W) A3(C) A1(M)
