SERIAL VS. PARALLEL PHONOLOGICAL GRAMMARS: Learners, Errors and Consequences
Anne-Michelle Tessier, University of Alberta
University of Toronto, Feb 8 2013
Q: Do phonological processes apply in serial or in parallel?
A selective continuum of answers across theories:
• SPE rules: very serial
• Lexical Phonology, Prosodic Phonology rules: mostly serial
• Classic OT constraints: very parallel
• Stratal OT: mostly parallel
• Harmonic Serialist constraints: very serial!
Serial vs. parallel processes, from the constraint-based perspective
The Classic OT view: everything at once, all in parallel
With great power comes great problems:
- global interactions: predicting unattested languages
- no intermediate forms: failing to predict opacities, etc.
(summaries: McCarthy 2007, McCarthy 2011)
/fonɑlədʒi/       | Max-C | Max-V | *UnstressedFull-V | Ident-VPlace
fo(ˈnɑ.lə)(ˌdʒi)  |       |       | *!                |
fə(ˈnɑ.lə)(ˌdʒi)  |       |       |                   | *
(ˈfnɑɫ)(ˌdʒi)     |       | *!*   |                   |
(ˈsɪn.tæks)       | *!*** | *!*** |                   |
Serial vs. parallel processes, from the constraint-based perspective
The Harmonic Serialist (HS) view: one ordered thing at a time, slow and steady harmonic improvement
Upshot:
- a finite candidate set of small input-based changes
- for what benefits? at what costs?
/fo.ˈnɑ.lə.ˌdʒi/  | Max-C | Max-V | *UnstressedFull-V | Ident-VPlace
fo.ˈnɑ.lə.ˌdʒi    |       |       | *!                |
fə.ˈnɑ.lə.ˌdʒi    |       |       |                   | *
ˈfnɑ.lə.ˌdʒi      |       | *!    |                   |
(ˈsɪn.tæks)       — not in the candidate set!
Serial vs. parallel constraint interactions from the learning perspective
Successes in OT learning
• formal algorithms that approximate child development
(Tessier 2007, 2009, 2010; Jesney & Tessier 2008, 2011; Becker & Tessier 2011)
• what’s easy: phonotactics
  observed: [dɑgz], [kæts] → voicing assimilation
• what’s harder: alternations
  observed: [dɑgz], [kæts], [fɪʃəz] → ‘plural’ UR = /-z/
HS as a theory of learning?
• Will the old tools work?
• What gets harder? Easier? Different?
• Does typology improve?
• Does the learner trajectory match human learning?
Roadmap of the talk
1. A Biased Introduction to Error-driven OT & HS learning
2. One Advantage of Serial Learning: Avoiding Backtracking
   - A gradual OT (but not HS) learner makes odd errors
   - Diary study evidence: such errors not child-typical
3. One Challenge for Serial Learning: Acquiring Inventories
   - HS needs more, hidden, rankings than OT
   - Hidden rankings are hard to learn
4. Interim Conclusions and Take Home Messages
   - Learning crux: HS shrinks the candidate space
   - Maybe this is a good thing?
Tessier (2009); building on Prince and Tesar (2004), Hayes (2004) inter alia
Learners hypothesize one grammar at a time, but store all previous forms and errors.
Learners are triggered to change grammars by the accumulation of error types.
See also Becker and Tessier (2011) about variation in production
1a. An Error-Driven OT Learner
A Categorical Grammar
A Grammar
Grammar at Work
Grammar’s Results
/sɑk/ | *Fric | *Coda | Ident[cont] | Max
sɑk   | *!    | *!    |             |
sɑ    | *!    |       |             | *
tɑk   |       | *!    | *           |
tɑ    |       |       | *           | *
/sɑk/ → [tɑ] 100% of the time; *[sɑk], *[sɑ], *[tɑk] 0% of the time
/zu/ → [du] 100% of the time
/dɑg/ → [dɑ] 100% of the time…
H-INITIAL *FRICATIVE, *CODA >> IDENT-[CONT], MAX
Stage 1: Making Mistakes
A Grammar
Grammar at Work
Analyzing this error:
/sɑk/ | *Fric | *Coda | Ident[cont] | Max
sɑk   | *!    | *!    |             |
sɑ    | *!    |       |             | *
tɑk   |       | *!    | *           |
tɑ    |       |       | *           | *
H-INITIAL *FRICATIVE, *CODA >> IDENT-[CONT], MAX
winner ~ loser | *Fric | *Coda | Ident[cont] | Max
sɑk ~ tɑ       | L     | L     | W           | W
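In prose: a constraint gets W if it prefers the target winner (fewer violations there) and L if it prefers the grammar's current output. A minimal sketch of that bookkeeping, with constraint names and violation counts taken from the /sɑk/ tableau above (the function name is illustrative, not from the cited work):

```python
# Build a comparative (W/L) row from winner/loser violation profiles.
def erc_row(winner_viols, loser_viols):
    row = {}
    for c in winner_viols:
        if winner_viols[c] < loser_viols[c]:
            row[c] = "W"      # constraint prefers the intended winner
        elif winner_viols[c] > loser_viols[c]:
            row[c] = "L"      # constraint prefers the grammar's (wrong) output
        else:
            row[c] = ""       # no preference
    return row

winner = {"*Fric": 1, "*Coda": 1, "Ident[cont]": 0, "Max": 0}   # target [sak]
loser  = {"*Fric": 0, "*Coda": 0, "Ident[cont]": 1, "Max": 1}   # output [ta]
print(erc_row(winner, loser))
# {'*Fric': 'L', '*Coda': 'L', 'Ident[cont]': 'W', 'Max': 'W'}
```

The resulting dict is exactly the cached row sɑk ~ tɑ: L, L, W, W.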
Storing Mistakes
Caching a grammar’s errors
       |           | *Fric | *Coda | Ident[cont] | Max
sock   | sɑk ~ tɑ  | L     | L     | W           | W
zoo    | zu ~ du   | L     |       | W           |
dog    | dɑg ~ dɑ  |       | L     |             | W
eyes   | aɪz ~ aɪ  | L     | L     |             | W
Learning from Mistakes
Choosing a Cached error
       |           | *Fric | *Coda | Ident[cont] | Max
sock   | sɑk ~ tɑ  | L     | L     | W           | W
zoo    | zu ~ du   | L     |       | W           |
dog    | dɑg ~ dɑ  |       | L     |             | W
eyes   | aɪz ~ aɪ  | L     | L     |             | W
*FRIC >> MAX >> *CODA >> IDENT[CONT]
Learning from Mistakes
Archiving this error:
Re-ranking via Biased Constraint Demotion (BCD: Prince and Tesar, 2004)
*FRIC, *CODA >> IDENT[CONT] , MAX
         | *Fric | *Coda | Ident[cont] | Max
dɑg ~ dɑ |       | L     |             | W
H-NEW *FRIC >> MAX >> *CODA >> IDENT[CONT]
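The BCD step can be sketched as follows — a deliberately simplified rendering of Prince & Tesar's idea: install markedness constraints as high as possible, and only install a faithfulness constraint when one is needed to explain a cached error. Function and variable names are my own, and the single archived row is the dɑg ~ dɑ error above:

```python
# A minimal sketch of Biased Constraint Demotion (Prince & Tesar 2004).
def bcd(markedness, faith, rows):
    ranking, remaining = [], list(rows)
    unranked = list(markedness) + list(faith)
    while unranked:
        # constraints with no L in any unexplained row can be installed now
        ok = [c for c in unranked if all(r.get(c) != "L" for r in remaining)]
        m_ok = [c for c in ok if c in markedness]
        if m_ok:
            chosen = m_ok                      # bias: markedness goes in first
        elif remaining:
            # only faith available: install one that actually explains an error
            chosen = [next(c for c in ok if any(r.get(c) == "W" for r in remaining))]
        else:
            chosen = ok                        # no errors left; rest goes at the bottom
        ranking.extend(chosen)
        unranked = [c for c in unranked if c not in chosen]
        # rows now explained (a W installed above all their Ls) drop out
        remaining = [r for r in remaining
                     if not any(r.get(c) == "W" for c in chosen)]
    return ranking

rows = [{"*Coda": "L", "Max": "W"}]            # archived error: dag ~ da
print(bcd(["*Fric", "*Coda"], ["Ident[cont]", "Max"], rows))
# ['*Fric', 'Max', '*Coda', 'Ident[cont]']
```

The output matches H-NEW: *FRIC >> MAX >> *CODA >> IDENT[CONT] — Max is pulled above *Coda only because the archived error forces it.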
Result: Slight Improvement
New Grammar At Work
         | *Fric | *Coda | Ident[cont] | Max
dɑg ~ dɑ |       | L     |             | W
/sɑk/ | *Fric | Max | *Coda | Ident[cont]
sɑk   | *!    |     | *     |
sɑ    | *!    | *   |       |
tɑk   |       |     | *     | *
tɑ    |       | *!  |       | *
Stage Two: Caching...
Choosing a Cached error
       |            | *Fric | *Coda | Ident[cont] | Max
sock   | sɑk ~ tɑk  | L     |       | W           |
zoo    | zu ~ du    | L     |       | W           |
dog    | dɑg ~ dɑg  |       |       |             |
eyes   | aɪz ~ aɪd  | L     |       | W           |
MAX >> *CODA >> IDENT[CONT] >> *FRIC
Stage Two: Learning...
Re-ranking via BCD:
*FRIC >> MAX >> *CODA >> IDENT[CONT]
          | *Fric | *Coda | Ident[cont] | Max
dɑg ~ dɑ  |       | L     |             | W
sɑk ~ tɑk | L     |       | W           |
H-NEW MAX >> *CODA >> IDENT[CONT] >> *FRIC
Result: Success!
Target Grammar At Work
          | *Fric | *Coda | Ident[cont] | Max
dɑg ~ dɑ  |       | L     |             | W
sɑk ~ tɑk | L     |       | W           |
/sɑk/ | Max | *Coda | Ident[cont] | *Fric
sɑk   |     | *     |             | *
sɑ    | *!  |       |             | *
tɑk   |     | *     | *!          |
tɑ    | *!  |       | *           |
Learner Trajectory:
Stage 1: *Fric, *Coda >> Ident, Max
/sɑk/ | *Fric | *Coda | Ident | Max
sɑk   | *!    | *!    |       |
sɑ    | *!    |       |       | *
tɑk   |       | *!    | *     |
tɑ    |       |       | *     | *

Stage 2: *Fric >> Max >> *Coda >> Ident
/sɑk/ | *Fric | Max | *Coda | Ident
sɑk   | *!    |     | *     |
sɑ    | *!    | *   |       |
tɑk   |       |     | *     | *
tɑ    |       | *!  |       | *

Stage 3: Max >> *Coda >> Ident >> *Fric
/sɑk/ | Max | *Coda | Ident | *Fric
sɑk   |     | *     |       | *
sɑ    | *!  |       |       | *
tɑk   |     | *     | *!    |
tɑ    | *!  |       | *     |
1b. A Comparable HS Learner
McCarthy (2000, 2008ab, etc.); see also Staubs and Pater (2012), in prep.
Like OT:
- input → output mappings
- markedness/faith constraints
- typologically driven
Unlike OT:
- serial evaluation
- multiple derivations
- finite candidate set
Mappings in HS via Derivation
/sɑk/ | *Fric | *Coda | Ident[cont] | Max
sɑk   | *!    | *!    |             |
sɑ    | *!    |       |             | *
tɑk   |       | *     | *           |
**tɑ — not in this candidate set!

GEN(/input/) = [cand-1], fully faithful; [cand-2], violates Ident[cont] only; [cand-3], violates Max only
Upshot:
- finite number of candidates
- ‘one step away’ from input
- candidate space crucially depends on set of Faith
Mappings in HS via Derivation
/sɑk/ | *Fric | *Coda | Ident | Max
sɑk   | *!    | *!    |       |
sɑ    | *!    |       |       | *
tɑk   |       | *     | *     |

/tɑk/ | *Fric | *Coda | Ident | Max
tɑk   |       | *!    |       |
tɑ    |       |       |       | *
sɑk   | *!    | *     | *     |

If Eval returns an unfaithful cand:
- [optima] for mapping-n is now /input/ for mapping-n+1
- re-apply GEN...
- note [tɑ] IS now a candidate!
Mappings in HS via Derivation
/tɑ/  | *Fric | *Coda | Ident | Max
tɑ    |       |       |       |
sɑ    | *!    |       | *     |
tɑk   |       | *!    |       |
If Eval returns the FAITHful cand:
- derivation is finished
- [optima] for mapping-n is the final output winner
Here: /sɑk/ → tɑk → [tɑ]
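The tableaux above amount to a loop: GEN proposes one-change candidates, Eval picks the best under the ranking, and the derivation repeats until the faithful candidate wins. A toy sketch under the slides' ranking *Fric >> *Coda >> Ident[cont] >> Max; the segment classes and the two GEN operations (stopping s→t, final-C deletion) are simplified to just what these forms need:

```python
# Sketch of an HS derivation loop for the toy /sak/ example.
FRICS, VOWELS = set("szfv"), set("aiueo")

def violations(inp, cand):
    """Violation profile, ordered by the ranking *Fric >> *Coda >> Ident >> Max."""
    return (
        sum(seg in FRICS for seg in cand),               # *Fric
        1 if cand and cand[-1] not in VOWELS else 0,     # *Coda
        sum(a != b for a, b in zip(inp, cand)),          # Ident[cont] (s -> t)
        len(inp) - len(cand),                            # Max (deleted segments)
    )

def gen(form):
    """One-change candidates only: faithful, one stopping, one final deletion."""
    cands = [form]
    for i, seg in enumerate(form):
        if seg == "s":
            cands.append(form[:i] + "t" + form[i + 1:])
    if form and form[-1] not in VOWELS:
        cands.append(form[:-1])
    return cands

def derive(underlying):
    steps, form = [underlying], underlying
    while True:
        # lexicographic min over violation tuples = Eval under the ranking
        best = min(gen(form), key=lambda c: violations(form, c))
        if best == form:                                 # faithful winner: converge
            return steps
        steps.append(best)
        form = best

print(derive("sak"))   # ['sak', 'tak', 'ta']
```

Note that [ta] is never in GEN(/sak/): it only becomes reachable once the first step has produced /tak/ — exactly the point of the three tableaux.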
Issue: learning multi-step derivations?
How could the learner compare sɑk ~ tɑ? They do not co-exist in one tableau!
The learner’s error: /sɑk/ → tɑk → [tɑ]. What can you take from this error?
Proposal for multi-step derivations
/sɑk/ | *Fric | *Coda | Ident | Max
sɑk   | *!    | *!    |       |
sɑ    | *!    |       |       | *
tɑk   |       | *     | *     |

Use the first step:
          | *Fric | *Coda | Ident | Max
sɑk ~ tɑk | L     |       | W     |
The HS Cache – Storing First Steps
Grammar: *FRIC >> *CODA >> IDENT[CONT] >> MAX
Derivations:
sock  /sɑk/ → tɑk → [tɑ]
zoo   /zu/ → [du]
dog   /dɑg/ → [dɑ]
shoes /ʃuz/ → tud → [tu]
Cache: First Steps only
       |            | *Fric | *Coda | Ident[cont] | Max
sock   | sɑk ~ tɑk  | L     |       | W           |
zoo    | zu ~ du    | L     |       | W           |
dog    | dɑg ~ dɑ   |       | L     |             | W
shoes  | ʃuz ~ tud  | L     |       | W           |
HS Trajectory: Three Stages
Stage One: /sɑk/ → tɑk → [tɑ]   (*Fric >> *Coda >> Ident >> Max)
/sɑk/ | *Fric | *Coda | Ident | Max
sɑk   | *!    | *!    |       |
sɑ    | *!    |       |       | *
tɑk   |       | *     | *     |
/tɑk/ | *Fric | *Coda | Ident | Max
tɑk   |       | *!    |       |
tɑ    |       |       |       | *
sɑk   | *!    | *     | *     |
/tɑ/  | *Fric | *Coda | Ident | Max
tɑ    |       |       |       |
sɑ    | *!    |       | *     |
tɑk   |       | *!    |       |

(A) Stage Two: /sɑk/ → [sɑ]   (*Coda >> Ident >> *Fric >> Max)
/sɑk/ | *Coda | Ident | *Fric | Max
sɑk   | *!    |       | *     |
sɑ    |       |       | *     | *
tɑk   | *!    | *     |       |
/sɑ/  | *Coda | Ident | *Fric | Max
sɑk   | *!    |       | *     |
sɑ    |       |       | *     |
tɑ    |       | *!    |       |

Stage Three: /sɑk/ → [sɑk]   (Max >> *Coda >> Ident >> *Fric)
/sɑk/ | Max | *Coda | Ident | *Fric
sɑk   |     | *     |       | *
sɑ    | *!  |       |       | *
tɑk   |     | *     | *!    |
2a. A Serialist Success in Learning
A question: why is grammar change gradual?
In both of these OT and HS approaches:
- learner only learns from the Archive, not everything Cached...
- so: order of acquisition depends on which errors get Archived
- problem: learning from the wrong errors predicts particularly weird backtracking
The Error-Selective Learning Idea
       |           | *Fric | *Coda | Ident[cont] | Max
sock   | sɑk ~ tɑ  | L     | L     | W           | W
zoo    | zu ~ du   | L     |       | W           |
dog    | dɑg ~ dɑ  |       | L     |             | W
eyes   | aɪz ~ aɪ  | L     | L     |             | W

The OT Error Selection Algorithm (Tessier 2007, 2009):
– choose errors that are simple
– learn one thing at a time
– pick errors with as few Ls as possible...
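The selection step — pick a cached error with as few Ls as possible — can be sketched directly over the cache table above (a toy rendering; the rows are the slide's, the function name is mine). The tie between 'zoo' and 'dog' shows why the ESA sometimes has to pick at random:

```python
# Sketch of the ESA's selection step: find the cached errors
# whose comparative rows contain the fewest L-preferring constraints.
cache = {
    "sock": {"*Fric": "L", "*Coda": "L", "Ident[cont]": "W", "Max": "W"},
    "zoo":  {"*Fric": "L", "Ident[cont]": "W"},
    "dog":  {"*Coda": "L", "Max": "W"},
    "eyes": {"*Fric": "L", "*Coda": "L", "Max": "W"},
}

def simplest_errors(cache):
    n_l = {word: sum(mark == "L" for mark in row.values())
           for word, row in cache.items()}
    fewest = min(n_l.values())
    return [word for word, n in n_l.items() if n == fewest]

print(simplest_errors(cache))   # ['zoo', 'dog'] -- tied at one L each
```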
The Error-Selective Learning Idea
          |                | *Fric | *Coda | *CompOnset | *CompCoda | Ident[cont] | Max
sock      | sɑk ~ tɑ       | L     | L     |            |           | W           | W
zoo       | zu ~ du        | L     |       |            |           | W           |
dog       | dɑg ~ dɑ       |       | L     |            |           |             | W
eyes      | aɪz ~ aɪ       | L     | L     |            |           |             | W
strengths | stɹɛŋkθs ~ tɛ  | L     | L     | L          | L         | W           | W

The OT Error Selection Algorithm (Tessier 2007, 2009):
– choose errors that are simple
– learn one thing at a time
Error Selection: Ambiguity!
A Global Approach: Parallel OT — one possible learning path:

       |           | *Fric | *Coda | *VoicedVelarStop | *VoicedFric | Ident[cont] | Max
sock   | sɑk ~ tɑ  | L     | L     |                  |             | W           | W
zoo    | zu ~ du   | L     |       |                  | L           | W           |
dog    | dɑg ~ dɑ  |       | L     | L                |             |             | W
eyes   | aɪz ~ aɪ  | L     | L     |                  | L           |             | W

The OT Error Selection Algorithm (Tessier 2007, 2009):
– often, many errors will be tied for fewest Ls
– sometimes, the ESA has to pick at random
- multiple Ls in an error can cause odd learning paths
Error Selection: Not Selective Enough
A Global Approach: Parallel OT — one possible learning path:
To build a grammar in which winner beats loser:
- install at least ONE W-preferring constraint above ALL L-preferring constraints
(Prince and Smolensky, 1993/2004; Prince and Tesar, 2004)
Possible Stage 2: Max >> *Fric, *Coda >> Ident[cont]
winner ~ loser | *Fric | *Coda | Ident[cont] | Max
sɑk ~ tɑ       | L     | L     | W           | W

/sɑk/ | Max | *Fric | *Coda | Ident
sɑk   |     | *!    | *     |
sɑ    | *!  | *     |       |
tɑk   |     |       | *     | *
tɑ    | *!  |       |       | *
Error Selection: Not Selective Enough
A Global Approach: Parallel OT — one possible learning path:
To build a grammar in which winner beats loser:
- install W-preferring constraints that resolve the most errors
- here: Ident[cont] (Prince and Tesar, 2004; Hayes, 2004)
Resulting Stage 3: Ident >> *Fric, *Coda >> Max
winner ~ loser | *Fric | *Coda | Ident[cont] | Max
sɑk ~ tɑ       | L     | L     | W           | W
sɑk ~ tɑk      | L     |       | W           |

/sɑk/ | Ident | *Fric | *Coda | Max
sɑk   |       | *     | *!    |
sɑ    |       | *     |       | *
tɑk   | *!    |       | *     |
tɑ    | *!    |       |       | *

Stage 1: /sɑk/ → [tɑ]   Stage 2: /sɑk/ → [tɑk]   Stage 3: /sɑk/ → [sɑ]
...frics improve, codas regress
HS Errors: Always Gradual!
Grammar: *FRIC >> *CODA >> IDENT[CONT] >> MAX
Derivations:
sock  /sɑk/ → tɑk → [tɑ]
shoes /ʃuz/ → tud → [tu]
Cache: First Steps only
       |            | *Fric | *Coda | Ident[cont] | Max
sock   | sɑk ~ tɑk  | L     |       | W           |
shoes  | ʃuz ~ tud  | L     |       | W           |
2b. Looking for backtracking
A diary study: Zack (Smith, 2010). Thanks: Philip Dilts (UofA)
This study: for all (2620) two-member CC onsets,
a) trajectory of 3 cluster types
b) additional marked properties
Question: Does acquisition of CC onsets ever cause backtracking of other structures? Or vice versa?
Zack: Complex Onset Development
[Figure: % Faithful Onset Clusters (rather than Reduced) for s-stop, stop-r and stop-l clusters, across eight age ranges from 1;8.1 to 3;8.30]
Codas vs. Stop-/l/-Onsets
[Figure: proportions by stage, ages 1;8.1 to 3;8.30]
@stage2: coda obstruents begin to appear

codas vs. stop-/l/ | 2;4.15-19 | 2;6.15-29
glass    | [da:t]    | [dra:s]
glasses  | [da.tɪd]  | [dra:.siz]
pliers   | [pae.əd]  | [pleɪt]
@stage3: stop-/l/ onsets appear
Upshot: As stop-liquid onsets emerge, coda obstruents do not regress
Coda Frics vs. Stop-Liquid Onsets
[Figure: proportions by stage, ages 1;8.1 to 3;8.30]
@stage 3-4: stop-liquid onsets emerge
@stage 4: coda frics begin to appear accurate

stop-liq vs. frics | @2;6         | @2;9
Gruff   | [dʌf]        | [drʌf]
close   | [trəud]      | [trəuz]
please  | [pid], [piz] | [priz]
Upshot: As coda fricatives emerge, stop-liquid onsets do not regress
Velar Fronting vs. /s/-Stop Onsets
[Figure: proportions by stage, ages 1;8.1 to 3;8.30]
@stage7: /s/-stop onsets emerge
@stage8: velar fronting begins to cease

s-stop vs. velars | @3;3    | @3;5
scoop  | [stu:p] | [sku:p]
sky    | [staɪ]  | [staɪ], [skaɪ]
stick  | [stɪt]  | [stɪt], [stɪk]
Upshot: As velar place emerges, s-stop onsets do not regress
Summary of Serialist Success
HS learner can only see small /i/ → [o] changes
So HS learner learns one thing at a time...
...and it knows what it’s learned each time
Upshot: its developmental trajectory doesn’t vacillate qualitatively between faith/repair
Neither, I think, do children
          | *Fric | *Coda | Ident[cont] | Max
sɑk ~ tɑk | L     |       | W           |
sɑk ~ sɑ  |       | L     |             | W
Known issue for HS learning:
- HS needs more rankings than OT
- ‘Hidden M rankings’: which ensure feeding orders among processes
- How does the error-driven learner find them?
- Phonotactics just got harder!
Tessier (2012); Tessier and Jesney (in prep)
3. A Serialist Failure in Learning
Effects of Markedness Rankings in HS
Ranking: *Fric >> *Coda >> Faith
Result: sɑk → tɑk → tɑ

/sɑk/ | *Fric | *Coda | Ident | Max
sɑk   | *!    | *!    |       |
sɑ    | *!    |       |       | *
tɑk   |       | *     | *     |
/tɑk/ | *Fric | *Coda | Ident | Max
tɑk   |       | *!    |       |
tɑ    |       |       |       | *
sɑk   | *!    | *     | *     |
/tɑ/  | *Fric | *Coda | Ident | Max
tɑ    |       |       |       |
sɑ    | *!    |       | *     |
tɑk   |       | *!    |       |
Ranking: *Coda >> *Fric >> Faith
Result: sɑk → sɑ → tɑ

/sɑk/ | *Coda | *Fric | Ident | Max
sɑk   | *!    | *     |       |
sɑ    |       | *     |       | *
tɑk   | *!    |       | *     |
/sɑ/  | *Coda | *Fric | Ident | Max
sɑk   | *!    | *     |       |
sɑ    |       | *!    |       |
tɑ    |       |       | *     |
/tɑ/  | *Coda | *Fric | Ident | Max
tɑ    |       |       |       |
sɑ    |       | *!    | *     |
tɑk   | *!    |       | *     |
Effects of Markedness Rankings in HS
...if a process driven by M1 feeds a process driven by M2.
Example: Consonant Harmony process only for stops
Ranking: *Fric >> CH >> Faith
Result: /sɑk/ → tɑk → [kɑk]

/sɑk/ | *Fric | CH | ID[cont] | ID[place]
sɑk   | *!    |    |          |
tɑk   |       | *  | *        |
xɑk   | *!    |    | *        |
/tɑk/ | *Fric | CH | ID[cont] | ID[place]
tɑk   |       | *! |          |
kɑk   |       |    |          | *
sɑk   | *!    |    | *        |
/kɑk/ | *Fric | CH | ID[cont] | ID[place]
kɑk   |       |    |          |
tɑk   |       | *! |          | *
xɑk   | *!    |    | *        |
When are HS M >> M rankings crucial?
Example: Consonant Harmony process only for stops
Reverse Ranking: CH >> *Fric >> Faith
Result: /sɑk/ → [sɑk] — CH blocks stopping!

/sɑk/ | CH | *Fric | ID[cont] | ID[place]
sɑk   |    | *     |          |
tɑk   | *! |       | *        |
xɑk   |    | *     | *!       |

Parallel Evaluation = No Ordered Processes
Any M >> F ranking: CH >> *Fric >> Faith; *Fric >> CH >> Faith; CH, *Fric >> Faith
Result: /sɑk/ → [kɑk]

/sɑk/ | CH | *Fric | ID[cont] | ID[place]
sɑk   |    | *!    |          |
tɑk   | *! |       | *        |
xɑk   |    | *!    | *        |
kɑk   |    |       | *        | *
In OT: this M >> M ranking not crucial

/sɑk/ | *Fric | CH | ID[cont] | ID[place]
sɑk   | *!    |    |          |
tɑk   |       | *! | *        |
xɑk   | *!    |    | *        |
kɑk   |       |    | *        | *
A need for hidden M >> M rankings
- M1 process can crucially feed M2 process
- M1 and M2 both satisfied in surface forms
Schematic: M1 = *B, M2 = *Ab
input /AB/ → Ab (1st step: legal due to M1, but an output form violating M2) → [ab] (2nd step: due to M2; surface form)
A need for hidden M >> M rankings
- For /AB/ → [ab], a two-step derivation is needed
- Some M1 >> M2 ranking must drive the first step
Two possible first steps: /AB/ → aB, or /AB/ → Ab
A need for hidden M >> M rankings
/AB/ Hidden Ranking:
*B >> Ab
*B
Ab [ab]
*Ab
- M1 process can crucially feed M2 process - M1 and M2 both satisfied in surface forms
- For /AB/ [ab], two-step derivation is needed - Some M1 >> M2 ranking must drive the first step
Another hidden ranking: *A >> *aB
/AB/ → aB (driven by *A) → [ab] (driven by *aB)
If all rankings are M2 >> M1, /AB/ will surface faithfully:
*aB blocks /AB/ → aB, and *Ab blocks /AB/ → Ab
Summary of Serialist Failure
The HS learner sees only single /i/ → [o] changes
But to learn a grammar with ordered changes, the HS grammar needs M >> M rankings
And those rankings will not come from evidence
...So far: I don’t know how the learner should find them all
cf. Tessier (2012): get a shovel and look...
cf. Staubs and Pater (2012): ask a different question...
Take Home Messages
A crucial way serial constraint interaction changes learning:
• HS shrinks the candidate space
• HS simplifies the options at each step
Learner Advantage: gradualness is inherently easier
Learner Disadvantage: restrictiveness is inherently harder
• to capture phonotactics as well as alternations, the HS learner must hypothesize unfaithful inputs
• “what if I tried to say */be:n/??”
Optimism: the small HS candidate set may help the search for unfaithful inputs, ATB
THANK YOU!
Questions, connections, challenges, complaints…
References
Becker, Michael and Anne-Michelle Tessier (2011). Trajectories of faithfulness in child-specific phonology. Phonology 28(2): 163-196.
Campos-Astorkiza, Rebeka (2007). Minimal contrast and the phonology-phonetics interaction. Ph.D., University of Southern California.
Elfner, Emily (2011). Stress-epenthesis interactions in Harmonic Serialism. In John McCarthy and Joe Pater (eds.) Harmonic Grammar and Harmonic Serialism. Equinox Press.
Hayes, Bruce (2004). Phonological Acquisition in Optimality Theory: the early stages. In Rene Kager, Joe Pater & Wim Zonneveld (eds.), Fixing Priorities: Constraints in Phonological Acquisition. Cambridge, UK: Cambridge University Press.
Jesney, Karen (2008). Positional Faithfulness, non-locality, and the Harmonic Serialism solution. In Suzi Lima, Kevin Mullin & Brian Smith (eds.), Proceedings of the 39th Meeting of the North East Linguistics Society (NELS 39). Amherst, MA: GLSA
Jesney, Karen and Anne-Michelle Tessier (2008). Gradual learning and faithfulness: consequences of ranked vs. weighted constraints. In Anisa Schardl, Martin Walkow & Muhammad Abdurrahman (eds.), Proceedings of the 38th Meeting of the North East Linguistics Society (NELS 38), volume 1, 375-388. Amherst, MA: GLSA
Jesney, Karen and Anne-Michelle Tessier (2011). Biases in Harmonic Grammar: the road to restrictive learning. Natural Language and Linguistic Theory 29(1): 251-290
McCarthy, John J. (2000). Harmonic Serialism and Parallelism. In M. Hirotani, A. Coetzee, N. Hall and J.-Y. (eds.) Proceedings of NELS 30. Amherst, MA: GLSA: 501-524.
McCarthy, John J. (2007). Hidden Generalizations: Phonological Opacity in Optimality Theory. London: Equinox.
McCarthy, John J. (2008a). The gradual path to cluster simplification. Phonology 25: 271-319.
McCarthy, John J. (2008b). The serial interaction of stress and syncope. Natural Language & Linguistic Theory 26(3): 499-546.
Prince, Alan and Paul Smolensky (1993/2004). Optimality Theory: Constraint Interaction in Generative Grammar. Oxford, UK: Blackwell.
Prince, Alan and Bruce Tesar (2004). Learning Phonotactic Distributions. In R. Kager, W. Zonneveld, J. Pater (eds.) Fixing Priorities: Constraints in Phonological Acquisition. Cambridge, UK: CUP.
Pruitt, Kathryn (2010). Serialism and locality in constraint-based metrical parsing. Phonology 27(3): 481-526.
Smolensky, Paul (1996). On the comprehension/production dilemma in child language. Linguistic Inquiry 27: 720-731.
Smith, Neilson V. (2010). Acquiring Phonology: A cross generational case study. CUP.
Staubs, Robert and Joe Pater (2012). Learning serial constraint-based grammars. In J. J. McCarthy and J. Pater (eds.) Harmonic Grammar and Harmonic Serialism, Equinox.
Tessier, Anne-Michelle (2007). Biases and Stages in Phonological Acquisition. Ph.D., Umass Amherst.
Tessier, Anne-Michelle (2009). Frequency of Violation and Constraint-Based Phonological Learning. Lingua 119 (1): 6-41.
Tessier, Anne-Michelle (2010). UseListedError: a grammatical account of lexical exceptions in phonological acquisition. In S. Lima, K. Mullins and B. Smith (eds). Proceedings of NELS39 vol. 2 pp. 813-827. Amherst, MA: GLSA.
Tessier, Anne-Michelle (2012) Error-driven learning in Harmonic Serialism. To appear in the Proceedings of NELS42.