ICSLP 94
1994 International Conference on Spoken Language Processing
September 18-22, 1994
Pacific Convention Plaza Yokohama (PACIFICO)
Yokohama, Japan
WEDNESDAY MORNING Sep. 21
Session 18: Speech Recognition in Adverse Environments
Time: 09:00 to 12:15, September 21, 1994
Place: Room A
Chairpersons:
John H.L. Hansen, Digital Speech Processing Laboratory, Dept. of Electrical Engineering, Duke University, U.S.A.
Noboru Sugamura, NTT Human Interface Laboratories, Japan
18.1 Compensation of Telephone Line Effects for Robust Speech Recognition 987
C. Mokbel, P. Paches-Leal, D. Jouvet and J. Monne, France Telecom, CNET-LAA/TSS/RCP, 2 Route de Tregastel, BP 40, 22300, Lannion Cedex, France

18.2 Telephone Line Characteristic Adaptation Using Vector Field Smoothing Technique 991
Jun-ichi Takahashi and Shigeki Sagayama, NTT Human Interface Laboratories, 1-2356, Take, Yokosuka, 238-03 Japan

18.3 A Study of Speech Recognition System Robustness to Microphone Variations: Experiments in Phonetic Classification 995
Jane Chang and Victor Zue, Spoken Language Systems Group, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.

18.4 Isolated Word Recognition Using Models for Acoustic Phonetic Variability by Lombard Effect 999
Tadashi Suzuki, Kunio Nakajima and Yoshiharu Abe, Computer and Information Systems Laboratory, Mitsubishi Electric Corporation, 5-1-1, Ofuna, Kamakura, 247 Japan

18.5 A Source Generator Based Production Model for Environmental Robustness in Speech Recognition 1003
John H.L. Hansen, Brian D. Womack and Levent M. Arslan, Robust Speech Processing Laboratory, Dept. of Electrical Engineering, Duke University, Box 90291, Durham, North Carolina 27708-0291, U.S.A.

18.6 A Frequency-Weighted Continuous Density HMM for Noisy Speech Recognition 1007
Hiroshi Matsumoto and Hiroyuki Imose, Faculty of Engineering, Shinshu University, 500, Wakasato-cho, Nagano, 380 Japan

18.7 A Study on Adaptations of Cepstral and Delta Cepstral Coefficients for Noisy Speech Recognition 1011
Lee-Min Lee and Hsiao-Chuan Wang, Dept. of Electrical Engineering, National Tsing Hua University, Hsinchu, 30043, Taiwan

18.8 A Comparative Study of Feature Representations for Robust Speech Recognition in Adverse Environments 1015
K. K. Paliwal and B. S. Atal, Speech Research Department, AT&T Bell Laboratories, Murray Hill, NJ 07974, U.S.A.

18.9 ARDOSS: Auto Regressive Domain Spectral Subtraction for Robust Speech Recognition in Additive Noise 1019
Hugo Van Hamme, Lernout & Hauspie Speech Products N.V., Koning Albert I laan 64, B-1780, Wemmel, Belgium

18.10 Speech Recognition with Rapid Environment Adaptation by Spectrum Equalization 1023
Keizaburo Takagi, Hiroaki Hattori and Takao Watanabe, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan

18.11 Signal Processing for Robust Speech Recognition 1027
Richard M. Stern, Fu-Hua Liu, Pedro J. Moreno and Alejandro Acero, Dept. of Electrical and Computer Engineering and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.

18.12 A Comparison of Three Noisy Speech Recognition Approaches 1031
Olivier Siohan, Yifan Gong and Jean-Paul Haton, CRIN-CNRS & INRIA Lorraine, BP 239, 54506 Vandoeuvre-les-Nancy, France
WEDNESDAY MORNING Sep. 21
Session 19: Speech Analysis
Time: 09:00 to 12:15, September 21, 1994
Place: Room B
Chairpersons:
Gunnar Fant, Dept. of Speech Communication and Music Acoustics, School of Engineering, KTH, Sweden
Fumitada Itakura, Nagoya University, Japan
19.1 Nonlinear Speech Analysis Using the Teager Energy Operator with Application to Speech Classification under Stress 1035
Douglas A. Cairns and John H.L. Hansen, Robust Speech Processing Laboratory, Dept. of Electrical Engineering, Duke University, Box 90291, Durham, NC 27708-0291, U.S.A.

19.2 Analysis of Non-Linear Speech Generating Dynamics 1039
Paul A. Moakes and Steve W. Beet, Dept. of Electronic and Electrical Engineering, University of Sheffield, P.O. Box 600, Mappin Street, Sheffield, S1 4DU, U.K.

19.3 Withdrawn

19.4 Mel-Generalized Cepstral Analysis - A Unified Approach to Speech Spectral Estimation 1043
Kei-ichi Tokuda, Takao Kobayashi, Takashi Masuko and Satoshi Imai, Dept. of Electrical and Electronic Engineering, Tokyo Institute of Technology, Tokyo, 152 Japan

19.5 Combining Auditory Representations Using Fuzzy Sets 1047
I.R. Gransden and S.W. Beet, Dept. of Electronic and Electrical Engineering, University of Sheffield, P.O. Box 600, Mappin Street, Sheffield, S1 4DU, U.K.

19.6 SBCOR Spectrum Taking Autocorrelation Coefficients at Integral Multiples of 1/CF into Account 1051
Shoji Kajita and Fumitada Itakura, School of Engineering, Nagoya University, 1, Furo-cho, Chikusa-ku, Nagoya 464-01 Japan

19.7 Pitch Extraction from Root Cepstrum 1055
Hema A. Murthy, Dept. of Computer Science and Engineering, Indian Institute of Technology, Madras 600 036, India

19.8 Voice Parameter Estimation Using Sequential SVD and Wave Shaping Filter Bank 1059
Sung Hoon Hong, Sang Ki Kang and Sou Guil Ann, Dept. of Electronics Engineering, Seoul National University, San 56-1, Shillim-dong, Kwanak-gu, Seoul, 151-742, Korea

19.9 Self Excited Threshold Auto-Regressive Models of the Glottal Pulse and the Speech Signal 1063
Jean Schoentgen, Institute of Modern Languages and Phonetics, CP110, Universite Libre de Bruxelles, Avenue F. D. Roosevelt, 50, B-1050 Brussels, Belgium

19.10 Determination of Glottal Excitation Cycles for Voice Quality Analysis 1067
Wolfgang J. Hess, Institute of Communications Research and Phonetics, University of Bonn, Poppelsdorfer Allee 47, D-53115 Bonn, Germany

19.11 Strategies for Voice Separation Based on Harmonicity 1071
Alain de Cheveigne, Laboratoire de Linguistique Formelle, CNRS / Universite Paris 7, case 7003, 2 place Jussieu, 75251, France

19.12 Speech Analysis Technique for PSOLA Synthesis Based on Complex Cepstrum Analysis and Residual Excitation 1075
Yukio Mitome, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan
WEDNESDAY MORNING Sep. 21
Session 20: Prosody of Discourse and Dialogue -Production, Analysis, and Recognition-
Time: 09:00 to 12:15, September 21, 1994
Place: Room C
Chairpersons:
Hiroya Fujisaki, Dept. of Applied Electronics, Science University of Tokyo, Japan
Gosta Bruce, Dept. of Linguistics and Phonetics, Lund University, Sweden
20.1 Intonation Pattern with Focus and Related Muscle Activities in Tokyo Dialect 1079
Shigeru Kiritani, Kikuo Maekawa and Hajime Hirose, Research Institute of Logopedics & Phoniatrics, Faculty of Medicine, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, 113 Japan

20.2 The Effects of Contrastive Accent and Lexical Stress Upon Temporal Distribution in a Sentence 1083
Jianfen Cao, Institute of Linguistics, Chinese Academy of Social Sciences, 5 Jianguomennei Rd., Beijing 100732, China

20.3 Speech Rate and Syllable Timing in Spontaneous Speech 1087
Henrietta J. Cedergren and Helene Perreault, Dept. de Linguistique, Universite du Quebec a Montreal, C.P. 8888, Montreal, Quebec, H3C 3P8, Canada

20.4 An Experimental Phonetic Study of Speech Rhythm in Standard Korean 1091
Hyun-bok Lee, Nam-taek Jin, Cheol-jae Seong, Il-jin Jung and Seung-mie Lee, Seoul National University, Sinrim-dong, San-56, Gwanag-gu, Seoul 151-742, Korea

20.5 A Rhythm Theory for Spontaneous Speech: The Role of Vowel Amplitude in the Rhythmic Hierarchy 1095
Noriko Umeda and Toby Wedmore, Dept. of Linguistics, New York University, NY, 10003, U.S.A.

20.6 Modelling Swedish Prosody in a Dialogue Framework 1099
Gosta Bruce, Bjorn Granstrom, Kjell Gustafson, David House and Paul Touati, Dept. of Linguistics and Phonetics, Helgonabacken 12, S-22362 Lund, Sweden

20.7 Prosodic Characteristics of a Spoken Dialogue for Information Query 1103
Hiroya Fujisaki, Sumio Ohno, Masafumi Osame, Mayumi Sakata and Keikichi Hirose, Dept. of Applied Electronics, Science University of Tokyo, 2641, Yamazaki, Noda, 278 Japan

20.8 Analysis of Prosodic and Linguistic Features of Spontaneous Japanese Conversational Speech 1107
Shoichi Takeda, Yoshiyuki Itoh, Norifumi Sakuma and Kei Yokosato, Dept. of Information Systems, Teikyo University of Technology, 2289-23, Uruido, Ichihara, 290-01 Japan

20.9 Combining the Use of Duration and F0 in an Automatic Analysis of Dialogue Prosody 1111
Nick Campbell, ATR Interpreting Telecommunications Research Laboratories, 2-2 Hikari-dai, Seika-cho, Kyoto, 619-02 Japan

20.10 Improving Parsing by Incorporating 'Prosodic Clause Boundaries' into A Grammar 1115
G. Bakenecker, U. Block, A. Batliner, R. Kompe, E. Noth and P. Regel-Brietzmann, Siemens AG, Otto-Hahn-Ring 6, 81730 München, Germany

20.11 A Prosodic Recognition Module Based on Linear Discriminant Analysis 1119
Andrew Hunt, Speech Technology Research Group, Dept. of Electrical Engineering, University of Sydney, NSW, 2006, Australia

20.12 Use of Prosodic Features in the Recognition of Continuous Speech 1123
Keikichi Hirose, Atsuhiro Sakurai and Hiroyuki Konno, Dept. of Electronic Engineering, Faculty of Engineering, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113 Japan
WEDNESDAY MORNING Sep. 21
Session 21: Spoken Language Cognition and Its Disorders
Time: 09:00 to 12:15, September 21, 1994
Place: Room D (Poster)
Chairpersons:
Pierre A. Halle, Laboratoire de Psychologie Experimentale, CNRS-Paris V, France
Kiyoshi Honda, ATR Human Information Processing Research Laboratories, Japan
21.1 The Inconsistency of Consistency Effects in Reading: The Case of Japanese Kanji Phonology 1127
Taeko Nakayama Wydell and Brian Butterworth, Dept. of Psychology, University College London, Gower Street, London WC1X 6BT, U.K.

21.2 An Acoustic Analysis of Unreleased Stop Consonants in Word-Final Position 1131
Valter Ciocca, Livia Wong and Lydia K.H. So, Dept. of Speech & Hearing Sciences, University of Hong Kong, 5/F, Prince Philip Dental Hospital, 34 Hospital Road, Hong Kong

21.3 Speech Segmentation in Dutch: No Role for the Syllable 1135
Jean Vroomen and Beatrice de Gelder, Dept. of Psychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands

21.4 Do Ambiguous Fricatives Rhyme? Lexical Involvement in Phonetic Decision-Making Depends on Task Demands 1139
James M. McQueen, Max-Planck-Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, Netherlands

21.5 Moraic Segmentation in Japanese Revisited 1143
P.A. Halle and J. Segui, Laboratoire de Psychologie Experimentale, CNRS-Paris V, 28 rue Serpente, 75006 Paris, France

21.6 Prosodic Information and Processing of Temporarily Ambiguous Constructions in Japanese 1147
Jennifer J. Venditti and Hiroko Yamashita, Dept. of Linguistics, Ohio State University, 222 Oxley Hall, 1712 Neil Avenue, Columbus, OH 43210, U.S.A.
21.7 The Activation of Ambiguous Words During Spoken Language Comprehension
Patrizia Tabossi, R. de Almeida, M. Tanenhaus, M. Spivey-Knowlton and F. Zardon, Facoltà di Lettere e Filosofia, Università di Ferrara, Via Savonarola, 38 Ferrara, Italy

21.8 Role of Prosodic Features in the Human Process of Speech Perception 1151
Nobuaki Minematsu and Keikichi Hirose, Dept. of Electronic Engineering, Faculty of Engineering, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113 Japan

21.9 Limitations of Lip-Reading Advantage by Desynchronizing Visual and Auditory Information in Speech 1155
Masahiro Hashimoto and Hideaki Seki, Human Information Laboratory, University of Occupational and Environmental Health, 1-1, Iseigaoka, Yahatanishi-ku, Kitakyushu, 807 Japan

21.10 Word Meaning Deafness: Effects of Word Type 1159
Sue Franklin, Judy Turner and Julie Morris, Dept. of Psychology, University of York, Heslington, York, YO1 5DD, U.K.

21.11 Concept and Grammar Acquisition Based on Combining with Visual and Auditory Information 1163
Mikio Masukata and Seiichi Nakagawa, Dept. of Information & Computer Sciences, Toyohashi University of Technology, 1-1, Tenpaku, Toyohashi, 441 Japan

21.12 The Punch and Judy Man: A Study of Phonological/Phonetic Variation 1167
Gavin J. Dempster, Sheila M. Williams and Sandra P. Whiteside, Centre for Language Engineering, Department of Computer Science and Speech Science Unit, University of Sheffield, Sheffield, S10 2TN, U.K.

21.13 The Auditory Perception of Children's Age and Sex 1171
Hartmut Traunmuller and Renee van Bezooijen, Institute of Linguistics, Stockholm University, S-106 91 Stockholm, Sweden

21.14 Are Representations Used for Talker Identification Available for Talker Normalization? 1175
James S. Magnuson, Reiko A. Yamada and Howard C. Nusbaum, ATR Human Information Processing Laboratories, 2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan

21.15 Non-physiological Differences Between Male and Female Speech: Evidence from the Delayed F0 Fall Phenomenon in Japanese 1179
Yoko Hasegawa and Kazue Hata, Dept. of East Asian Languages, University of California at Berkeley, Berkeley, CA 94720, U.S.A.

21.16 Speaker Individualities in Speech Spectral Envelopes 1183
Tatsuya Kitamura and Masato Akagi, Japan Advanced Institute of Science & Technology, Hokuriku, 15, Asahi-dai, Tatsunokuchi, Nomi, Ishikawa, 923-12 Japan

21.17 Prosodic Imitation: Productional Results 1187
Duncan Markham, Dept. of Linguistics & Phonetics, Lund University, Helgonabacken 12, S-223 62 Lund, Sweden

21.18 Articulatory Description of Affricate Production in Speech Disordered Children Using Electropalatography (EPG) 1191
Fiona Gibbon and Bill Hardcastle, Dept. of Speech Language Sciences, Queen Margaret College, Edinburgh, U.K.

21.19 A Phonetic and Phonological Analysis of Stuttering in Japanese 1195
Akira Ujihira and Haruo Kubozono, Dept. of Japanese, Osaka University of Foreign Studies, Minoo, Osaka 562 Japan

21.20 Perception, Production and Training of New Consonant Contrasts in Children with Articulation Disorders 1199
Donald G. Jamieson and Susan Rvachew, HHCRU, University of Western Ontario, London, ON, Canada N6G 1H1 and Speech Dept., Alberta Children's Hospital, Calgary, AB, Canada T2T 5C7

21.21 Cognitive Processes of Speech Sounds in a Brain-Damaged Patient 1203
Sachiko Nakakoshi, Atsushi Mizobuchi and Hiroto Katori, Bokutoh Hospital, 4-23-15, Kohtoh-bashi, Sumida-ku, Tokyo, 130 Japan

21.22 A Cross-Linguistic Study of Lateral /s/ Using Electropalatography (EPG) 1207
N. Suzuki, H. Dent, M. Wakumoto, F. Gibbon, K. Michi and W. Hardcastle, 1st Dept. of Oral & Maxillofacial Surgery, Showa University, 2-1-1, Kitasenzoku, Ohta-ku, Tokyo, 145 Japan

21.23 Prosody of Recurrent Utterances in Aphasic Patients 1211
Junko Matsubara, Toshihiro Kashiwagi, Morio Kohno, Hirotaka Tanabe and Asako Kashiwagi, Dept. of Rehabilitation, Kyowakai Hospital, 1-24-1, Kishibe-kita, Suita, 564 Japan

21.24 Intonation and Language Teaching 1215
Virginia LoCastro, International Christian University (I.C.U.), 3-10-2, Osawa, Mitaka, 181 Japan

21.25 Loss and Recovery of Dialectal Features in a Case of Pure Anarthria: A Longitudinal Study
Daniele Archambault and Michele Bergeron, Linguistique et Traduction, Universite de Montreal, C.P. 6128, Succ. A, Montreal, Quebec H3C 3J7, Canada

21.26 A Computer-aided Phonetic Instruction System for South-Asian Languages 1219
Tsuyoshi Nara and Peri Bhaskararao, ILCAA, Tokyo University of Foreign Studies, 4-51-21, Nishigahara, Kita-ku, Tokyo, 114 Japan

21.27 Rhythm Processing by a Patient with Pure Anarthria: Some Suggestions on the Role of Rhythm in Spoken Language Processing 1223
Morio Kohno, Junko Matsubara, Katsuko Higuchi and Toshihiro Kashiwagi, Kobe City University of Foreign Studies, 9-1, Gakuen-Higashi-machi, Nishi-ku, Kobe, 651-21 Japan

21.28 Japanese Accentuation of Foreign Learners and Its Interlanguage 1227
Nobuko Yamada, Faculty of Humanities, Ibaraki University, 2-1-1, Bunkyo, Mito, 310 Japan

21.29 Mechanisms Producing Recurring Utterances in a Patient with Slowly Progressive Aphasia 1231
Masato Kaneko, Dept. of Rehabilitation, Tokyo Metropolitan Matsuzawa Hospital, 2-1-1, Kamikitazawa, Setagaya-ku, 156 Japan

21.30 Hyper Media for Spoken Language Education 1235
Kiyokata Katoh, Takako Ayusawa, Yukihiro Nishinuma, Richard Harrison and Kikuko Yamashita, Tokyo Gakugei University, 4-1-1, Nukuikitamachi, Koganei, Tokyo, 184 Japan

21.31 A Text-to-Speech System for Application by Visually Handicapped and Illiterate 1239
P. Bhaskararao, Venkata N. Peri and Vishwas Udpikar, Deccan College, Pune 411006, India

21.32 Semantic and Pragmatic Effects in Speech Production: A Developmental Investigation
Jan Charles-Luce and Elvira Ragonese, Dept. of Communicative Disorders and Sciences and Center for Cognitive Science, State University of New York, Buffalo, NY 14260, U.S.A.
WEDNESDAY MORNING Sep. 21
Session 22: Spoken Language Systems and Assessment
Time: 09:00 to 12:15, September 21, 1994
Place: Room E (Poster)
Chairpersons:
Pietro Laface, Dipartimento di Automatica e Informatica, Politecnico di Torino, Italy
Kazuyo Tanaka, Speech Processing Section, Electrotechnical Laboratory, Japan
22.1 Talker Localization and Speech Recognition Using a Microphone Array and a Cross-Power Spectrum Phase Analysis 1243
D. Giuliani, M. Omologo and P. Svaizer, Istituto per la Ricerca Scientifica e Tecnologica, I-38050 Povo di Trento, Italy

22.2 System of Microphone Arrays and Neural Networks for Robust Speech Recognition in Multimedia Environments 1247
Qiguang Lin, Ea-Ee Jan, Chi Wei Che and Bert de Vries, CAIP Center, Rutgers University, Piscataway, NJ 08855, U.S.A.

22.3 Estimating Performance of Pipelined Spoken Language Translation Systems 1251
Manny Rayner, David Carter, Patti Price and Bertil Lyberg, SRI International, Cambridge Computer Science Research Center, Suite 23, Millers Yard, Cambridge CB2 1RQ, U.K.

22.4 Generation of Multi-Syllable Nonsense Words for the Assessment of Korean Text-to-Speech System 1255
Cheol-Woo Jo, Kyung-Tae Kim and Yung-Ju Lee, Acoustic and Speech Group, Dept. of Control and Instrumentation Engineering, Changwon National University, Changwon, KyeongNam 641-773, Korea

22.5 Voice Map: a Dialogue-based Spoken Language Information Access System 1259
Aruna Bayya, Michael Durian, Lori Meiskey, Rebecca Root, Randall Sparks and Mark Terry, U S West Technologies, 4001 Discovery Drive, Boulder, CO 80303, U.S.A.

22.6 Development of a Document Preparation System with Speech Command Using EDR Electronic Dictionaries 1263
Shigenobu Seto and Kazuhiro Kimura, Japan Electronic Dictionary Research Institute, Ltd. (EDR), Kawasaki, 210 Japan

22.7 Radiological Reporting by Speech Recognition: The A.Re.S System 1267
Bianca Angelini, Giuliano Antoniol, Fabio Brugnara, Mauro Cettolo, Marcello Federico, Roberto Fiutem and Gianni Lazzari, Istituto per la Ricerca Scientifica e Tecnologica, I-38100, Povo, Trento, Italy

22.8 A Spoken Language System for Information Retrieval 1271
S. K. Bennacef, H. Bonneau-Maynard, J. L. Gauvain, L. Lamel and W. Minker, LIMSI-CNRS, BP 133, 91403 Orsay cedex, France

22.9 Recogniser Response Modeling from Testing on Series of Minimal Word Pairs 1275
Børge Lindberg, Center for Person Kommunikation, Aalborg University, Fredrik Bajers Vej 7, DK-9220 Aalborg, Denmark

22.10 A Study on the Problems for Application of Voice Interface Based on Word Recognition 1279
Toshimitsu Minowa, Yasuhiko Arai, Hisanori Kanasashi, Tatsuya Kimura and Takuji Kawamoto, Matsushita Communication Industrial Co., Ltd., 600, Saedo-cho, Midori-ku, Yokohama, 226 Japan

22.11 A UI Design Support Tool for Multimodal Spoken Dialogue System 1283
Hiroyuki Kamio, Mika Koorita, Hiroshi Matsu'ura, Masafumi Tamura and Tsuneo Nitta, Multimedia Engineering Laboratories, Toshiba Corporation, 70, Yanagi-cho, Saiwai-ku, Kawasaki, 210 Japan

22.12 Multimodal Drawing Tool Using Speech, Mouse and Keyboard 1287
Takuya Nishimoto, Nobutoshi Shida, Tetsunori Kobayashi and Katsuhiko Shirai, Dept. of Electrical Engineering, Waseda University, 3-4-1, Okubo, Shinjuku-ku, Tokyo 169 Japan

22.13 Generation of Non-entry Words from Entries of the Natural Speech Database 1291
Yasuhiko Arai, Toshimitsu Minowa, Hiroko Yoshida, Hirofumi Nishimura, Hiroyuki Kamata and Takashi Honda, Matsushita Communication Industrial Co., Ltd., 600, Saedo-cho, Midori-ku, Yokohama, 226 Japan

22.14 MECALLSAT: A Multimedia Environment for Computer-Aided Language Learning Incorporating Speech Assessment Techniques 1295
Pedro Gomez, Daniel Martinez, Victor Nieto and Victoria Rodellar, Departamento de Arquitectura y Tecnologia de Sistemas Informaticos, Universidad Politecnica de Madrid, Campus de Montegancedo, s/n, Boadilla del Monte, 28660 Madrid, Spain

22.15 Improving Recognizer Acceptance Through Robust, Natural Speech Repair 1299
Arthur E. McNair and Alex Waibel, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, U.S.A.

22.16 User Acceptance of Automatic Speech Recognition in Telephone Services 1303
David Fay, Principal Member of Technical Staff, GTE Laboratories Inc., 40, Sylvan Road, Waltham, MA 02254, U.S.A.

22.17 Identifying Salient Usability Attributes for Automated Telephone Services 1307
Stephen Love, Rachael T. Dutton, John C. Foster, Mervyn A. Jack and F. W. M. Stentiford, Department of Electrical Engineering, The University of Edinburgh, 80 South Bridge, Edinburgh, U.K.

22.18 Word Complexity Measures in the Context of Speech Intelligibility Tests 1311
Arnd Mariniak, Lehrstuhl fur Allgemeine Elektrotechnik und Akustik, Ruhr-Universitat Bochum, 44780 Bochum, Germany

22.19 Recognition Accuracy Methods and Measures 1315
Frank H. Wu and Monica A. Maries, U S West Technologies, 4001 Discovery Drive, Boulder, CO 80303, U.S.A.

22.20 A Feature-Profile for Application-Specific Speech Synthesis Assessment and Evaluation 1319
Ute Jekosch and Louis C.W. Pols, Lehrstuhl fur allgemeine Elektrotechnik und Akustik, Ruhr-Universitat Bochum, 44780 Bochum, Germany

22.21 A Description Model for Speech Assessment Tests with Subjects 1323
Thomas Hegehofer, Lehrstuhl fur allgemeine Elektrotechnik und Akustik, Ruhr-Universitat Bochum, 44780 Bochum, Germany

22.22 VLSI Implementation of a Robust Hybrid Parameter-Extractor and Neural Network for Speech Decoding 1327
Victoria Rodellar, Antonio Diaz, Jose Ma Gallardo, Virginia Peinado, Victor Nieto and Pedro Gomez, Departamento de Arquitectura y Tecnologia de Sistemas Informaticos, Universidad Politecnica de Madrid, Campus de Montegancedo, s/n, Boadilla del Monte, 28660 Madrid, Spain

22.23 An Objective Measure for Qualitatively Assessing Low-bit-rate Coded Speech 1331
Toshiro Watanabe and Shinji Hayashi, NTT Human Interface Laboratories, 3-9-11, Midori-cho, Musashino, 180 Japan

22.24 Performance Comparison of Recognition Systems Based on the Akaike Information Criterion 1335
Kazuhiko Ozeki, Dept. of Computer Science and Information Mathematics, The University of Electro-Communications, 1-5-1, Chofugaoka, Chofu, 182 Japan

22.25 Robust Speech Recognition in the Automobile 1339
Nobutoshi Hanai and Richard M. Stern, Dept. of Electrical and Computer Engineering and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.

22.26 On the Development of a Dictation Machine for Spanish: DIVO 1343
Javier Macias-Guarasa, Manuel A. Leandro, Jose Colas, Alvaro Villegas, Santiago Aguilera and Jose M. Pardo, Dept. de Ingenieria Electronica, E.T.S.I. Telecomunicacion, Ciudad Universitaria, s/n, 28040 Madrid, Spain

22.27 Environmental Robustness in Automatic Speech Recognition Using Physiologically Motivated Signal Processing 1347
Yoshiaki Ohshima and Richard M. Stern, Jr., Tokyo Research Laboratory, IBM Japan, Ltd., 1623-14, Shimotsuruma, Yamato, 242 Japan
WEDNESDAY AFTERNOON Sep. 21
Session 23: Large Vocabulary/Speaker Independent Speech Recognition
Time: 13:45 to 17:00, September 21, 1994
Place: Room A
Chairpersons:
James Glass, Laboratory for Computer Science, Massachusetts Institute of Technology, U.S.A.
Takao Watanabe, Human Language Research Laboratories, Information Technology Research Laboratories, NEC Corporation, Japan
23.1 A Dynamic Network Decoder Design for Large Vocabulary Speech Recognition 1351
V. Valtchev, J.J. Odell, P.C. Woodland and S.J. Young, Engineering Dept., Cambridge University, Trumpington Street, Cambridge CB2 1PZ, U.K.

23.2 A Word Graph Algorithm for Large Vocabulary, Continuous, Speech Recognition 1355
Hermann Ney and Xavier Aubert, Lehrstuhl fur Informatik VI, RWTH Aachen, University of Technology, D-52056 Aachen, Germany

23.3 Fast Match for Segment-based Large Vocabulary Continuous Speech Recognition 1359
Mike Phillips and David Goddeau, Spoken Language Systems Group, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.

23.4 Multiple-Pronunciation Lexical Modeling in a Speaker Independent Speech Understanding System 1363
Chuck Wooters and Andreas Stolcke, International Computer Science Institute, 1947 Center St., Suite 600, Berkeley, CA 94704, U.S.A.

23.5 MMIE Training for Large Vocabulary Continuous Speech Recognition 1367
Yves Normandin, Roxane Lacouture and Regis Cardin, Centre de Recherche Informatique de Montreal, 1801, McGill College, Suite 800, Montreal H3A 2N4, Quebec, Canada

23.6 An Intelligent and Efficient Word-Class-Based Chinese Language Model for Mandarin Speech Recognition with Very Large Vocabulary 1371
Yen-Ju Yang, Sung-Chien Lin, Lee-Feng Chien, Keh-Jiann Chen and Lin-Shan Lee, Dept. of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan

23.7 Tree-Structured Speaker Clustering for Speaker-Independent Continuous Speech Recognition 1375
Tetsuo Kosaka, Shoichi Matsunaga and Shigeki Sagayama, ATR Interpreting Telecommunications Research Laboratories, Hikari-dai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan

23.8 Compact-Size Speaker Independent Speech Recognizer for Large Vocabulary Using "COMPATS" Method 1379
Tatsuya Kimura, Hiroyasu Kuwano, Akira Ishida, Taisuke Watanabe and Shoji Hiraoka, Matsushita Research Institute Tokyo, Inc., 3-10-1, Higashimita, Tama-ku, Kawasaki, 214 Japan

23.9 A Keyword-Spotting Unit for Speaker-Independent Spontaneous Speech Recognition 1383
Yasuyuki Masai, Jun'ichi Iwasaki, Shin'ichi Tanaka, Tsuneo Nitta, Masahiro Yao, Tomohiro Onogi and Akira Nakayama, Multimedia Engineering Laboratories, Toshiba Corporation, 70, Yanagi-cho, Saiwai-ku, Kawasaki, 210 Japan

23.10 KT-Stock: A Speaker-Independent Large-Vocabulary Speech Recognition System over the Telephone 1387
Myoung-Wan Koo, Sang-Kyu Park, Kyung-Tae Kong and Sam-Joo Doh, Software Research Laboratories, Korea Telecom, 17, Umyon-dong, Seocho-gu, Seoul, 137-792 Korea

23.11 Speaker Independent Continuous Speech Recognition Using an Acoustic-Phonetic Italian Corpus 1391
B. Angelini, F. Brugnara, D. Falavigna, D. Giuliani, R. Gretter and M. Omologo, Istituto per la Ricerca Scientifica e Tecnologica, I-38050 Povo di Trento, Italy
WEDNESDAY AFTERNOON Sep. 21
Session 24: Perception and Structure of Spoken Language
Time: 13:45 to 17:00, September 21, 1994
Place: Room B
Chairpersons:
Dennis Norris, MRC Applied Psychology Unit, U.K.
Yo'ichi Tohkura, ATR Human Information Processing Research Laboratories, Japan
24.1 The Auditory Image Model as a Preprocessor for Spoken Language 1395
Roy D. Patterson, Timothy R. Anderson and Michael Allerhand, Medical Research Council, Applied Psychology Unit, 15 Chaucer Rd., Cambridge, CB2 2EF, U.K.

24.2 Effects of Natural Auditory Feedback on Fundamental Frequency Control 1399
Hideki Kawahara, ATR Human Information Processing Research Laboratories, 2-2, Hikari-dai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan

24.3 Unified Architecture for Auditory Scene Analysis and Spoken Language Processing 1403
Tomohiro Nakatani, Takeshi Kawabata and Hiroshi G. Okuno, NTT Basic Research Laboratories, 3-1, Morinosato-Wakamiya, Atsugi, 243-01 Japan

24.4 Rhythmic Structure of Word Blends in English 1407
Anne Cutler and Duncan Young, Max-Planck-Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, Netherlands

24.5 Perception for VCV Speech Uttered Simultaneously or Sequentially by Two Talkers 1411
Kazuhiko Kakehi and Kazumi Kato, Graduate School of Human Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-01 Japan

24.6 Perception of Time-Compressed/Expanded Japanese Words Depends on the Number of Perceived Phonemes 1415
Shigeaki Amano, NTT Basic Research Laboratories, 3-1, Morinosato-Wakamiya, Atsugi, 243-01 Japan

24.7 The Effect of Overlap Position in Phonological Priming Between Spoken Words 1419
Monique Radeau, Juan Segui and Jose Morais, Laboratoire de Psychologie Experimentale, Universite Libre de Bruxelles, 117, Avenue A. Buyl, 1050 Brussels, Belgium

24.8 A Cognitive Model of Inferring Unknown Words and Uncertain Sound Sequence 1423
Masuzo Yanagida, Dept. of Knowledge Engineering, Doshisha University, 1-3, Miyako-dani, Tatara, Tanabe-cho, Tsuzuki-gun, Kyoto, 610-03 Japan

24.9 A Moraic Nasal and a Syllable Structure in Japanese 1427
Takashi Otake and Kiyoko Yoneyama, Dokkyo University, 1-1, Gakuen-cho, Soka, 340 Japan

24.10 Temporal Organization of Bimodal Speech Information 1431
Paula M. T. Smeele, Anne C. Sittig and Vincent J. van Heuven, Dept. of Industrial Design Engineering, Delft University of Technology, Jaffalaan 9, 2628 BX Delft, Netherlands

24.11 The Use of Auditory and Phonetic Memories in the Discrimination of Stop Consonants under Audio-Visual Presentation 1435
Sumi Shigeno, Kitasato University, Sagamihara, 228 Japan
WEDNESDAY AFTERNOON Sep. 21
Session 25: Voice Quality -Its Characterization and Control-
Time: 13:45 to 17:00, September 21, 1994
Place: Room C
Chairpersons:
Hideki Kasuya, Faculty of Engineering, Utsunomiya University, Japan
Louis C.W. Pols, Institute of Phonetic Science/IFOTT, University of Amsterdam, Netherlands
25.1 Controlling Voice Quality of Synthetic Speech 1439
Inger Karlsson, Dept. of Speech Communication and Music Acoustics, KTH, Box 70014, S-100 44 Stockholm, Sweden

25.2 Voice Quality of Synthetic Speech: Representation and Evaluation 1443
Louis C.W. Pols, Institute of Phonetic Science/IFOTT, University of Amsterdam, Herengracht 338, 1016 CG, Amsterdam, Netherlands

25.3 The Role of F0 and Duration in Signalling Affect in Japanese: Anger, Kindness and Politeness 1447
Etsuko Ofuka, Helene Valbret, Mitch Waterman, Nick Campbell and Peter Roach, Speech Laboratory, Dept. of Psychology, University of Leeds, LS2 9JT, U.K.

25.4 Voice Source Parameters in Continuous Speech. Transformation of LF-Parameters 1451
Gunnar Fant, Anita Kruckenberg, Johan Liljencrants and Mats Bavegard, Dept. of Speech Communication and Music Acoustics, KTH, Box 70014, Stockholm 10044, Sweden

25.5 Speaking Style Conversion by Changing Prosodic Parameters and Formant Frequencies 1455
Masanobu Abe and Hideyuki Mizuno, NTT Human Interface Laboratories, 1-2356, Take, Yokosuka, 238-03 Japan

25.6 Voice Source and Vocal Tract Characteristics Associated with Speaker Individuality 1459
Hideki Kasuya, Xuan Tan and Chang-Sheng Yang, Faculty of Engineering, Utsunomiya University, 2753, Ishii-machi, Utsunomiya, 321 Japan

25.7 Phoneme-Level Voice Individuality Used in Speaker Recognition 1463
Sadaoki Furui and Tomoko Matsui, NTT Human Interface Laboratories, 3-9-11, Midori-cho, Musashino, 180 Japan

25.8 Controllability of Voice Quality: Evidence from Physiological and Acoustic Observations 1467
Satoshi Imaizumi, Hartono Abdoerrachman and Seiji Niimi, Research Institute of Logopedics & Phoniatrics, Faculty of Medicine, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, 113 Japan

25.9 Spectral Correlates of Breathiness and Roughness for Different Types of Vowel Fragments 1471
Guus de Krom, Research Institute for Language & Speech (OTS), University of Utrecht, Trans 10, 3512 JK Utrecht, Netherlands

25.10 Analysis of Pitch Dependence of Pharyngeal, Faucal, and Larynx-height Voice Quality Settings 1475
John H. Esling, Lynn Marie Heap, Roy C. Snell and B. Craig Dickson, Dept. of Linguistics, University of Victoria, P.O. Box 3045, Victoria, B.C. V8W 3P4, Canada
WEDNESDAY AFTERNOON Sep. 21
Session 26: Neural Network and Connectionist Approaches
Time: 13:45 to 17:00, September 21, 1994
Place: Room D (Poster)
Chairpersons:
Allen L. Gorin, AT&T Bell Laboratories, U.S.A.
Shigeru Katagiri, ATR Human Information Processing Laboratories, Japan
26.1 Minimum-error-rate Training of Predictive NeuralNetwork Models 1479Kyung Min Na, Jae Yeol Rheem and Sou Guil Ann,Dept. of Electronics Engineering, Seoul NationalUniversity, San 56-1, Shinlim-dong, Kwanak-gu,Seoul 151-742, Korea
26.2 Spoken Language Acquisition for Automated CallRouting 1483A. L. Gorin, H. Hanek, R. Rose and L. Miller, AT&TBell Laboratories, 600, Mountain Avenue, P.O.Box636, Murray Hill, NJ 07974-0636, U.S.A.
26.3 A Speech Recognition System Using both Auditory and Afferent Pathway Signal Processing 1487 Eliathamby Ambikairajah, Owen Friel and William Millar, Dept. of Electronic Engineering, Regional Technical College, Athlone, Ireland
26.4 Using Gamma Filters to Model Temporal Dependencies in Speech 1491 Steve Renals and Mike Hochberg, Engineering Dept., Cambridge University, Cambridge CB2 1PZ, U.K.
26.5 Phone Recognition Using a Transition-Controlled, Segment-Based DP/MLP Hybrid 1495 Jan Verhasselt and Jean-Pierre Martens, Electronics & Information Systems Dept., University of Gent, St. Pietersnieuwstraat 41, B-9000 Gent, Belgium
26.6 Large Vocabulary Continuous Speech Recognition Using a Hybrid Connectionist-HMM System 1499 Mike M. Hochberg, Steve J. Renals, A. J. Robinson and D. J. Kershaw, Engineering Dept., Cambridge University, Trumpington Street, Cambridge CB2 1PZ, U.K.
26.7 A Multi-State NN/HMM Hybrid Method for High Performance Speech Recognition 1503 Dong Yu, Taiyi Huang and Dao Wen Chen, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing 100080, China
26.8 A Continuous HMM Based Preprocessor for Modular Speech Recognition Neural Networks 1507 Fikret S. Gurgen, J. M. Song and R. W. King, Electrical Eng. Dept., Sydney University, NSW 2006, Australia
26.9 A System Integrating Connectionist and Symbolic Approaches for Spoken Language Understanding 1511 Ying Cheng, Paul Fortier and Yves Normandin, Centre de Recherche Informatique de Montreal (CRIM), Suite 800, 1801, McGill College Ave., Montreal H3A 2N4, Quebec, Canada
26.10 Recent Work in Hybrid Neural Networks and HMM Systems in CSR Tasks 1515 Xavier Menendez-Pidal, Javier Ferreiros, Ricardo de Cordoba and Jose M. Pardo, Dept. de Ingenieria Electronica, ETSI Telecomunicacion, Universidad Politecnica de Madrid, Ciudad Universitaria, s/n, 28040, Madrid, Spain
26.11 Hidden Markov Models and Selectively Trained Neural Networks for Connected Confusable Word Recognition 1519 Jean-Francois Mari, Dominique Fohr, Yolande Anglade and Jean-Claude Junqua, CRIN-CNRS & INRIA Lorraine, B.P. 239, F-54506, Vandoeuvre-les-Nancy Cedex, France
26.12 Modeling Dynamics in Connectionist Speech Recognition - The Time Index Model 1523 Yochai Konig and Nelson Morgan, International Computer Science Institute, 1947 Center St., Suite 600, Berkeley, CA 94704, U.S.A.
26.13 Mandarin Syllables Recognition by Subsyllables Dynamic Neural Network 1527 Dao Wen Chen, Xiao Dong Li, San Zhu, Dong Xin Xu and Taiyi Huang, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing 100080, China
26.14 Evaluation of Phonetic Feature Recognition with a Time-Delay Neural Network 1531 Shigeki Okawa, Christoph Windheuser, Frederic Bimbot and Katsuhiko Shirai, Dept. of Electrical Engineering, Waseda University, 3-4-1, Okubo, Shinjuku-ku, Tokyo 169 Japan
26.15 A Self Organizing Feature Map Based on the Fisher Discriminant 1535 E. Monte and J. Hernando, E.T.S.I. Telecomunicacio, P.O. Box 30002, 08080 Barcelona, Spain
26.16 Using Wavelet Dyadic Grids and Neural Networks for Speech Recognition 1539 Richard F. Favero and Fikret Gurgen, Speech Technology Research Group, Dept. of Electrical Engineering, University of Sydney, Sydney, NSW 2006, Australia
26.17 A Normalization Method of Prediction Error for Neural Networks 1543 Hiroaki Hattori, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan
26.18 Recurrent Neural Network Word Models for Small Vocabulary Speech Recognition 1547 Philippe Le Cerf and Dirk Van Compernolle, K.U. Leuven-E.S.A.T., Kardinaal Mercierlaan 94, B-3001 Heverlee, Belgium
26.19 A Novel Fuzzy Partition Model Architecture for Classifying Dynamic Patterns 1551 Yoshinaga Kato and Shigeru Katagiri, ATR Interpreting Telecommunications Research Laboratories, 2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
26.20 Handling Missing Data in Speech Recognition 1555 Martin Cooke, Phil Green and Malcolm Crawford, Dept. of Computer Science, University of Sheffield, Sheffield S10 2TN, U.K.
26.21 A New Probabilistic Framework for Connectionist Time Alignment 1559 Patrick Haffner, France Telecom, Centre National d'Etudes des Telecommunications, CNET/LAA/TSS/RCP, BP 40, 22301 Lannion, France
26.22 A Speech Recognition Model Using Internal Degrees of Freedom 1563 Ken-ichi Iso, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan
26.23 Adaptation of Neural Network Model: Comparison of Multilayer Perceptron and LVQ 1567 Dongxin Xu, Dao Wen Chen, Qian Ma, Bo Xu and Taiyi Huang, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing 100080, China
26.24 Simplified Sub-Neural-Networks for Accurate Phoneme Recognition 1571 Takuya Koizumi, Shuji Taniguchi, Ken-ichi Hattori and Mikio Mori, Dept. of Information Science, Fukui University, 3-9-1, Bunkyo, Fukui, 910 Japan
26.25 A Neural Network for Phonetically Decoding the Speech Trace 1575 Victoria Rodellar, Victor Nieto, Pedro Gomez, Daniel Martinez and Mercedes Perez, Dept. Arquitectura y Tecno. de Sistemas Inform., Universidad Politecnica de Madrid, Campus de Montegancedo s/n, Boadilla del Monte, 28660 Madrid, Spain
26.26 Noise Robust Speech Recognition Using a Dynamic-Cepstrum 1579 Kiyoaki Aikawa and Tsuyoshi Saito, ATR Human Information Processing Research Laboratories, 2-2, Hikari-dai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
WEDNESDAY AFTERNOON Sep. 21
Session 27: Speech Analysis and Enhancement
Time: 13:45 to 17:00, September 21, 1994
Place: Room E (Poster)
Chairpersons: Hisashi Wakita, Speech Technology Laboratory, Panasonic Technologies Inc., U.S.A.; Hiroshi Omura, Electrotechnical Laboratory, Japan
27.1 Telephone-Band Speech Enhancement Based on the Fundamental Frequency Component Compensation 1583 Toshiyuki Aritsuka and Yoshito Nejime, Central Research Laboratory, Hitachi Ltd., 1-280, Higashi-koigakubo, Kokubunji, 185 Japan
27.2 Reduction of Noise Level by SPAD (Speech Processing System by Use of Auto-Difference Function) 1587 Nobuyuki Kunieda, Tetsuya Shimamura, Jouji Suzuki and Hiroyuki Yashima, Dept. of Information and Computer Sciences, Saitama University, 255, Shimo-Okubo, Urawa, 338 Japan
27.3 An Algorithm to Reconstruct Wideband Speech from Narrowband Speech Based on Codebook Mapping 1591 Yuki Yoshida and Masanobu Abe, NTT Human Interface Laboratories, 1-2356, Take, Yokosuka, 238-03 Japan
27.4 An HMM Based Cepstral-Domain Speech Enhancement System 1595 C. W. Seymour and M. Niranjan, Engineering Department, Cambridge University, Trumpington Street, Cambridge, CB2 1PZ, U.K.
27.5 Voice Adaptation Using Multi-Functional Transformation with Weighting by Radial Basis Function Networks 1599 Naoto Iwahashi and Yoshinori Sagisaka, Sony Research Center, 6-7-35, Kita-Shinagawa, Shinagawa-ku, 141 Japan
27.6 A Dynamic-Window Weighted-RMS Averaging Filter Applied to Speaker Identification 1603 Hong Tang, Xiaoyuan Zhu, Iain Macleod, Bruce Millar and Michael Wagner, TRUST Project, Research School of Information Sciences and Engineering, Australian National University, Canberra, ACT, 0200, Australia
27.7 Quality Enhancement of Band Limited Speech by Filtering and Multirate Techniques 1607 Hiroshi Yasukawa, NTT Transmission Systems Laboratories, Yokosuka, 238-03 Japan
27.8 Characteristics of Multi-Layer Perceptron Models in Enhancing Degraded Speech 1611 T. T. Le, J. S. Mason and T. Kitamura, Dept. of Electrical & Electronic Engineering, University College of Swansea, Swansea, SA2 8PP, U.K.
27.9 A Time-Frequency Analysis Technique for Speech Recognition Signal Processing 1615 Adam B. Fineberg and Kevin C. Yu, Apple Computer, Inc., One Infinite Loop, Cupertino, CA 95014, U.S.A.
27.10 Estimation of the Glottal Pulseform Based on Discrete All-Pole Modelling 1619 Paavo Alku and Erkki Vilkman, Acoustics Laboratory, Helsinki University of Technology, Otakaari 5 A, SF-02150 Espoo, Finland
27.11 Analysis and Detection of Double Talk in Telephone Dialogs 1623 Hiroyuki Nishi and Mikio Kitai, NTT Human Interface Laboratories, 1-2356, Take, Yokosuka, 238-03 Japan
27.12 A Self-Learning Approach to Transcription of Danish Proper Names 1627 Ove Andersen and Paul Dalsgaard, Center for Person Kommunikation, Aalborg University, Fredrik Bajers Vej 7, DK-9220 Aalborg, Denmark
27.13 A Time-Varying Analysis Based on Analytic Speech Signals 1631 Eisuke Horita, Yoshikazu Miyanaga and Koji Tochinai, Faculty of Technology, Kanazawa University, Kanazawa, 920 Japan
27.14 New Spectrum Interpolation Method for Improving Quality of Synthesized Speech 1635 Takashi Endo and Shun'ichi Yajima, Central Research Laboratory, Hitachi, Ltd., 1-280, Higashi-koigakubo, Kokubunji, 185 Japan
27.15 Automatic Context-sensitive Measurement of the Acoustic Correlates of Distinctive Features at Landmarks 1639 Mark Johnson, Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
27.16 A Comparison of Different Acoustic and Articulatory Representations for the Determination of Place of Articulation of Plosives 1643 Alain Soquet and Marco Saerens, Institut des Langues Vivantes et de Phonetique, Universite Libre de Bruxelles, 50 Avenue F. D. Roosevelt, B-1050 Bruxelles, Belgium
27.17 An Analysis of Voice Quality Using Sinusoidal Model 1647 Naotoshi Osaka, Information Science Research Laboratory, NTT Basic Research Laboratories, 3-1, Morinosato, Atsugi, 243-01 Japan
27.18 Fast Formant Estimation of Children's Speech 1651 A. A. Wrench, J. M. M. Watson, D. S. Soutar, A. G. Robertson and J. Laver, Centre for Speech Technology Research, University of Edinburgh, South Bridge, Edinburgh, EH1 1HN, U.K.
27.19 Some Fast Higher Order AR Estimation Techniques Applied to Parametric Wiener Filtering 1655 Josep M. Salavedra, Enrique Masgrau, Asuncion Moreno, Joan Estarellas and Javier Hernando, Dept. of Signal Theory and Communications, Universitat Politecnica de Catalunya, Apartat 30002, 08080 Barcelona, Spain
27.20 A Complete Rule-Based Phonemic Synthesis System of Arabic Text N. S. Abd El-Kader, Electronics & Communication Engineering Dept., Faculty of Engineering, Cairo University, Cairo, Egypt
27.21 Applications of a Rule-based Speech Synthesizer Module 1659 Mikio Yamaguchi, Shigeharu Toyoda and Katsuhiro Yada, Multimedia & Communications Technology Group, System & Electronics R & D Center, Sumitomo Electric Industries, Ltd., 1-1-3, Shimaya, Konohana-ku, Osaka, 554 Japan
27.22 Quasi-Articulatory Formant Synthesis 1663 Jon Iles and William Edmondson, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K.
27.23 On the Connection Between Manual Segmentation Conventions and "Errors" Made by Automatic Segmentation 1667 Knut Kvale, Division of Telecommunications, Norwegian Telecom Research, P.O. Box 83, N-2007 Kjeller, Norway
27.24 Natural Utterance Segmentation and Discourse Label Assignment 1671 Mutsuko Tomokiyo, ATR Interpreting Telecommunications Research Laboratories, 2-2, Hikari-dai, Seika-cho, Soraku-gun, Kyoto 619-02 Japan
27.25 Possibility of Speech Synthesis by Common Voice Source 1675 Satoshi Yumoto, Jouji Suzuki and Tetsuya Shimamura, Dept. of Information and Computer Sciences, Saitama University, 255, Shimo-Okubo, Urawa, 338 Japan
27.26 A Scheme for Chinese Speech Synthesis by Rule Based on Pitch-Synchronous Multi-Pulse Excitation LP Method 1679 Changfu Wang, Wenshen Ye, Keikichi Hirose and Hiroya Fujisaki, Department of Electronic Engineering, University of Science and Technology of China, Hefei, Anhui, 230026 P. R. China
27.27 Text Processing within a Speech Synthesis System 1683
27.28 E-Mail to Voice-Mail Conversion Using a Portuguese Text-to-Speech System 1687
27.29 Tempo Estimation by Wave Envelope for Recognition of Paralinguistic Features in Spontaneous Speech 1691
THURSDAY MORNING Sep. 22
Session 28F: Acquisition of Spoken Language
Discrimination of English /r-l/ and /w-y/ by Japanese Infants at 6-12 Months: Language-Specific Developmental Changes in Speech Perception Abilities 1695
1699
1703
Transition from Two-Word to Multiple-Word Stage in the Course of Language Acquisition 1707 Department of Education, Shizuoka, Japan
BSLP Based Language Grammars for Child Speech 1711 P. V. S. Rao and Nandini Bondale, Tata Institute of Fundamental Research, Bombay-400005, India
Using Prediction to Learn Pre-Linguistic Speech Characteristics: A Connectionist Model 1715 Computer Science Dept., Indiana University, Bloomington, IN 47405, U.S.A.
Generating Phoneme Models for Forming Phonological Concepts AIST, Tsukuba, Japan
Infant's Expression and Perception of Emotion Through Vocalizations
THURSDAY MORNING Sep. 22
Session 28S: Education of Spoken Language
Time: 10:45 to 12:15, September 22, 1994
Place: Room A
Chairpersons: Takako Ayusawa, The National Language Research Institute, Japan; Shigeru Satoh, Tohoku University, Japan
28S.1 Naturalness Judgments for Stressed Vowel Duration in Second Language Acquisition 1719 Michiko Mochizuki-Sudo and Shigeru Kiritani, Juntendo University, Inba, Chiba, 270-16 Japan
28S.2 Pre-nuclear Intonation in Questions of Japanese Students in English 1723 Margaret Maeda, Tokyo Woman's Christian University, Mitaka, 181 Japan
28S.3 Intonational Properties of Adverbs in Tokyo Japanese 1727 Junko Tsumaki, Faculty of Letters, University of Osaka, 1-5, Machikaneyama, Toyonaka, Osaka, 560 Japan
28S.4 Production and Perception of English Sentences Spoken by Japanese University Students 1731 Ichiro Miura, Dept. of English, Kyoto University of Education, 1, Fukakusa Fujinomori-cho, Fushimi-ku, Kyoto, 612 Japan
28S.5 Using Morphological Analysis to Improve Japanese Pronunciation 1735 Atsuko Kikuchi and Wayne Lawrence, Dept. of Asian Languages, University of Auckland, Private Bag 92019, Auckland, New Zealand
28S.6 How Do the French Perceive Tonal Accent in Japanese? Experimental Evidence 1738 Yukihiro Nishinuma, CNRS URA 261, Parole et Langage, Universite de Provence, 13621 Aix-en-Provence, France
THURSDAY MORNING Sep. 22
Session 29: Speech Synthesis II
Time: 09:00 to 12:15, September 22, 1994
Place: Room B
Chairpersons: Yasuhiko Arai, Matsushita Communication Industrial Co., Ltd., Japan; Wolfgang Hess, University of Bonn, Germany
29.1 Japanese Text-to-Speech Conversion Software for Personal Computers 1743 Kazuhiro Takahashi, Kazuhiko Iwata, Yukio Mitome and Keiko Nagano, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan
29.2 Automatic Labeling of Speech Synthesis Corpora 1747 Annemie Vorstermans and Jean-Pierre Martens, ELIS, University of Gent, St.-Pietersnieuwstraat 41, B-9000 Gent, Belgium
29.3 On Synthesis Units for Japanese Text-to-Speech Synthesis 1751 Yasushi Ishikawa and Kunio Nakajima, Computer and Information Systems Laboratory, Mitsubishi Electric Corporation, 5-1-1, Ofuna, Kamakura, 247 Japan
29.4 Inducing Concatenate Units from Machine Readable Dictionaries and Corpora for Speech Synthesis 1755 Judith L. Klavans and Evelyne Tzoukermann, AT&T Bell Laboratories, 600, Mountain Ave., Murray Hill, NJ 07974, U.S.A.