Descriptional Complexity

Ronald V. Book, Edwin Pednault, Detlef Wotschke (editors): Descriptional Complexity. Dagstuhl-Seminar-Report; 63, 03.05.-07.05.93 (9318)

Transcript of Descriptional Complexity

Page 1: Descriptional Complexity

Ronald V. Book, Edwin Pednault, Detlef Wotschke (editors):

Descriptional Complexity

Dagstuhl-Seminar-Report; 63, 03.05.-07.05.93 (9318)


Page 2: Descriptional Complexity

ISSN 0940-1121

Copyright © 1993 by IBFI GmbH, Schloss Dagstuhl, D-66687 Wadern, Germany. Tel.: +49-6871-2458, Fax: +49-6871-5942

The Internationale Begegnungs- und Forschungszentrum für Informatik (IBFI) is a non-profit GmbH. It regularly organizes scientific seminars, which are held with personally invited guests upon application by the seminar organizers and review by the scientific directorate.

The scientific directorate is responsible for the programme: Prof. Dr. Thomas Beth, Prof. Dr.-Ing. José Encarnação, Prof. Dr. Hans Hagen, Dr. Michael Laska, Prof. Dr. Thomas Lengauer, Prof. Dr. Wolfgang Thomas, Prof. Dr. Reinhard Wilhelm (scientific director)

Shareholders: Universität des Saarlandes, Universität Kaiserslautern, Universität Karlsruhe, Gesellschaft für Informatik e.V., Bonn

Supported by: the federal states Saarland and Rheinland-Pfalz

Orders to: Geschäftsstelle Schloss Dagstuhl, Universität des Saarlandes, Postfach 1150, D-66041 Saarbrücken, Germany. Tel.: +49-681-302-4396, Fax: +49-681-302-4397

e-mail: [email protected]


Page 3: Descriptional Complexity

Ronald V. Book

Edwin Pednault

Detlef Wotschke

(editors)

Descriptional Complexity

Dagstuhl-Seminar May 3 - 7, 1993


Page 4: Descriptional Complexity

Report

on Dagstuhl Seminar Nr. 9318 on

Descriptional Complexity: A Multidisciplinary Perspective

Descriptional complexity is a highly multidisciplinary field, with contributions being made in theoretical computer science, artificial intelligence, statistics, information theory, physics, perceptual psychology, and neurophysiology. However, research efforts in these areas have historically been isolated from each other by disciplinary boundaries. There has been relatively little interdisciplinary interaction and exchange of results despite the fact that the origins of descriptional complexity date back over 30 years.

The purpose of this seminar was to improve interdisciplinary interaction by encouraging such interaction among a small group of leading scientists from several disciplines. The seminar programme included tutorial presentations on the issues each group is addressing, presentations of recent research results, moderated discussions, and many informal discussions as are customary in the conducive Dagstuhl atmosphere. Through stimulating interactions among the participants we hope to have promoted future interdisciplinary interactions in the field as a whole.

The mid-term and long-term goals of this Dagstuhl seminar can thus be summarized as follows:

GOALS:

(1) To promote research in all aspects of descriptional complexity through conferences, publications, and more informal means of scientific interaction;

(2) To promote interaction and the exchange of information across traditional discipline boundaries;

(3) To provide a point of contact for all researchers in all disciplines interested in descriptional complexity and its applications.

In order to achieve the above goals, this Dagstuhl seminar focussed on the following

SCIENTIFIC SCOPE:

The scope of this seminar encompassed many aspects of descriptional complexity, both theory and application. These aspects included but were not limited to:


Page 5: Descriptional Complexity

(1) Generalized descriptional complexity measures and their properties, including resource-bounded complexity, structural complexity, hierarchical complexity, trade-offs in succinctness and the complexity of sets, languages, grammars, automata, etc.;

(2) Algorithmic and other descriptional theories of randomness;

(3) The use of descriptional randomness and associated descriptional complexity measures in computational complexity, economy of description, cryptography, information theory, probability, and statistics;

(4) Descriptional complexity measures for inductive inference and prediction and the use of these measures in machine learning, computational learning theory, computer vision, pattern recognition, statistical inference, and neural networks.

Frankfurt/Main, August 1993, Detlef Wotschke


Page 6: Descriptional Complexity

Participants:

José L. Balcázar, Universidad Politécnica de Cataluña

Andrew R. Barron, Yale University

Ronald V. Book, University of California at Santa Barbara

Alexander Botta, Adaptive Computing Inc.

Imre Csiszár, Hungarian Academy of Sciences

Carsten Damm, Universität Trier

Jürgen Dassow, Technische Universität Magdeburg

Joachim Dengler, Vision Systems

Josep Díaz, Universidad Politécnica de Cataluña

Peter Gács, Boston University

László Gerencsér, Hungarian Academy of Sciences

Jozef Gruska, Universität Hamburg

Juris Hartmanis, Cornell University

Günter Hotz, Universität Saarbrücken

Tao Jiang, McMaster University

Helmut Jürgensen, University of Western Ontario

Hing Leung, New Mexico State University

Ming Li, University of Waterloo

Jack H. Lutz, Iowa State University

Christoph Meinel, Universität Trier

Pekka Orponen, University of Helsinki

Edwin Pednault, AT&T Bell Laboratories

Richard E. Stearns, SUNY at Albany

John Tromp, CWI - Mathematisch Centrum

Vladimir A. Uspensky, Moscow State Lomonosov University

Michiel van Lambalgen, University of Amsterdam

Paul Vitányi, CWI - Mathematisch Centrum

Andreas Weber, Universität Frankfurt

Detlef Wotschke, Universität Frankfurt


Page 7: Descriptional Complexity

List of Abstracts:

J. L. Balcázar: The Complexity of Algorithmic Problems on Succinct Instances

A. R. Barron: Asymptotic Optimality of Minimum Complexity Estimation

R. V. Book: Complexity Classes and Randomness

A. Botta: Basic Regularities

I. Csiszár: Redundancy rates for universal coding

C. Damm, K. Lenz: Symmetric Functions in AC⁰[2]

J. Dassow: Descriptional Complexity of Grammar Systems

J. Dengler: Stochastic Complexity of Orthonormal Function Systems and Applications in Image Analysis

J. Díaz: The Query Complexity of Learning DFA

P. Gács: The Boltzmann Entropy and Randomness Test

L. Gerencsér: Stochastic Complexity and Optimal Misspecification

G. Hotz: Some Observations related to Kolmogorov Complexity

T. Jiang: k-One Way Heads Cannot Do String Matching - a Lower Bound by Kolmogorov Complexity

H. Jürgensen: Randomness of Number Representations

H. Leung: Separating exponentially ambiguous NFA from polynomially ambiguous NFA

M. Li: Average-case Complexity of Heapsort

J. H. Lutz: The Utility and Depth of Information

Ch. Meinel: Frontiers of Feasible Boolean Manipulation

P. Orponen: Instance complexity

E. Pednault: Application of Descriptional Complexity to Computer Vision and Machine Learning

R. E. Stearns: An Algebraic Model for Combinatorial Problems

J. Tromp: On a conjecture by Orponen + 3

V. A. Uspensky: An Invitation for Retrospectives and Perspectives

M. van Lambalgen: An Axiomatisation of Randomness

P. Vitányi: Introduction to Kolmogorov Complexity and its Applications

P. Vitányi: Thermodynamics of Computation and Information Distance

A. Weber: Economy of Description for Single-valued Transducers

D. Wotschke: Descriptional Complexity for Automata, Languages and Grammars


Page 8: Descriptional Complexity

Abstracts:

The Complexity of Algorithmic Problems on Succinct Instances

José L. Balcázar, Universidad Politécnica de Cataluña

(Joint work with A. Lozano and J. Torán)

Highly regular combinatorial objects can be represented advantageously by some kind of description shorter than their full standard encoding. A natural scheme for such succinct representations is by means of Boolean circuits. The complexity of many algorithmic problems changes drastically when this succinct representation is used. Results quantifying exactly this increase of complexity are presented, and applied to show that previous results in the area can be interpreted as sufficient conditions for completeness in the logarithmic time and counting hierarchies.
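As a toy illustration of the succinctness gap (our own sketch, not from the talk; the predicate `adjacent` below is merely a stand-in for a Boolean circuit on the bit representations of vertex indices):

```python
# A toy "succinct instance": a graph on 2**n vertices is given not by an
# explicit adjacency matrix but by a small Boolean predicate (standing in
# for a Boolean circuit) on the bit representations of vertex indices.
# All names here are illustrative.

def adjacent(u: int, v: int, n: int) -> bool:
    """Succinct description of a bipartite graph on 2**n vertices:
    u and v are adjacent iff their top bits differ."""
    top = 1 << (n - 1)
    return bool(u & top) != bool(v & top)

n = 20                      # the explicit graph would have 2**20 vertices
# Querying the succinct representation is cheap...
assert adjacent(0, 1 << (n - 1), n)
assert not adjacent(0, 1, n)
# ...but the explicit encoding would need (2**n)**2 bits, which is why
# algorithmic problems on graphs can jump in complexity when the input
# is given succinctly.
print(2 ** n)  # number of vertices described by a constant-size predicate
```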

Asymptotic Optimality of Minimum Complexity Estimation

Andrew R. Barron, Yale University

The minimum description length principle estimates a probability density function from a random sample by minimizing the total description length over a class of candidate probability densities (for each candidate density, data are described by first giving a preamble to specify the density, then giving the Shannon code for the data based on the candidate density). As shown in [1], an index of resolvability characterizes the best trade-off between complexity and accuracy of approximations to the true density. This resolvability bounds both the redundancy of the codes and the accuracy of the estimated density. In cases popularized by Jorma Rissanen, the class of candidate densities for description is a sequence of finite-dimensional smooth parametric families with parameter vectors restricted to a grid spaced at width proportional to 1/√n in each coordinate, where n is the sample size. The dominant term in the codelength for the candidate densities is (m/2) log n, where m is the dimension of the parameter vector. This codelength provides the minimum redundancy when the true density is in such a family. In contrast, when the families provide approximation to the true density (as in the case of series expansion of the log-density [2]), the redundancy and the statistical convergence rate of the model selected by the criterion with penalty (m/2) log n is of order (log n/n)^(2s/(2s+1)) (where s is the


Page 9: Descriptional Complexity

number of derivatives of the log-density assumed to be square integrable) compared to the optimal statistical convergence rate of order 1/n^(2s/(2s+1)) achievable by models selected by Akaike's information criterion, which uses a penalty of just m. This discrepancy leads us to ask whether it is necessary to abandon the data compression framework to achieve optimal inference. However, by restricting the candidate parameters θ_k in the series expansion to a grid contained in the ellipse Σ_{k=1}^m θ_k² k^(2s) ≤ r² for some radius r > 0, the codelength can be taken to be of order m (instead of (m/2) log n) for codes that provide the best resolvability. In this case we have a minimum description length criterion that achieves the statistically optimal rate of convergence.

[1] A.R. Barron, T.M. Cover, Minimum complexity density estimation, IEEE Transactions on Information Theory 37, 1991.

[2] A.R. Barron, C.-H. Sheu, Approximations of densities by sequences of exponential families, Ann. Statist. 19, 1991.
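The two-part code length described above can be sketched numerically. The following is a minimal illustration of an MDL criterion with the (m/2) log n parameter cost, using simple m-bin histogram models of our own choosing rather than Barron's actual parametric families:

```python
# A minimal sketch (not Barron's construction) of the two-part MDL
# criterion: each candidate model costs (m/2)*log2(n) bits for its m
# grid-quantized parameters, plus the Shannon code length of the data
# under the fitted model. Candidate densities here are m-bin histograms
# on [0, 1), purely for illustration.
import math, random

def mdl_codelength(data, m):
    n = len(data)
    counts = [0] * m
    for x in data:
        counts[min(int(x * m), m - 1)] += 1
    # preamble: ~ (m/2) log2 n bits for the parameters
    preamble = (m / 2) * math.log2(n)
    # Shannon code for the data under the fitted histogram density;
    # light smoothing keeps empty bins from giving infinite length
    loglik = sum(math.log2((counts[min(int(x * m), m - 1)] + 0.5)
                           / (n + 0.5 * m) * m)
                 for x in data)
    return preamble - loglik

random.seed(0)
sample = [random.betavariate(2, 5) for _ in range(500)]
best_m = min(range(1, 31), key=lambda m: mdl_codelength(sample, m))
print(best_m)  # histogram complexity selected by minimum description length
```

The trade-off is visible directly: larger m shrinks the data part but inflates the (m/2) log n preamble, and the minimizer balances the two, which is the "index of resolvability" idea in miniature.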

Complexity Classes and Randomness

Ronald V. Book, University of California Santa Barbara

Let R be a reducibility. Let ALMOST-R denote the class

ALMOST-R := {A | for almost every B, A ∈ R(B)}.

It is known that

ALMOST-≤^p_m = ALMOST-≤^p_btt = P,
ALMOST-≤^p_T = BPP,
ALMOST-≤^NP_T = AM,
ALMOST-≤^PH_T = PH.

Book, Lutz, and Wagner proved that for suitable R, ALMOST-R = R(RAND) ∩ REC, where RAND denotes the class of languages whose characteristic functions are algorithmically random in the sense of Martin-Löf, and REC denotes the class of recursive languages. Thus, ALMOST-R = (∪_{B ∈ RAND} R(B)) ∩ REC.

The new result presented here is the random oracle characterization: for suitable reducibility R, ALMOST-R = R(B) ∩ REC for every B ∈ RAND.


Page 10: Descriptional Complexity

Using the random oracle characterization the following can be shown: if there exists B ∈ RAND such that the polynomial-time hierarchy relative to B collapses, then the (unrelativized) polynomial-time hierarchy collapses.

Basic Regularities

Alexander Botta, Adaptive Computing Inc.

We present an approach to unsupervised concept formation based on accumulation of partial regularities. Using an algorithmic complexity framework, we define regularity as a model that achieves a compressed coding of data. We discuss induction of models. We present induction of finite automata models for regularities of strings and induction of models based on vector transitions for sets of points. In both cases we work on structures that accept a decomposition into recurrent, recognizable parts. These structures are usually hierarchical and suggest that a vocabulary of basic constituents can be learned before focussing on how they are assembled. We define basic regularities as algorithmically independent building blocks for structures with reference to a particular model class. We show that they are identifiable as local maxima of the compression factor as a function of model complexity. We illustrate stepwise induction that consists of finding a basic regularity model, using it to compress and encode the data, then applying the same procedure on the code.
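A minimal sketch of the "local maxima of the compression factor" idea, using our own toy model class (a single repeated substring as the vocabulary item); nothing below is from Botta's actual system:

```python
# Toy illustration: treat the most frequent substring of each length L as
# a candidate model, measure the compression factor it achieves, and look
# for local maxima of that factor as a function of model complexity L.
from collections import Counter

def compression_factor(data: str, L: int) -> float:
    """Best single-substring 'model' of length L: replace its occurrences
    by one fresh symbol; cost = model (L chars) + residual encoded data."""
    if L < 2 or L > len(data) // 2:
        return 1.0
    best, _ = Counter(data[i:i + L]
                      for i in range(len(data) - L + 1)).most_common(1)[0]
    encoded_len = L + len(data.replace(best, "#"))   # model + residual code
    return len(data) / encoded_len

data = "abcabcabcXabcabcabcY" * 3
Ls = list(range(2, 10))
factors = [compression_factor(data, L) for L in Ls]
# basic regularities show up as interior local maxima of the factor
peaks = [L for i, L in enumerate(Ls)
         if 0 < i < len(Ls) - 1
         and factors[i] > factors[i - 1] and factors[i] >= factors[i + 1]]
print(peaks)  # substring lengths at which a candidate regularity peaks
```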

Redundancy rates for universal coding

Imre Csiszár, Mathematical Institute, Hungarian Academy of Sciences

Given a finite alphabet X, a code is a mapping φ : X* → {0,1}* such that for every fixed n, the codewords φ(x), x ∈ X^n, satisfy the prefix property. The goodness of a code φ for a stochastic model P (a consistent probability assignment to the strings x ∈ X*) can be characterized by the behaviour of the (per letter) mean or max redundancy, viz.

R_n = (1/n) Σ_{x ∈ X^n} P(x) (|φ(x)| + log₂ P(x))    (1)

or

R_n* = (1/n) max_{x ∈ X^n} (|φ(x)| + log₂ P(x)).    (2)
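Definitions (1) and (2) can be checked numerically. The sketch below (our own illustration, not from the talk) computes both per-letter redundancies for the Shannon code on an i.i.d. binary model, where each is bounded by 1/n:

```python
# Numerical check of definitions (1) and (2): for an i.i.d. model P on
# X = {0,1} and the Shannon code with codeword lengths ceil(-log2 P(x)),
# both per-letter redundancies are nonnegative and at most 1/n, since the
# Shannon code wastes less than one bit per string.
import itertools, math

def redundancies(p: float, n: int):
    """Mean and max per-letter redundancy of the Shannon code for the
    i.i.d. (p, 1-p) model on strings of length n."""
    mean_red, max_red = 0.0, float("-inf")
    for x in itertools.product("01", repeat=n):
        ones = x.count("1")
        prob = p ** ones * (1 - p) ** (n - ones)
        length = math.ceil(-math.log2(prob))      # Shannon codeword length
        excess = length + math.log2(prob)         # |phi(x)| + log2 P(x)
        mean_red += prob * excess / n             # definition (1)
        max_red = max(max_red, excess / n)        # definition (2)
    return mean_red, max_red

mean_red, max_red = redundancies(0.3, 8)
assert 0 <= mean_red <= max_red <= 1 / 8
print(round(mean_red, 4), round(max_red, 4))
```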


Page 11: Descriptional Complexity

A code is called strongly universal for a model class, in the mean or max sense, if we have R_n → 0 or R_n* → 0, respectively, where R_n and R_n* denote the suprema of the redundancies (1) and (2) as P ranges over the given model class. For several model classes involving a finite number of parameters, such as the class of the i.i.d. models or the Markov models, the best possible R_n and R_n* are known (up to a constant), and are of order (log n)/n. Very little is known, however, about universal codes for non-parametric model classes. Several preliminary results, obtained jointly with P. Shields (University of Toledo, Ohio), are stated below.


A renewal model corresponding to a distribution {Q(k)} on ℕ is a model P with alphabet X = {0,1} such that if x = u₀u₁···u_m with u_i being a string of j(i) zeros followed by a one, then P(x) = P(u₀) ∏_{i=1}^m Q(j(i)).

Theorem 1. For the class of renewal models, there exist strongly max-universal codes with R_n* = O(√n), and there do not exist strongly mean-universal codes with R_n = O(n^(1/3)).

Now let X be any finite set, and consider the models P such that if x = uvw, |v| ≥ l, then P(x) ≤ k P(u)P(w), where l and k are fixed.

Theorem 2. For the above-mentioned class, there exist strongly max-universal codes with R_n* = O(1/log n).

Symmetric Functions in AC⁰[2]

Carsten Damm, Universität Trier

Katja Lenz, Universität Frankfurt

We consider symmetric Boolean functions computable by AC⁰[2]-circuits: polynomial size, constant depth circuits with AND-, OR-, and PARITY-gates. If the depth is restricted to 2 or if PARITY-gates are disallowed, characterizations of these functions are already known (see [1] and [2]). Also for related models similar characterizations are known (see [3]). All these characterizations exhibit one interesting phenomenon: if the depth of the circuits is restricted to be 2, the symmetric functions computable by this type of circuits are all those that have a constant bound on a certain complexity measure. If arbitrary constant depth is allowed, the functions are characterized by having a polylogarithmic bound on that same measure. The question arises if that magic polylog bound holds also in the case of AC⁰[2]. In the depth-2 case the properties of a certain transformation (the well known Reed-Muller transformation) played a central role. Under the assumption that these properties continue to hold in the general constant depth



Page 12: Descriptional Complexity

case, we can give a complete characterization also for AC⁰[2]: any symmetric Boolean function in AC⁰[2] is "essentially" 2^l-periodic with 2^l = log^O(1) n, thus completing the above drawn picture of constant and polylogarithmic bounds.
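A toy illustration of the 2^l-periodicity notion used above (our own example; the helper and the sample functions are not from the talk). A symmetric Boolean function on n variables is determined by its value vector v(0..n), where v(k) is the value on inputs with k ones:

```python
# A symmetric Boolean function is "2**l-periodic" when its value vector
# v(k) depends only on k mod 2**l. PARITY is the classic example, with
# period 2**1; the second sample function below has period 2**2.

def is_power_of_two_periodic(v, l):
    """Check that the value vector v is 2**l-periodic."""
    period = 2 ** l
    return all(v[k] == v[k % period] for k in range(len(v)))

n = 32
parity = [k % 2 for k in range(n + 1)]                     # period 2**1
mod4_test = [1 if k % 4 < 2 else 0 for k in range(n + 1)]  # period 2**2

assert is_power_of_two_periodic(parity, 1)
assert is_power_of_two_periodic(mod4_test, 2)
assert not is_power_of_two_periodic(mod4_test, 1)
# Up to the "essential" caveat in the text, the characterization says that
# 2**l-periodicity with 2**l polylogarithmic in n is what membership of a
# symmetric function in AC0[2] allows.
```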

[1] C. Damm, The complexity of symmetric functions in parity normal forms, Mathematical Foundations of Computer Science, Vol. 452 of Lecture Notes in Computer Science, Springer 1990, 232-238.

[2] R. Fagin, M.M. Klawe, N.J. Pippenger, L. Stockmeyer, Bounded depth, polynomial size circuits for symmetric functions, Theoretical Computer Science 36, 1985, 239-250.

[3] Z.L. Zhang, D.A. Mix Barrington, J. Tarui, Computing symmetric functions with AND/OR circuits and a single MAJORITY gate, in: Symposium on Theoretical Aspects of Computer Science, 1993.

Descriptional Complexity of Grammar Systems

Jürgen Dassow, Technische Universität Magdeburg

Motivated by the blackboard architectures of Artificial Intelligence, we introduce grammar systems, which consist of some sets of productions and rewrite a common sentential form. The investigations concern the following measures of description:

• number of components,

• number of productions in a component,

• number of active symbols in a component,

• degree of nondeterminism.

We present the hierarchies which are obtained by the restriction of one or two of these parameters.

Moreover, we show that descriptions of context-free languages by grammar systems are more succinct than descriptions by context-free grammars.


Stochastic Complexity of Orthonormal Function Systems and Applications in Image Analysis

Joachim Dengler, Vision Systems

The concept of stochastic complexity eliminates all possible redundancy of a coding system, thus closely approximating the Kolmogorov complexity.

One practical problem of stochastic complexity is the fact that it cannot be calculated analytically in closed form for most model classes. It is shown in this presentation that for the regression problem with orthonormal regressor variables and the traditional conjugate priors, the stochastic complexity can indeed be determined in closed form. The result is a very simple formula, where the stochastic complexity only depends on the number of parameters and the ratio of the explained to the total variance.
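The key convenience of orthonormal regressors is that the least-squares coefficients are plain inner products, so the explained-to-total variance ratio that the closed-form formula depends on falls out immediately. A minimal sketch with a Walsh basis of my own choosing (not the speaker's 2D-oscillator eigenfunctions):

```python
from math import sqrt

n, k = 8, 3
# Three mutually orthogonal ±1 Walsh sequences of length 8, normalized
# so the design matrix A has orthonormal columns (A^T A = I).
walsh = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
]
A = [[w[i] / sqrt(n) for w in walsh] for i in range(n)]  # n x k

beta = [3.0, -1.0, 0.5]
# Signal plus a small alternating perturbation (orthogonal to the basis here).
y = [sum(A[i][j] * beta[j] for j in range(k)) + 0.01 * (-1) ** i for i in range(n)]

coef = [sum(A[i][j] * y[i] for i in range(n)) for j in range(k)]  # inner products
explained = sum(c * c for c in coef)  # ||projection onto span(A)||^2
total = sum(v * v for v in y)
ratio = explained / total             # explained / total variance
```

Because the perturbation is orthogonal to all three basis vectors, the recovered coefficients equal beta exactly and the ratio sits just below 1.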

The system of the eigenfunctions of the 2D quantum mechanical harmonic oscillator is shown to be extremely useful for early vision, not only due to its variance properties with respect to scaling and rotation, but also due to its biological relevance.

As an example, a high-level segmentation problem of fingerprint images is presented. The intuitive understanding of good quality is translated into the local concept of a 1D wave with local support. By comparing the stochastic complexity of an oriented model with that of a general model, the quality of a fingerprint is quantified and the "good" areas are labelled reliably.

The Query Complexity of Learning DFA

Josep Díaz, Universitat Politècnica de Catalunya

(Joint work with J. L. Balcázar, R. Gavaldà, O. Watanabe)

It is known that the class of deterministic finite automata is polynomially learnable by using membership and equivalence queries. We investigate the query complexity of learning deterministic finite automata (the number of membership and equivalence queries made during the process of learning). We extend a known lower bound on membership queries to the case of randomized learning algorithms, and prove lower bounds on the number of alternations between membership and equivalence queries. We also show that a trade-off exists, allowing us to reduce the number of equivalence queries at the price of increasing the number of membership queries.


The Boltzmann Entropy and Randomness Test

Péter Gács, Boston University

Description complexity is known to be related to Shannon entropy. The relation to thermodynamical entropy is less understood. Here we introduce a quantity called algorithmic entropy, which is defined via payoff functions and is close to the negative of Martin-Löf's test. The approximations to this quantity have the form K(w^n) + log_2 μ(Γ_{w^n}), where K is the prefix description complexity, w is an element of the "phase space", w^n are the first n bits of a binary description of w, μ is the phase space volume measure, and Γ_{w^n} is the set of those elements with first bits w^n. This quantity is interpreted as the desirable correction of Boltzmann's entropy, related to a similar expression introduced by Zurek. Its properties will be discussed, and several relations among the above-mentioned quantities will be established.

Stochastic Complexity and Optimal Misspecification

László Gerencsér, Computer and Automation Institute, Hungarian Academy of Sciences

We present a problem in signal processing which can be solved using ideas from the theory of stochastic complexity. The problem is so-called change-point detection, where a change in the dynamics of a stochastic process has to be detected. Stochastic complexity in its predictive form, due to Rissanen, is generalized to allow for the description of time-varying processes. The idea is to forget past information at an exponential rate. In this way the estimation algorithm also becomes more robust. Moreover, misspecification, i.e. the use of simple models which may not contain the true model, becomes desirable, and thus the problem of increasing model complexity is avoided. In the context of change-point detection, the encoding procedure is tailored to the assumed moment of change. Thus the number of model classes is equal to the number of observations, but the associated codelengths can be reduced to the partial sums of a signal sequence. An extensive computer simulation has shown the viability of the method. In addition, a number of beautiful mathematical problems can be formulated in connection with these methods, some of which have been answered.
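The exponential-forgetting idea can be made concrete with a toy of my own construction (not the speaker's actual estimator or coding scheme): track a signal's mean with a forgetting factor and watch the one-step prediction error spike at the change point.

```python
# Toy change-point illustration: past data's weight decays like lam^age,
# so the estimate adapts quickly after the dynamics change.
lam = 0.9
data = [0.0] * 50 + [5.0] * 50   # mean jumps from 0 to 5 at t = 50

est, errs = 0.0, []
for x in data:
    errs.append(abs(x - est))        # predictive error before seeing x
    est = lam * est + (1 - lam) * x  # exponentially forgetting mean estimate

change = max(range(len(errs)), key=lambda t: errs[t])  # largest surprise
```

The largest prediction error (in codelength terms, the most expensive symbol to encode) lands exactly at the assumed moment of change.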


Some Observations related to Kolmogorov Complexity

Günter Hotz, Universität Saarbrücken

The lecture discusses some critical remarks on the state of the theory.

• The Kolmogorov complexity of a finite theory is finite.

This theorem is not at all consistent with our intuition: either K.C. cannot be a measure of information, for it is absurd that a theory has only a finite information content, or the concept of a theory in logic is not what we expect a theory to be.

• Resource-bounded K.C. could be an approximation to information understanding.

• Learning from ergodic sources seems to be hard; teaching seems to be extremely non-ergodic.

• Describing Galilei's experiment by minimal programs will never lead to his laws. We miss in such constructions the very essential concept of "ideal elements"; in Galilei's case, the "vacuum".

• Nature description is essentially based on the language of differential calculus.

• Prediction of events in Natural Science is a two-stage process:

1. Understanding by use of ideal elements

2. Prediction by approximation of ideal behaviour.

• Real numbers and the Ackermann function don't differ in reality. One should extend the theory of computation so that operations such as x := lim_{n→∞} a_n and operations on power series are included.

Historical remark: The concept of collectives from von Mises appears 17 years before von Mises' publication, in a paper of Helm ("Bd. 1 Naturphilosophie 1902, 364-381").

Resource-bounded K.C. was first studied in Schnorr: "Zufälligkeit und Wahrscheinlichkeit", Lecture Notes in Mathematics, Vol. 218 (1971).


k-One-Way Heads Cannot Do String Matching - a Lower Bound by Kolmogorov Complexity

Tao Jiang, McMaster University, Hamilton, Ontario, Canada

(Joint work with Ming Li)

We settle a conjecture raised by Galil and Seiferas twelve years ago: k-head one-way DFA cannot do string matching, for any k. The proof uses Kolmogorov complexity and the concept of incompressibility.

Randomness of Number Representations

Helmut Jürgensen, University of Western Ontario, London, Ontario

We show that randomness of the natural positional representation of a real number is invariant under base transformations.

Average-case Complexity of Heapsort

Ming Li, University of Waterloo

Heapsort is a commonly used sorting algorithm that is comparison-based, needs no extra space (it sorts in situ), and has both worst-case and average-case running time O(n log n). The original algorithm is due to J. Williams (1965); there is an improved version by R. Floyd (1968?). The problem of determining the lower bound on the average-case running time is mentioned by D.E. Knuth (1973, in Sorting and Searching) and was left open until the solution by R. Schaffer (Ph.D. Thesis 1992; see also a paper of Schaffer and Sedgewick). Ian Munro subsequently gave a short and simple proof using Kolmogorov complexity in the incompressibility method. The exact lower bound on the number of comparisons is n log n for Floyd's version and 2n log n for Williams's.
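For reference, the algorithm under analysis can be sketched as follows. This is the textbook max-heap version with a comparison counter added, a hedged illustration rather than the exact variant analyzed in the works cited above:

```python
def heapsort(a):
    """In-place heapsort; returns the number of key comparisons made."""
    comps = 0

    def sift_down(start, end):
        nonlocal comps
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end:          # pick the larger child
                comps += 1
                if a[child] < a[child + 1]:
                    child += 1
            comps += 1
            if a[root] < a[child]:        # sift the root down one level
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # build max-heap bottom-up, O(n)
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):           # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return comps

import random
random.seed(1)
xs = list(range(100))
random.shuffle(xs)
c = heapsort(xs)
```

The comparison counter `c` is the quantity whose average-case behaviour the lower-bound results concern.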


Separating exponentially ambiguous NFA from polynomially ambiguous NFA

Hing Leung, New Mexico State University

We prove that exponentially ambiguous NFA are "separated" from polynomially ambiguous NFA. Specifically, we show that there exists a family of NFAs {A_n | n ≥ 1} over a two-letter alphabet such that, for any positive integer n, A_n is exponentially ambiguous and has n states, whereas the smallest equivalent DFA has 2^n states and any smallest equivalent polynomially ambiguous NFA has 2^n − 1 states.
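The degree of ambiguity of an NFA on a word is its number of accepting computations, which can be counted by a simple dynamic program. The sketch below uses a two-state toy automaton of my own (not the family A_n of the talk) whose ambiguity on a^n is 2^n:

```python
def count_accepting_paths(delta, starts, finals, w):
    """Count accepting computations of an NFA on word w (its ambiguity on w).

    delta maps (state, symbol) -> list of successor states.
    """
    counts = {q: starts.count(q) for q in set(starts)}
    for ch in w:
        nxt = {}
        for q, c in counts.items():
            for r in delta.get((q, ch), []):
                nxt[r] = nxt.get(r, 0) + c   # each path to q extends to r
        counts = nxt
    return sum(c for q, c in counts.items() if q in finals)

# Toy NFA: from either state, reading 'a' may go to p or to q, so the
# number of computations doubles with every letter.
delta = {("p", "a"): ["p", "q"], ("q", "a"): ["p", "q"]}
amb = count_accepting_paths(delta, ["p"], {"p", "q"}, "a" * 10)
```

Here `amb` is 2^10 = 1024, an exponentially ambiguous behaviour despite only two states.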

The Utility and Depth of Information

Jack H. Lutz, Dept. of Computer Science, Iowa State University

(Joint work with David W. Juedes and James I. Lathrop)

A DNA sequence and the UNIX operating system contain far less information than random coin-toss sequences of comparable length. Nevertheless, DNA and UNIX appear to be more useful than random coin-toss sequences. Clearly, a data object derives its usefulness not only from the amount of information it contains, but also from the manner in which this information is organized. Recently, Bennett has defined computational depth (also called logical depth) to quantify the amount of "computational work" that has been "added" to the information in an object and "stored" in its organization.

This talk gives an overview of computational depth and its fundamental properties. Two new results on the computational depth of infinite binary sequences are also presented. Roughly speaking, the first says that a sequence must be deep in order to be useful. The second says that, in the sense of Baire category, almost every infinite binary sequence is somewhat deep.


Frontiers of Feasible Boolean Manipulation

Christoph Meinel, Universität Trier

(Joint work with J. Gergov)

A central issue in the solution of many computer-aided design problems is to find concise representations for circuit designs and their functional specifications. Recently, a restricted type of branching programs (so-called OBDDs) proved to be extremely useful for representing Boolean functions in various CAD applications. Unfortunately, many circuits of practical interest provably require OBDD-representations of exponential size. We systematically study the question to what extent more concise branching-program descriptions can be successfully used in symbolic Boolean manipulation, too. We prove in very general settings

• the frontier of efficient Boolean manipulation on the basis of BP-representations are FBDDs (i.e. read-once-only branching programs),

• the frontier of efficient probabilistic Boolean manipulation with BP-based data structures are MOD-2-FBDDs.
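The conciseness of BDD-style representations comes from hash-consing: equal subfunctions are represented by a single shared node. A minimal OBDD sketch in that spirit (my own illustration of the generic data structure, not the FBDD or MOD-2-FBDD constructions of the talk):

```python
unique = {}               # (var, low, high) -> node id, the hash-consing table
nodes = {0: None, 1: None}  # 0 and 1 are the terminal nodes

def mk(var, low, high):
    """Return the node testing `var` with children low/high, sharing duplicates."""
    if low == high:          # redundant test: skip the node entirely
        return low
    key = (var, low, high)
    if key not in unique:
        nid = len(nodes)
        nodes[nid] = key
        unique[key] = nid
    return unique[key]

def build(f, n):
    """OBDD of Boolean f over n variables, tested in the fixed order 0..n-1."""
    def go(assign):
        i = len(assign)
        if i == n:
            return int(f(assign))
        return mk(i, go(assign + (0,)), go(assign + (1,)))
    return go(())

# PARITY of 8 variables: 2^8 assignments, but sharing collapses the
# diagram to two nodes per level (even/odd offset), 15 interior nodes total.
root = build(lambda a: sum(a) % 2, 8)
size = len(nodes) - 2   # interior node count
```

This collapse of exponentially many paths onto linearly many shared nodes is exactly what makes OBDDs attractive for structured functions.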

Instance complexity

Pekka Orponen, University of Helsinki

(Joint work with Ker-I Ko, Uwe Schöning, Osamu Watanabe)

We introduce a measure for the computational complexity of single instances of a decision problem and survey some of its properties. The instance complexity of a string x with respect to a set A and a time bound t, ic^t(x : A), is defined as the size of the smallest special-case program for A that runs in time t, decides x correctly, and makes no mistakes on other strings ("don't know"-answers are permitted). It can be shown that a set A is in the class P if and only if there exist a polynomial t and a constant c such that ic^t(x : A) ≤ c for all x; on the other hand, if A is NP-hard and P ≠ NP, then for all polynomials t and constants c, ic^t(x : A) > c log_2 |x| for infinitely many x.

Observing that K^t(x), the time-t-bounded Kolmogorov complexity of x, is roughly an upper bound on ic^t(x : A), we propose the following "instance complexity conjecture": for all appropriate time bounds t, if a set A is not in the class DTIME(t), then there exist a constant c and infinitely many x such that ic^t(x : A) ≥


K^t(x) − c. Intuitively, the conjecture claims that if a problem is not decidable in time t, there will be infinitely many problem instances for which the essentially best time-t solution method is simple table look-up.

The following partial results on this conjecture have been obtained:

• if t(n) ≥ n is a time-constructible function and A is a recursive set not in DTIME(t(n)), then there exist a constant c and infinitely many x such that ic^t(x : A) ≥ K(x) − c, where K(x) is the time-unbounded version of Kolmogorov complexity.

• if A is NP-hard and EXPTIME ≠ NEXPTIME, then for any polynomial t there exist a polynomial t' and a constant c such that for infinitely many x, ic^t(x : A) ≥ K^{t'}(x) − c.

If the set A is EXPTIME-hard, the latter result holds even without the assumption that EXPTIME ≠ NEXPTIME.

Application of Descriptional Complexity to Computer Vision and Machine Learning

Edwin Pednault, AT&T Bell Laboratories, Holmdel, New Jersey, USA

This talk presents an overview of descriptional complexity as applied to computer vision and machine learning. The relevance of descriptional complexity to these areas is illustrated by means of several psychophysical phenomena in human vision. The phenomena are readily explained by assuming that the visual system operates by finding a minimal encoding of retinal images in some sort of visual language. Using the examples as motivation, the information-theoretic concepts of Shannon coding and Shannon entropy are introduced, together with the descriptional-complexity concepts of Kolmogorov complexity, the Minimum Description Length (MDL) principle, stochastic complexity, and predictive MDL. Examples drawn from the speaker's work on piecewise-polynomial curve fitting and the work of several other researchers are used to illustrate the application of these descriptional complexity measures to problems in computer vision and machine learning.
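The flavour of MDL-based curve fitting can be conveyed with a generic two-part codelength: bits for the model parameters plus a Gaussian-style codelength for the residuals. This is my own toy (quantizing each coefficient to an assumed 16 bits), not the speaker's actual coding scheme:

```python
from math import log2

def polyfit(xs, ys, d):
    """Least-squares degree-d polynomial via normal equations + elimination."""
    m = d + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                 # naive partial-pivot elimination, fine for tiny m
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):       # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def mdl(xs, ys, d, bits_per_param=16):
    """Two-part codelength: parameter bits + residual codelength (in bits)."""
    coef = polyfit(xs, ys, d)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n = len(xs)
    return (d + 1) * bits_per_param + 0.5 * n * log2(rss / n + 1e-12)

# Quadratic data with small alternating noise: MDL should pick degree 2.
xs = [i / 10 for i in range(30)]
ys = [2 * x * x - x + 0.5 + 0.05 * (-1) ** i for i, x in enumerate(xs)]
best = min(range(5), key=lambda d: mdl(xs, ys, d))
```

Lower degrees pay heavily in residual codelength, higher degrees pay 16 extra bits per parameter without reducing the residuals, so the minimum lands on the true degree.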


An Algebraic Model for Combinatorial Problems

Richard E. Stearns, Dept. of Computer Science, University at Albany

(Joint work with H. B. Hunt III)

A new algebraic model, called the "generalized satisfiability problem" or "GSP" model, is introduced for representing and solving combinatorial problems. The GSP model is an alternative to the common method in the literature of representing such problems as language-recognition problems. In the GSP model, a problem instance is represented by a set of variables together with a set of terms, and the computational objective is to find a certain sum of products of terms over a commutative semiring. Each Boolean satisfiability problem, each nonserial optimization problem, many {0, 1} linear programming problems, and many graph problems are directly representable as GSPs.
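A hedged sketch of the sum-of-products idea (my own toy encoding, not the paper's formal model): each term scores one assignment locally, and the answer is a semiring "sum" over assignments of the semiring "product" over terms, so swapping the semiring changes which question is answered.

```python
from itertools import product

def gsp_value(nvars, terms, plus, times, one, zero):
    """Brute-force sum over all 0/1 assignments of the product of all terms."""
    total = zero
    for assign in product((0, 1), repeat=nvars):
        v = one
        for t in terms:
            v = times(v, t(assign))
        total = plus(total, v)
    return total

# 2-clause instance: (x0 or x1) and (not x0 or x2)
terms = [lambda a: int(a[0] or a[1]),
         lambda a: int((not a[0]) or a[2])]

# Boolean semiring (or, and): is the instance satisfiable?
sat = gsp_value(3, terms, lambda x, y: x | y, lambda x, y: x & y, 1, 0)
# Counting semiring (+, *): how many satisfying assignments?
count = gsp_value(3, terms, lambda x, y: x + y, lambda x, y: x * y, 1, 0)
```

With the Boolean semiring this decides SAT; with (+, ×) it counts models; a (min, +) semiring would instead solve an optimization variant, which is the point of parameterizing by the semiring.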

Important properties of the GSP model include the following:

1. By varying the semiring, a number of complete problems in the complexity classes NP, co-NP, DP, OPT-P, MAX SNP, MAX Π_1, PSPACE, and #PSPACE are directly representable as GSPs.

2. In the GSP model, one can naturally discuss the structure of individual problem instances. The structure of a GSP is displayed in a "structure tree". The smaller the "weighted depth" or "channelwidth" of the structure for a GSP instance, the faster the instance can be solved by any one of several generic algorithms.

3. The GSP model extends easily so as to apply to hierarchically specified problems, and enables solutions to instances of such problems to be found directly from the specification rather than from the (often exponentially) larger specified object.

On a conjecture by Orponen + 3

John Tromp, CWI Amsterdam

This short talk presented two results relating the instance complexity of elements of non-recursive sets to their (simple) Kolmogorov complexity. The first, for any non-recursive set A, says that for infinitely many x, ic(x : A) is at least log K(x) − O(1). The second, for any enumerable non-recursive set A, says that for infinitely many x, ic(x : A) is at least K(x) − ub(x) − O(1), where ub(x) is the least number of bits needed to describe an upper bound on x.

20

An Algebraic Model for Combinatorial Problems

Richard E. Stearns, Dept. of Computer Science, University at Albany

(Joint work with H. B. Hunt III)

A new algebraic model, called the "generalized satifiability problem" or "GSP " model, is introduced for representing and solving combinatorial problems. The GSP model is an alternative to the common method in the literature of rep­resenting such problems as language recognition problems. In the GSP model, a problem instance is represented by a set of variables together with a set of terms, and the computational objective is to find a certain sum of products of terms over a commutative semiring. Each Boolean satisfiability problem, each nonserial optimization problem, many {0, 1} linear programming problems, and many graph problems are directly representable as GSPs.

Important properties of the GSP model include the following:

1. By varying the semiring, a number of complete problems in the complexity classes NP, Co-NP, DP, OPT-P, MAX SNP, MAX Π1, PSPACE, and #PSPACE are directly representable as GSPs.

2. In the GSP model, one can naturally discuss the structure of individual problem instances. The structure of a GSP is displayed in a "structure tree". The smaller the "weighted depth" or "channelwidth" of the structure for a GSP instance, the faster the instance can be solved by any one of several generic algorithms.

3. The GSP model extends easily so as to apply to hierarchically-specified problems and enables solutions to instances of such problems to be found directly from the specification rather than from the (often exponentially) larger specified object.

On a conjecture by Orponen + 3

John Tromp, CWI Amsterdam

This short talk presented two results relating the instance complexity of elements of non-recursive sets to their (simple) Kolmogorov complexity. The first, for any non-recursive set A, says that for infinitely many x, ic(x : A) is at least log K(x) - O(1). The second, for any enumerable non-recursive set A, says that for infinitely many x, ic(x : A) is at least K(x) - ub(x) - O(1), where ub(x) is the least number of bits needed to describe an upper bound on x.
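In symbols, with ic(x : A) the instance complexity of x with respect to A, the two lower bounds read:

```latex
\exists^{\infty} x:\quad \mathrm{ic}(x : A) \;\ge\; \log K(x) - O(1)
\qquad (A \text{ non-recursive}),
```

```latex
\exists^{\infty} x:\quad \mathrm{ic}(x : A) \;\ge\; K(x) - \mathrm{ub}(x) - O(1)
\qquad (A \text{ recursively enumerable and non-recursive}).
```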

20

Page 21: Descriptional Complexity


An Invitation for Retrospectives and Perspectives

Vladimir A. Uspensky, Moscow State Lomonosov University

The underlying ideas of Kolmogorov complexity are simple and natural. There are objects and there are descriptions of those objects. Kolmogorov complexity of an object is the size of the best description. Of course, some description mode, preferably the best one, should be chosen in advance. (A description mode is a subset E of the cartesian product X × Y, where X is the set of all descriptions and Y is the set of all objects; (x, y) ∈ E means that x is a description of y with respect to E.)
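A miniature worked example of this definition (entirely our own, with strings serving as both descriptions and objects, and size = string length):

```python
# A description mode: a finite relation E subset of X x Y,
# where X = descriptions and Y = objects (toy example; names are ours).
E = {
    ("0", "aaaa"), ("1", "abab"),        # short, coded descriptions
    ("aaaa", "aaaa"), ("abab", "abab"),  # identity descriptions
}

def complexity(y, mode):
    """Size of the best (shortest) description of y under this mode,
    i.e. min{|x| : (x, y) in mode}; None if y has no description."""
    sizes = [len(x) for (x, obj) in mode if obj == y]
    return min(sizes) if sizes else None

# "aaaa" has descriptions "0" (size 1) and "aaaa" (size 4),
# so its complexity under E is 1.
print(complexity("aaaa", E))  # → 1
```

Choosing a different mode E changes every complexity value, which is why the entities below must be fixed in advance.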

Therefore, the following entities are to be fixed:

1. A class of admissible description modes.

2. A partial ordering "to be better" on that class.

3. A partial ordering "to be better" on X.

4. A size function, which is a total function from X to ℕ.

Historically, any ordering on X is determined by the size function: for descriptions, to be better means to have smaller size. So a best description is the minimal-size description, and Kolmogorov complexity of an object is the minimal size of its description. Also historically, for modes, to be better means to give smaller complexities: E is better than F if for some C not depending on y and for all y

Compl_E(y) ≤ Compl_F(y) + C.

(Here is another possible ordering: E is better than F if there is a polynomial P such that tE(y) ≤ P(tF(y)), where tG(y) is the running time of producing y from its best description.)

The scheme just exposed gives five well-known versions of Kolmogorov complexity:

• simple entropy KS (A.N. Kolmogorov, 1965);

• decision entropy KD (D.W. Loveland, 1969);

• a priori entropy KA (L.A. Levin, 1970);

• monotonic entropy KM (L.A. Levin, 1973);

• prefix entropy KP (L.A. Levin, 1974).

21

Page 22: Descriptional Complexity


Remark 1. A best mode is usually called "an optimal mode". And according to the Moscow tradition of the Kolmogorov scientific school, the word "entropy" is used instead of "Kolmogorov complexity". The term "complexity" is still used when an arbitrary, not necessarily optimal, description mode is considered. So the entropy is the complexity with respect to an optimal description mode.

Remark 2. The first scientific works by Kolmogorov were not in mathematics but in history. Those student papers are to be published in Moscow as a separate book (probably, this year). Some ideas of Kolmogorov complexity can be traced down to those papers. (Namely, in his study of fiscal policies in the medieval state of Novgorod, Kolmogorov heavily leaned on the idea that the regulations should be simple, not complex.)

An Axiomatisation of Randomness

Michiel van Lambalgen, University of Amsterdam

The talk described an attempt to axiomatize a notion of randomness by abstracting an algebraic structure that is implicit in many existing recursion-theoretic definitions of randomness, such as those of Schnorr, Martin-Löf, and Gaifman and Snir.

The motivation for this attempt is partly philosophical: a desire to describe directly in a simple language intuitions about randomness, unencumbered by formal details of computability theories. Also, there are other (informal) applications of randomness in mathematics, notably set theory and model theory, which cannot be formalized by the usual recursion-theoretic definitions: forcing is a case in point.

The fundamental relation used in the axiomatisation is: a finite sequence of oracles y1, ..., yn does not contain information about the random sequence x. The axioms proposed for this relation are a combination of axioms for matroids (including the Steinitz-MacLane exchange principle), and Kolmogorov's 0-1 law.

The axioms can be shown to hold for the aforementioned definitions of randomness by considering suitable notions of oracle computation. To show the non-triviality of the axioms, we add them to Zermelo-Fraenkel set theory to obtain quick disproofs of the axiom of choice and the continuum hypothesis (in its aleph-free version). On the other hand, the axioms are consistent with weak choice principles.

[1] Michiel van Lambalgen, Independence, Randomness and the Axiom of Choice, Journal of Symbolic Logic, Vol. 57, No. 4, 1992

22

Page 23: Descriptional Complexity


Introduction to Kolmogorov Complexity and its Applications

Paul Vitányi, CWI / University of Amsterdam

(Joint work with Ming Li and based on the book of the same title, Springer Verlag 1993)

We survey the basic notions of Kolmogorov complexity and its many applications. Among others:

• inductive reasoning and machine learning,

• the incompressibility method in combinatorics,

• formal languages,

• average case (time) complexity,

• machine models,

• algorithmic entropy and information distance,

and so on.

Thermodynamics of Computation and Information Distance

Paul Vitányi, CWI / University of Amsterdam

(Joint work with Charles Bennett, Peter Gács, Ming Li, W. Zurek in STOC'93)

It is widely accepted that the only bit operations that necessarily dissipate energy in the form of heat (kT log 2 Joule, where k is the Boltzmann constant and T the temperature in degrees Kelvin) are those operations which are irreversible. This has consequences for the continuing miniaturization of electronic chips and the accompanying advance in computing power. It appears that new principles of computing which dissipate no (or arbitrarily low) energy need to be found within 20 years. One road, perhaps the most promising one, is to turn to reversible computing. We investigate several notions of absolute information distance:
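For scale, a quick computation of that dissipation bound at room temperature (our own arithmetic, reading log as the natural logarithm, as in Landauer's bound):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K

# Energy dissipated per irreversible bit operation: kT log 2.
energy = k * T * math.log(2)
print(f"{energy:.3g} J")  # about 2.87e-21 J
```

So each irreversible bit operation costs on the order of 10^-21 J of heat; a chip performing 10^18 irreversible operations per second would already dissipate milliwatts from this term alone.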

• E1(x, y) = length of shortest program to translate x to y and y to x on a normal (irreversible) Turing machine

23

Page 24: Descriptional Complexity


• E2(x, y) = the minimum "pattern" distance satisfying some normalization criteria and effectiveness.

• E3(x, y) = length of shortest program to translate from x to y and y to x by a reversible Turing machine. (In E1 and E3 the program works in a 'catalytic' manner and is retained before, during and after the computation.) It turns out that E1 = E2 = E3 up to a log additive factor.

• E4(x, y) = amount of irreversibility in translating x to y and y to x by an otherwise reversible computation (minimal number of bits consumed at the beginning and irreversibly deleted at the end).

Since E1(x, y) = max{K(x|y), K(y|x)} and E4(x, y) = K(x|y) + K(y|x) (up to a log additive term), we find E1(x, y) ≤ E4(x, y) ≤ 2E1(x, y) up to a log additive term.
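The final chain of inequalities is elementary arithmetic on max and sum; a quick sanity check with made-up values standing in for the conditional complexities K(x|y) and K(y|x):

```python
# For any nonnegative a = K(x|y) and b = K(y|x):
#   max(a, b) <= a + b <= 2 * max(a, b),
# which gives E1 <= E4 <= 2*E1 (up to the log additive term).

def e1(a, b):
    # E1 ~ max of the two conditional complexities
    return max(a, b)

def e4(a, b):
    # E4 ~ sum of the two conditional complexities
    return a + b

for a, b in [(5, 12), (7, 7), (0, 3)]:
    assert e1(a, b) <= e4(a, b) <= 2 * e1(a, b)
```

The right inequality is tight when the two conditional complexities are equal, the left one when one of them vanishes.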

Economy of Description for Single-valued Transducers

Andreas Weber, Universität Frankfurt am Main

(Joint work with Reinhard Klemm, Pennsylvania State University)

In this talk questions of economy of description are investigated in connection with single-valued finite transducers. The following results are shown.

1. Any single-valued real-time transducer M with n states can be effectively transformed into an equivalent unambiguous real-time transducer having at most 2^n states.

2. Let M be a single-valued real-time transducer with n states and output alphabet A which is equivalent to some deterministic real-time or subsequential transducer M'. Then, M can be effectively transformed into such an M' having at most 2^n · max{2, #A}^(2n^3 l) + 1 states, where l is a local structural parameter of M.

3. For any single-valued transducer it is decidable in deterministic polynomial time whether or not it is equivalent to some deterministic real-time transducer (to some subsequential transducer, respectively).

The results 1. and 2. can be extended to the case that M is not necessarily real-time. The upper bound in 1. is at most one state off the optimal upper bound. Any possible improvement of the upper bound in 2. is greater than or equal to 2^n.

24

Page 25: Descriptional Complexity


Descriptional Complexity for Automata, Languages and Grammars

Detlef Wotschke, Universität Frankfurt

This talk is to be seen primarily as a survey and an introduction to some of the talks to come.

In Automata, Languages and Grammars we are dealing with rather restricted objects which have a definite place in computer science as description or design models for applications.

Within these restricted and application-oriented models, the 'absolute' minimal-length descriptions are very often not desirable, for example in Structured Programming, Finite Automata, Context-Free Grammars, Pushdown Automata. Within these models one looks at organizational measures which seem to capture the essential part of the information.

However, this organizational overhead should be kept to a minimum. So, we try to measure and minimize this organizational overhead by measuring organizational components.

We measure parameters such as

• Description length within the limited model,

• Number of states,

• Number of productions,

• Number of nonterminals,

• Number of states times number of stack-symbols.

For a given language (set) L, how does L's descriptional complexity change with regard to the underlying descriptive model? Or more specifically:

How short, how succinct, how simple or how economical can the descriptions of automata, grammars and programs be if we allow the usage of concepts such as

• arbitrary or limited nondeterminism,

• arbitrary or limited ambiguity,

• arbitrary or limited delay in error-recovery,

• arbitrary or limited lookahead,

• different notions of acceptance?
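Nondeterminism is the classic case: for the family L_k of binary strings whose k-th symbol from the end is 1, a nondeterministic finite automaton needs only k+1 states while every deterministic one needs 2^k. A small sketch (our own, using the textbook subset construction) confirms the gap for k = 3:

```python
def nfa_for_Lk(k):
    """NFA with k+1 states accepting binary strings whose k-th symbol
    from the end is 1: state 0 loops and guesses the marked symbol,
    states 1..k count off the remaining positions."""
    delta = {(0, "0"): {0}, (0, "1"): {0, 1}}
    for i in range(1, k):
        delta[(i, "0")] = {i + 1}
        delta[(i, "1")] = {i + 1}
    return delta, 0, {k}

def determinize_count(delta, start):
    """Subset construction over the alphabet {0,1}; returns the number
    of reachable subset-states of the equivalent DFA."""
    init = frozenset({start})
    seen, stack = {init}, [init]
    while stack:
        S = stack.pop()
        for sym in "01":
            T = frozenset(r for q in S for r in delta.get((q, sym), ()))
            if T not in seen:
                seen.add(T)
                stack.append(T)
    return len(seen)

delta, start, accepting = nfa_for_Lk(3)
print(determinize_count(delta, start))  # → 8, i.e. 2^3 reachable states
```

Every reachable subset records exactly which of the last k symbols were 1, so all 2^k subsets occur and no deterministic automaton can do better; this is precisely the kind of trade-off studied in the Goldstine-Kintala-Wotschke references below.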

25

Page 26: Descriptional Complexity


References:

• José L. Balcázar:

J.L. Balcázar, A. Lozano, J. Torán: The Complexity of Algorithmic Problems on Succinct Instances, in: R. Baeza-Yates, U. Manber (eds.), Computer Science, Plenum Press 1992, 351-377

• Ronald V. Book:

R.V. Book, J. Lutz, K. Wagner in STACS-92, Springer, LNCS 577, 1992 (to appear in Math. Systems Theory, late 1993 or early 1994)

• Alexander Botta:

A. Botta, Regularity and Structure, Proc. of Nat. Conf. on Artificial Intelligence, 1991

• Imre Csiszár:

I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, 1981

I. Csiszár, Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. The Annals of Statistics, Vol. 19, 2032-2066, 1991.

• Carsten Damm:

C. Damm, The Complexity of symmetric functions in Parity Normal Form, in: Mathematical Foundations of Computer Science, LNCS 452, 232-238, 1990

• Joachim Dengler:

J. Dengler, Estimation of Discontinuous Displacement Vector Fields with the Minimum Description Length Criterion, Proc. CVPR'91, 276-282, also as MIT-AI-Memo, Oct. 1990

• Josep Díaz:

J. Díaz, Query Complexity of learning DFA, Technical Report LSI 93/12. Accepted to Fifth Generation Computing

• Péter Gács:

Peter Gács, The Boltzmann entropy and randomness tests. Technical report, CWI (Center for Mathematics and Computer Science), Kruislaan 413, 1098 SJ Amsterdam, The Netherlands, 1993.

Peter Gács, Self-correcting two-dimensional arrays. In Silvio Micali, editor, Randomness in Computation, pages 223-326, JAI Press, Greenwich, Conn., 1989.

26

Page 27: Descriptional Complexity


• László Gerencsér:

L. Gerencsér, J. Rissanen, Asymptotics of Predictive Stochastic Complexity, in: D. Brillinger, P. Caines, J. Geweke, M. Rosenblatt, M.S. Taqqu, New Directions of Time Series Analysis, Part 2, Springer IMA Series 1993, 93-112

L. Gerencsér, AR(∞) Estimation and Nonparametric Stochastic Complexity, IEEE Trans. Inf. Theory 38, 1768-1778

• Jozef Gruska:

J. Gruska, M. Napoli, D. Parente, On the minimization of Systolic Binary Tree Automata, Tech. Rep., University of Salerno, 1993

• Tao Jiang:

T. Jiang, M. Li, k-One way Heads cannot do String Matching, ACM STOC'93, San Diego, CA

• Helmut Jürgensen:

C. Calude, H. Jürgensen, Randomness as an Invariant for Number Representation, Report 339, Department of Computer Science, University of Western Ontario, 1993

• Ming Li:

T. Jiang and M. Li, On the Approximation of Shortest Common Supersequences and Longest Common Subsequences, manuscript, 1992

M. Li and P. Vitányi, Kolmogorov complexity and its applications. In Handbook of Theoretical Computer Science, (Managing Editor: J. van Leeuwen), 1990, pp. 189-254, Elsevier/MIT Press.

• Jack H. Lutz:

J.H. Lutz, Almost Everywhere High Nonuniform Complexity, Journal of Computer and System Sciences 44 (1992), 220-258

Juedes, Lathrop, and Lutz, Computational Depth and Reducibility (to appear in ICALP'93 and Theoretical Computer Science)

• Christoph Meinel:

J. Gergov, Ch. Meinel, Analysis and Manipulation of Boolean Functions in Terms of Decision Graphs, Proc. WG'92, LNCS 657, 310-320

J. Gergov, Ch. Meinel, Frontiers of Feasible and Probabilistic Feasible Boolean Manipulation, Proc. STACS'93, LNCS 665, 576-585, 1993

27

Page 28: Descriptional Complexity


• Pekka Orponen:

P. Orponen, Ker-I Ko, U. Schöning, O. Watanabe, Instance Complexity. Report A-1990-6, Univ. of Helsinki, Dept. of Computer Science; to appear in J. ACM.

• Edwin Pednault:

E. Pednault, Inferring Probabilistic Theories from Data, Proc. 7th National Conf. on Artificial Intelligence, 624-628, Morgan Kaufmann Publishers (San Mateo, California), 1988

E. Pednault, Some Experiments in Applying Inductive Inference Principles to Surface Reconstruction, Proc. 11th Intl. Joint Conf. on Artificial Intelligence, 1603-1609, Morgan Kaufmann Publishers (San Mateo, California), 1989

• Richard E. Stearns:

R.E. Stearns, H.B. Hunt III, An Algebraic Model for Combinatorial Problems (submitted to SICOMP, also Techn. Rep. 92-9, Computer Science Department, University at Albany, 1992)

R.E. Stearns, H.B. Hunt III, Power Indices and Easier Hard Problems, Math. Syst. Theory 23, 209-225, 1990

• Vladimir A. Uspensky:

V.A. Uspensky, Complexity and Entropy: an Introduction to the Theory of Kolmogorov Complexity, in: O. Watanabe (ed.), Kolmogorov Complexity and Computational Complexity, Springer 1992

V.A. Uspensky, A.L. Semenov, Algorithms: Main Ideas and Applications, Kluwer Academic Publishers (Dordrecht, the Netherlands) 1993

• Michiel van Lambalgen:

M. van Lambalgen, Independence, Randomness and the Axiom of Choice, Journal of Symbolic Logic, December 1992

• Paul Vitányi:

M. Li and P. Vitányi, An Introduction to Kolmogorov Complexity and its Applications, Springer-Verlag, $59.00 (US), 550 pages, 1993. ISBN 0-387-94053-7

C. Bennett, P. Gács, M. Li, P. Vitányi, W. Zurek, Thermodynamics of Computation and Information Distance, 25th ACM Symp. Theory of Computing, 1993

28

Page 29: Descriptional Complexity


• Andreas Weber:

T. Head, A. Weber, Deciding code related properties by means of finite transducers, in: Sequences II, (R. Capocelli, A. De Santis, U. Vaccaro, eds.), Springer 1993, pp. 260-272

A. Weber, On the valuedness of finite transducers, Acta Informatica 27, 749-780, 1990

A. Weber, R. Klemm, Economy of description for single-valued transducers, preprint, 1993

• Detlef Wotschke:

J. Goldstine, Ch.M.R. Kintala, D. Wotschke, On Measuring Nondeterminism in Regular Languages, Information and Computation, 86 (1990), 179-194

J. Goldstine, H. Leung, D. Wotschke, On The Relation between Ambiguity and Nondeterminism in Finite Automata, Information and Computation, 100 (1992), 261-270

C.M.R. Kintala, C. Pun, D. Wotschke, Concise Representations of Regular Languages, (1992), will appear in Information and Computation.

29

Page 30: Descriptional Complexity

Addresses:

José L. Balcázar, Univ. Politec. de Catalunya, Dept. L.S.I. (Ed. FIB), Pau Gargallo 5, E-08028 Barcelona, Spain; [email protected]; tel.: +34-3-401-7013 / 6944

Andrew R. Barron, Yale University, Department of Statistics, Yale Station, New Haven, CT, USA

Ronald V. Book, University of California at Santa Barbara, Department of Mathematics, Santa Barbara CA 93106, USA; [email protected]; tel.: +1-805-893-2778

Alexander Botta, Adaptive Computing Inc., 1310 Dickerson Road, Teaneck NJ 07666, USA; tel.: +1-201-837-1153

Imre Csiszár, Hungarian Academy of Sciences, Mathematical Institute, Pf. 127, H-1364 Budapest, Hungary; [email protected]; tel.: +36-1-1182-875

Carsten Damm, Universität Trier, Fachbereich IV - Informatik, Postfach 3825, W-5500 Trier; [email protected]

Jürgen Dassow, TU Magdeburg, Fakultät für Informatik, Postfach 4120, O-3010 Magdeburg; [email protected]; tel.: +49-391-5592-3543

Joachim Dengler, Vision Systems, Am Mühlrain 3, W-6903 Neckargemünd; [email protected]; tel.: +49-6223-6716

Josep Diaz, Universidad Politecnica de Cataluna, Dept. L.S.I. (Ed. FIB), Pau Gargallo 5, E-08028 Barcelona, Spain; [email protected]

Peter Gács, Boston University, Computer Science Department, 111 Cummington St., Boston MA 02215, USA; [email protected]; tel.: +1-617-353-2015

30


Page 31: Descriptional Complexity

László Gerencsér, Hungarian Academy of Sciences, Computer and Automation Institute, P. O. Box 63, Budapest H-1502, Hungary; [email protected]; tel.: +36-1-1667-483

Jozef Gruska, Universität Hamburg, FB Informatik, Vogt-Kölln-Str. 30, W-2000 Hamburg 54; [email protected]; tel.: +49-40-54715-242 (5672)

Juris Hartmanis, Cornell University, Department of Computer Science, 4130 Upson Hall, Ithaca NY 14853-7501, USA; [email protected]

Günter Hotz, Universität des Saarlandes, Fachbereich 14 - Informatik, Im Stadtwald 15, W-6600 Saarbrücken 11; [email protected]; tel.: +49-681-302-2414

Tao Jiang, McMaster University, Department of Computer Science, Hamilton Ontario L8S 4K1, Canada; [email protected]; tel.: +1-416-525-9140 / 7233

Helmut Jürgensen, University of Western Ontario, Department of Computer Science, Middlesex College, London Ontario N6A 5B7, Canada; [email protected]; tel.: +1-519-661-3560

Hing Leung, New Mexico State University, Department of Computer Science, Las Cruces NM 88003, USA; [email protected]; tel.: +1-505-646-1038

Ming Li, University of Waterloo, Department of Computer Science, Waterloo Ontario N2L 3G1, Canada; [email protected]

Jack H. Lutz, Iowa State University, Department of Computer Science, Ames IA 50011, USA; [email protected]; tel.: +1-515-294-9941

Christoph Meinel, Universität Trier, Fachbereich IV - Informatik, Postfach 3825, W-5500 Trier; [email protected]; tel.: +49-651-201-3954

31


Page 32: Descriptional Complexity

Pekka Orponen, University of Helsinki, Dept. of Computer Science, P.O. Box 26, SF-00014 Helsinki, Finland; [email protected]; tel.: +358-0-708-4224

Edwin Pednault, AT&T Bell Laboratories, 4E630, 101 Crawfords Corner Road, Holmdel NJ 07733, USA; [email protected]; tel.: +1-908-949-1074

Richard E. Stearns, SUNY at Albany, Dept. of Computer Science, 1400 Washington Avenue, Albany NY 12222, USA; [email protected]; tel.: +1-518-442-4275

John Tromp, CWI - Mathematisch Centrum, Kruislaan 413, NL-1098 SJ Amsterdam, The Netherlands; [email protected]; tel.: +31-20-592-4078

Vladimir A. Uspensky, Moscow State Lomonosow University, Department of Mathematical Logic and the Theory of Algorithms, V-234 Moscow, GUS; [email protected]; [email protected]

Michiel van Lambalgen, Universiteit van Amsterdam, Faculteit Wiskunde en Informatica, Plantage Muidergracht 24, NL-1081 TV Amsterdam, The Netherlands; [email protected]; tel.: +31-20-525-6060

Paul Vitányi, CWI - Mathematisch Centrum, Kruislaan 413, NL-1098 SJ Amsterdam, The Netherlands; [email protected]; tel.: +31-20-592-4124

Andreas Weber, Universität Frankfurt, Fachbereich Informatik (20), Robert-Mayer-Str. 11-15, W-6000 Frankfurt 11; [email protected]; tel.: +49-69-798-8176

Detlef Wotschke, Universität Frankfurt, Fachbereich Informatik (20), Robert-Mayer-Str. 11-15, W-6000 Frankfurt 11; [email protected]; tel.: +49-69-798-3800

32


Page 33: Descriptional Complexity
Page 34: Descriptional Complexity

Recently published and planned titles:

B. Booß, W. Coy, J.-M. Pflüger (editors): Limits of Information-technological Models, Dagstuhl-Seminar-Report; 31; 10.-14.2.92 (9207)

N. Habermann, W.F. Tichy (editors): Future Directions in Software Engineering, Dagstuhl-Seminar-Report; 32; 17.2.-21.2.92 (9208)

R. Cole, E.W. Mayr, F. Meyer auf der Heide (editors): Parallel and Distributed Algorithms; Dagstuhl-Seminar-Report; 33; 2.3.-6.3.92 (9210)

P. Klint, T. Reps, G. Snelting (editors): Programming Environments; Dagstuhl-Seminar-Report; 34; 9.3.-13.3.92 (9211)

H.-D. Ehrich, J.A. Goguen, A. Sernadas (editors): Foundations of Information Systems Specification and Design; Dagstuhl-Seminar-Report; 35; 16.3.-19.3.92 (9212)

W. Damm, Ch. Hankin, J. Hughes (editors): Functional Languages: Compiler Technology and Parallelism; Dagstuhl-Seminar-Report; 36; 23.3.-27.3.92 (9213)

Th. Beth, W. Diffie, G.J. Simmons (editors): System Security; Dagstuhl-Seminar-Report; 37; 30.3.-3.4.92 (9214)

C.A. Ellis, M. Jarke (editors): Distributed Cooperation in Integrated Information Systems; Dagstuhl-Seminar-Report; 38; 5.4.-9.4.92 (9215)

J. Buchmann, H. Niederreiter, A.M. Odlyzko, H.G. Zimmer (editors): Algorithms and Number Theory, Dagstuhl-Seminar-Report; 39; 22.06.-26.06.92 (9226)

E. Börger, Y. Gurevich, H. Kleine-Büning, M.M. Richter (editors): Computer Science Logic, Dagstuhl-Seminar-Report; 40; 13.07.-17.07.92 (9229)

J. von zur Gathen, M. Karpinski, D. Kozen (editors): Algebraic Complexity and Parallelism, Dagstuhl-Seminar-Report; 41; 20.07.-24.07.92 (9230)

F. Baader, J. Siekmann, W. Snyder (editors): 6th International Workshop on Unification, Dagstuhl-Seminar-Report; 42; 29.07.-31.07.92 (9231)

J.W. Davenport, F. Krückeberg, R.E. Moore, S. Rump (editors): Symbolic, algebraic and validated numerical Computation, Dagstuhl-Seminar-Report; 43; 03.08.-07.08.92 (9232)

R. Cohen, R. Kass, C. Paris, W. Wahlster (editors): Third International Workshop on User Modeling (UM'92), Dagstuhl-Seminar-Report; 44; 10.-13.8.92 (9233)

R. Reischuk, D. Uhlig (editors): Complexity and Realization of Boolean Functions, Dagstuhl-Seminar-Report; 45; 24.08.-28.08.92 (9235)

Th. Lengauer, D. Schomburg, M.S. Waterman (editors): Molecular Bioinformatics, Dagstuhl-Seminar-Report; 46; 07.09.-11.09.92 (9237)

V.R. Basili, H.D. Rombach, R.W. Selby (editors): Experimental Software Engineering Issues, Dagstuhl-Seminar-Report; 47; 14.-18.09.92 (9238)

Y. Dittrich, H. Hastedt, P. Schefe (editors): Computer Science and Philosophy, Dagstuhl-Seminar-Report; 48; 21.09.-25.09.92 (9239)

R.P. Daley, U. Furbach, K.P. Jantke (editors): Analogical and Inductive Inference 1992, Dagstuhl-Seminar-Report; 49; 05.10.-09.10.92 (9241)

E. Novak, St. Smale, J.F. Traub (editors): Algorithms and Complexity for Continuous Problems, Dagstuhl-Seminar-Report; 50; 12.10.-16.10.92 (9242)


Page 35: Descriptional Complexity

J. Encarnação, J. Foley (editors): Multimedia - System Architectures and Applications, Dagstuhl-Seminar-Report; 51; 02.11.-06.11.92 (9245)

F.J. Rammig, J. Staunstrup, G. Zimmermann (editors): Self-Timed Design, Dagstuhl-Seminar-Report; 52; 30.11.-04.12.92 (9249)

B. Courcelle, H. Ehrig, G. Rozenberg, H.J. Schneider (editors): Graph-Transformations in Computer Science, Dagstuhl-Seminar-Report; 53; 04.01.-08.01.93 (9301)

A. Arnold, L. Priese, R. Vollmar (editors): Automata Theory: Distributed Models, Dagstuhl-Seminar-Report; 54; 11.01.-15.01.93 (9302)

W. Cellary, K. Vidyasankar, G. Vossen (editors): Versioning in Database Management Systems, Dagstuhl-Seminar-Report; 55; 01.02.-05.02.93 (9305)

B. Becker, R. Bryant, Ch. Meinel (editors): Computer Aided Design and Test, Dagstuhl-Seminar-Report; 56; 15.02.-19.02.93 (9307)

M. Pinkal, R. Scha, L. Schubert (editors): Semantic Formalisms in Natural Language Processing, Dagstuhl-Seminar-Report; 57; 23.02.-26.02.93 (9308)

W. Bibel, K. Furukawa, M. Stickel (editors): Deduction, Dagstuhl-Seminar-Report; 58; 08.03.-12.03.93 (9310)

H. Alt, B. Chazelle, E. Welzl (editors): Computational Geometry, Dagstuhl-Seminar-Report; 59; 22.03.-26.03.93 (9312)

H. Kamp, J. Pustejovsky (editors): Universals in the Lexicon: At the Intersection of Lexical Semantic Theories, Dagstuhl-Seminar-Report; 60; 29.03.-02.04.93 (9313)

W. Strasser, F. Wahl (editors): Graphics & Robotics, Dagstuhl-Seminar-Report; 61; 19.04.-22.04.93 (9316)

C. Beeri, A. Heuer, G. Saake, S. Urban (editors): Formal Aspects of Object Base Dynamics, Dagstuhl-Seminar-Report; 62; 26.04.-30.04.93 (9317)

R.V. Book, E. Pednault, D. Wotschke (editors): Descriptional Complexity, Dagstuhl-Seminar-Report; 63; 03.05.-07.05.93 (9318)

H.-D. Ehrig, F. von Henke, J. Meseguer, M. Wirsing (editors): Specification and Semantics, Dagstuhl-Seminar-Report; 64; 24.05.-28.05.93 (9321)

M. Droste, Y. Gurevich (editors): Semantics of Programming Languages and Algebra, Dagstuhl-Seminar-Report; 65; 07.06.-11.06.93 (9323)

Ch. Lengauer, P. Quinton, Y. Robert, L. Thiele (editors): Parallelization Techniques for Uniform Algorithms, Dagstuhl-Seminar-Report; 66; 21.06.-25.06.93 (9325)

G. Farin, H. Hagen, H. Noltemeier (editors): Geometric Modelling, Dagstuhl-Seminar-Report; 67; 28.06.-02.07.93 (9326)

Ph. Flajolet, R. Kemp, H. Prodinger (editors): "Average-Case"-Analysis of Algorithms, Dagstuhl-Seminar-Report; 68; 12.07.-16.07.93 (9328)

J.W. Gray, A.M. Pitts, K. Sieber (editors): Interactions between Category Theory and Computer Science, Dagstuhl-Seminar-Report; 69; 19.07.-23.07.93 (9329)

D. Gabbay, H.-J. Ohlbach (editors): Automated Practical Reasoning and Argumentation, Dagstuhl-Seminar-Report; 70; 23.08.-27.08.93 (9334)


Page 36: Descriptional Complexity