
From a Concurrent Logical Framework to Concurrent Logic Programming

Jeff Polakow

April 11, 2006

LolliMon [9] is a new monadic concurrent linear logic programming language which grew out of the analysis of the operational semantics of term construction for the concurrent logical framework (CLF) [16, 15, 4]. Alternatively, LolliMon can be seen as an extension of the linear logic programming language Lolli [8] with monadically encapsulated synchronous [1] formulas. The use of a monad allows LolliMon to cleanly integrate backward chaining and forward chaining proof search in one language; forward chaining "concurrent" executions are cleanly encapsulated, via the monad, inside backward chaining "serial" executions. This article attempts to elucidate the transition from logical framework to logic programming language; we will also point out some interesting aspects of our implementation and directions for future work. A prototype distribution of LolliMon is available online at www.cs.cmu.edu/~fp/lollimon.

CLF conservatively extends the linear logical framework (LLF) [3], itself a linear extension of the Edinburgh logical framework (LF) [7], with constructs allowing for the elegant encoding of concurrent computations. The LF family of logical frameworks are all based on dependently typed lambda calculus and rely upon the notion of canonical forms to ensure adequacy of object system representations (an "object system" is a system we wish to use the logical framework to reason about, in contrast to the "meta system", the logical framework itself; the term has nothing to do with object-oriented design); therefore these logical frameworks only contain types which admit canonical forms.

The existence of canonical forms for the LF family of logical frameworks additionally allows an operational semantics for term construction (finding a term of a given type) which corresponds to higher-order logic programming [13]. The type language of LF corresponds to (the freely generated fragment of) the formula language of λ-Prolog [11], and the operational semantics of term construction and of proof search coincide for the two systems. Similarly, the type language of LLF corresponds to the formula fragment of Lolli, a linear extension of λ-Prolog, and their operational semantics also coincide.

Although λ-Prolog and Lolli were developed before LF and LLF, one can view the two logic programming languages as implementations of the term construction algorithms for the two logical frameworks. The notion of uniform proof [12], which is central to Prolog-style logic programming and corresponds to canonical forms via the Curry-Howard isomorphism (and a translation from natural deduction to sequent calculus), provides the necessary mechanism for finding canonical terms of a given type. Similarly, techniques for linear resource management [2, 14, 10], which constrain the non-determinism of linear context splitting inherent in backward chaining linear logic proof search, provide the necessary machinery for tractable linear lambda term construction.
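
To give a concrete, if simplified, picture of the resource-management idea (the following Python sketch is illustrative only and much cruder than the techniques of [2, 14, 10]): instead of guessing how to split the linear context between subgoals, the context is threaded through goal-directed search as an input, and the unconsumed remainder is returned as an output. The sketch is propositional, commits to the first way of using a resource, and is therefore incomplete.

    def prove(goal, linear, clauses):
        # Goal-directed backward chaining over propositional atoms.  The list
        # `linear` holds the linear hypotheses still available; it is threaded in
        # and the unconsumed remainder is threaded out, instead of guessing how
        # to split the context between subgoals.
        if goal in linear:                      # consume a linear hypothesis
            rest = list(linear)
            rest.remove(goal)
            return True, rest
        for head, body in clauses:              # backchain on a persistent clause
            if head == goal:
                ok, rest = prove_all(body, linear, clauses)
                if ok:
                    return True, rest
        return False, None

    def prove_all(goals, linear, clauses):
        for g in goals:
            ok, linear = prove(g, linear, clauses)
            if not ok:
                return False, None
        return True, linear

    def solve(goal, linear, clauses):
        # A query succeeds only if every linear hypothesis is consumed.
        ok, leftover = prove(goal, linear, clauses)
        return ok and leftover == []

    print(solve("c", ["a", "b"], [("c", ["a", "b"])]))   # True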

A significant property of the type systems for LF and LLF is that the shape of a term of non-atomic type is independent of the current variable context; types with this property are called asynchronous [1]. Thus a term of type α → β will have the form λx:α. m, and a term of type α & β will have the form ⟨m, n⟩; this allows proof search in λ-Prolog and Lolli to be goal-directed. However, the type language of CLF, which includes synchronous types such as ⊗, does not have this property. A sequent derivation of α ⊗ β =⇒ α ⊗ β must first decompose the hypothesis before the goal:

    ------- init        ------- init
    α =⇒ α              β =⇒ β
    ------------------------------- ⊗R
            α, β =⇒ α ⊗ β
    ------------------------------- ⊗L
           α ⊗ β =⇒ α ⊗ β

In general, the derivation of any synchronous formula can require decomposing hypotheses before the synchronous goal. To allow synchronous types and still have a notion of canonical forms, CLF introduces a monadic type constructor, {}, which is used to mark all occurrences of synchronous types; thus CLF disallows terms of type α ⊗ β, but does allow terms of type {α ⊗ β}. By restricting synchronous types to occur only within the monadic type constructor, CLF regains a notion of canonical forms where a term of type {α} has the shape {let {p1} = e1 in ... let {pn} = en in e}, where each pi is a variable pattern whose shape is determined by the type of ei, a term representing the decomposition of some monadic hypothesis; e is a term of type α; and we have overloaded {} to also be a term constructor.

In other words, canonical terms of monadic synchronous type are a (possibly empty) sequence of operations on hypotheses followed by a term of the appropriate type; this exactly captures the need to decompose hypotheses before the goal when deriving a synchronous formula. The preceding notion of canonical form for CLF places no constraints on the sequence of hypothesis operations executed before returning to the goal. In this way synchronous types resemble atomic types: there is not necessarily a single term for a given atomic type. However, restricting the occurrence of synchronous types to being inside the monad effectively encapsulates the uncertainty in an otherwise canonical term.
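
For illustration only, the shape of these canonical terms can be captured by a small datatype; the following Python sketch is a propositional, untyped simplification whose names are invented here and are not CLF's:

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Var:                   # a variable
        name: str

    @dataclass
    class Lam:                   # term of type A -> B:  λx:A. m
        var: str
        body: "Term"

    @dataclass
    class Pair:                  # term of type A & B:  <m, n>
        fst: "Term"
        snd: "Term"

    @dataclass
    class Monad:                 # term of type {A}:  {e}
        expr: "MExpr"

    @dataclass
    class Let:                   # one step:  let {p} = e in ...
        pattern: List[str]       # variable pattern bound by the step
        rhs: "Term"              # decomposition of some monadic hypothesis

    @dataclass
    class MExpr:                 # canonical monadic expression: a (possibly
        lets: List[Let]          # empty) sequence of let-steps followed by a
        result: "Term"           # term of the underlying type

    Term = Union[Var, Lam, Pair, Monad]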

Operationally, the derivation of a monadic goal entails a switch from backward chaining, goal-directed proof search to forward chaining, context-directed proof search, in which the derivation is driven by the current hypotheses rather than by the current goal; after enough forward chaining has taken place, search can resume its goal-directed nature. Determining when to stop forward chaining is a matter of proof search strategy; there is in general no way of deciding when to stop, short of a global analysis which would be tantamount to finding a complete derivation. One strategy, which we have adopted in the implementation of LolliMon, is to forward chain until the context reaches a fixpoint.
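
A minimal Python sketch of this saturation strategy (illustrative only, not the LolliMon implementation; linear resources, unification, and the nested backward chaining discussed below are all omitted):

    def saturate(facts, rules):
        # Forward chain to a fixpoint: keep firing rules (premises, conclusion)
        # against the current context, committing to every new conclusion, until
        # a full pass over the rules adds nothing new.
        ctx = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if all(p in ctx for p in premises) and conclusion not in ctx:
                    ctx.add(conclusion)        # committed choice: never undone
                    changed = True
        return ctx

    rules = [(("p",), "q"), (("q",), "r"), (("r", "p"), "s")]
    print(saturate({"p"}, rules))              # {'p', 'q', 'r', 's'}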

The monadic formula constructor was intended to encapsulate synchronous formulas; however, its operational significance, i.e. switching from backward chaining to forward chaining, makes it useful even in the absence of synchronous formulas. In particular, the monad can be used, in conjunction with embedded implications, to syntactically separate distinct phases of a forward chaining computation [9].

Although there are no constraints placed on the sequence of forward chaining operations in a monadic term, the order of any two independent operations may be switched without affecting the meaning of the term. Intuitively, the operations on hypotheses all happen concurrently, with an ordering of events only forced by data dependencies and resource contention (when two hypotheses require the same linear hypothesis). Therefore the definition of equality for CLF allows for directly encoding true concurrency; the system cannot differentiate between two terms which differ only in the order of independent operations. Many concrete examples of CLF encodings are provided in the technical report by Cervesato et al. [4].
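
To make "independent" concrete: if each forward chaining step records which hypotheses it consumes and which it produces, then two adjacent steps may be reordered exactly when neither touches what the other consumes or produces. A small Python check of this condition (an illustrative formulation, not CLF's official definition of concurrent equality):

    def independent(step1, step2):
        # Two steps commute when neither consumes a hypothesis the other
        # produces (no data dependency) and they do not compete for the same
        # linear hypothesis (no resource contention).
        c1, p1 = step1["consumes"], step1["produces"]
        c2, p2 = step2["consumes"], step2["produces"]
        return c1.isdisjoint(p2) and c2.isdisjoint(p1) and c1.isdisjoint(c2)

    s1 = {"consumes": {"coin"}, "produces": {"coffee"}}
    s2 = {"consumes": {"token"}, "produces": {"tea"}}
    s3 = {"consumes": {"coffee"}, "produces": {"awake"}}
    print(independent(s1, s2))   # True: the two steps may be reordered
    print(independent(s1, s3))   # False: s3 consumes what s1 produces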

The faithful operational translation of CLF's concurrent equality would be to treat (monadic-headed) hypotheses as concurrent processes, similar to rewriting rules [6] or linear concurrent constraints [5], which are turned on during a forward chaining stage. The current prototype version of LolliMon does not attempt concurrent execution and instead randomizes the order in which it tries to use each monadic hypothesis. Additionally, true concurrency is approximated by making the forward chaining phase of proof search committed choice. While this approach does prevent the system from finding two equivalent proofs, it is too restrictive and leads to incompleteness. We plan to explore truly concurrent execution in the future.

The implementation of LolliMon requires interleaving backtracking, goal-directed proof search with committed-choice, forward chaining proof search. As already stated, a goal formula can have a monadic goal nested inside and cause backchaining search to switch to forward chaining search. Likewise, a backchaining derivation can be spawned in the middle of forward chaining when decomposing an implication hypothesis. The arbitrary nesting of the two different search strategies poses interesting implementation challenges. The current version of LolliMon uses both success and failure continuations to manage control flow. In order to interleave backtracking proof search with committed-choice search, the failure continuation values are structured to reflect the nesting of backtracking and non-backtracking phases. Although this approach seems to work reasonably well, it feels too ad hoc, and we believe LolliMon's operational semantics and control flow can be specified much more cleanly.
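
The following Python sketch (illustrative only, restricted to propositional Horn clauses) shows the basic success/failure-continuation discipline: the failure continuation encodes the remaining alternatives, and a committed-choice region discards the alternatives created inside it by resuming with the outer failure continuation.

    def solve_goals(goals, clauses, sk, fk):
        # Prove a list of atomic goals.  `sk` is the success continuation (it
        # receives a failure continuation for requesting further answers); `fk`
        # is the failure continuation encoding the untried alternatives.
        if not goals:
            return sk(fk)
        return solve_atom(goals[0], clauses,
                          lambda fk1: solve_goals(goals[1:], clauses, sk, fk1),
                          fk)

    def solve_atom(goal, clauses, sk, fk):
        def try_from(i):
            if i == len(clauses):
                return fk()                      # no clause left: backtrack
            head, body = clauses[i]
            if head != goal:
                return try_from(i + 1)
            # If the body fails, fall back to the remaining clauses.
            return solve_goals(body, clauses, sk, lambda: try_from(i + 1))
        return try_from(0)

    def committed(goal, clauses, sk, fk):
        # Committed-choice region: if the sub-search succeeds once, discard the
        # alternatives it accumulated by resuming with the outer failure
        # continuation, so search never backtracks into this region.
        return solve_atom(goal, clauses, lambda _inner_fk: sk(fk), fk)

    clauses = [("p", ["q"]), ("q", []), ("r", [])]
    print(solve_goals(["p", "r"], clauses, lambda fk: True, lambda: False))  # True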

In order to check whether or not the context has reached a fixpoint, or has saturated, we need to know whether a given formula is already in the context. To keep the saturation check tractable, the current implementation uses a version of a discrimination tree to store the entire context; thus we can also make use of indexing to more efficiently solve atomic goals. In order to match the depth-first Prolog-style search semantics, we sacrifice some amount of sharing to maintain clause order information when storing backchaining program clauses; however, monadic clauses, and clauses assumed during forward chaining, are always stored with maximal sharing.
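
A drastically simplified picture of such an index (an illustrative sketch, not the prototype's data structure): ground formulas are flattened to the preorder list of their symbols and stored in a trie, so the saturation check is a single path lookup and shared prefixes are stored once. A real discrimination tree additionally handles variables and retrieves unifiable candidates.

    def flatten(term):
        # Preorder list of symbols of a ground term; terms are nested tuples
        # such as ("path", "a", ("f", "b")).
        if isinstance(term, tuple):
            out = [term[0]]
            for arg in term[1:]:
                out.extend(flatten(arg))
            return out
        return [term]

    class Index:
        # Trie keyed on flattened terms, supporting insertion and exact
        # membership only; shared prefixes are stored once.
        def __init__(self):
            self.root = {}

        def insert(self, term):
            node = self.root
            for sym in flatten(term):
                node = node.setdefault(sym, {})
            node["$end"] = True

        def member(self, term):
            node = self.root
            for sym in flatten(term):
                if sym not in node:
                    return False
                node = node[sym]
            return "$end" in node

    idx = Index()
    idx.insert(("path", "a", "b"))
    print(idx.member(("path", "a", "b")), idx.member(("path", "a", "c")))  # True False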

Although we can efficiently decide whether or not a given formula is already in our context, we do not yet have a good way of reaching context saturation during forward chaining. Currently, our method of reaching saturation is to try every available formula in turn; if any formula produces a change in the context during a pass, we schedule another pass in which every formula will be tried again, stopping only when a pass produces no change. This is highly inefficient, since many formulas will be independent of each other. We are currently working on adding information to our discrimination tree structure which will let us turn off a branch of the tree until some specific event happens which triggers the formulas in that branch to become active again. We hope to be able to leverage standard techniques from data flow analysis for logic programs and spreadsheets.

Finally, we note that the current committed-choice forward chaining strategy, which continues until saturation (a fixed point) is reached, is neither the only strategy nor even the best one. We chose this strategy because it is relatively simple for both the implementor and the user, and it approximates the CLF semantics. However, the strategy is incomplete due to the side effects caused by linear hypotheses as well as unification. We would like to explore other forward chaining strategies. In particular, a (non-deterministically) complete strategy would allow us to use LolliMon for model checking, since a negative result would really mean that no proof exists.

References

[1] J.-M. Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2(3):297–347, 1992.

[2] I. Cervesato, J. S. Hodas, and F. Pfenning. Efficient resource management for linear logic proof search. Theoretical Computer Science, 232:133–163, 2000. Revised version of a paper in the Proceedings of the 5th International Workshop on Extensions of Logic Programming, Leipzig, Germany, March 1996.

[3] I. Cervesato and F. Pfenning. A linear logical framework. Information and Computation, 1999. To appear in the special issue with invited papers from LICS'96, E. Clarke, editor.

[4] I. Cervesato, F. Pfenning, D. Walker, and K. Watkins. A concurrent logical framework II: Examples and applications. Technical Report CMU-CS-02-102, Department of Computer Science, Carnegie Mellon University, 2002. Revised May 2003.


[5] F. Fages, P. Ruet, and S. Soliman. Linear concurrent constraint programming: Operational and phase semantics. Information and Computation, 165(1):14–41, 2001.

[6] T. Frühwirth. Theory and practice of constraint handling rules. Journal of Logic Programming, 37(1–3):95–138, Oct. 1998. Special Issue on Constraint Logic Programming.

[7] R. Harper, F. Honsell, and G. Plotkin. A framework for defining logics. Journal of the Association for Computing Machinery, 40(1):143–184, Jan. 1993.

[8] J. S. Hodas and D. Miller. Logic programming in a fragment of intuitionistic linear logic. Information and Computation, 110(2):327–365, 1994. Extended abstract in the Proceedings of the Sixth Annual Symposium on Logic in Computer Science, Amsterdam, July 15–18, 1991.

[9] P. López, F. Pfenning, J. Polakow, and K. Watkins. Monadic concurrent linear logic programming. In PPDP '05: Proceedings of the 7th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming, pages 35–46, New York, NY, USA, 2005. ACM Press.

[10] P. López and J. Polakow. Implementing efficient resource management for linear logic programming. In F. Baader and A. Voronkov, editors, Eleventh International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR'04), pages 528–543, Montevideo, Uruguay, Mar. 2005. Springer-Verlag LNAI 3452.

[11] D. Miller and G. Nadathur. Higher-order logic programming. In E. Shapiro, editor, Proceedings of the Third International Logic Programming Conference, pages 448–462, London, June 1986.

[12] D. Miller, G. Nadathur, F. Pfenning, and A. Scedrov. Uniform proofs as a foundation for logic programming. Annals of Pure and Applied Logic, 51:125–157, 1991.

[13] F. Pfenning. Logic programming in the LF logical framework. In G. Huet and G. Plotkin, editors, Logical Frameworks, pages 149–181. Cambridge University Press, 1991.

[14] J. Polakow. Linearity constraints as bounded intervals in linear logic programming. Journal of Logic and Computation, 2006. To appear. Extended version of a paper in the Proceedings of the Workshop on Logics for Resources, Processes, and Programs, Turku, Finland, July 2004.

[15] K. Watkins, I. Cervesato, F. Pfenning, and D. Walker. A concurrent logical framework I: Judgments and properties. Technical Report CMU-CS-02-101, Department of Computer Science, Carnegie Mellon University, 2002. Revised May 2003.


[16] K. Watkins, I. Cervesato, F. Pfenning, and D. Walker. A concurrent logical framework: The propositional fragment. In S. Berardi, M. Coppo, and F. Damiani, editors, Types for Proofs and Programs, pages 355–377. Springer-Verlag LNCS 3085, 2004. Revised selected papers from the Third International Workshop on Types for Proofs and Programs, Torino, Italy, April 2003.
