microLogic
1 Introduction
This is a literate Clojure implementation of miniKanren, a logic programming system which can be embedded in functional programming languages. It's meant to be an introduction to the inner workings of miniKanren and of core.logic, helping you to understand more completely what's happening when you use those systems.
For maximum enjoyment, the reader should be familiar with Clojure, and in particular with protocols. In lieu of that, basic understanding of lisp syntax and a read through something like http://clojure-doc.org/articles/language/polymorphism.html should be sufficient.
This code was derived substantially from microKanren; for much more information, see the bibliography at the end.
In the (not entirely unlikely) event that you find errors in this document, please raise them as issues at https://github.com/mullr/micrologic/issues. There will be no power-of-two reward checks, merely my warmest of thanks.
2 Functional Core
2.1 Logic Variables
The purpose of a logic program is to take an expression with some unknowns in it and try to find values for those unknowns that make the expression true. Here's an example logic program in English with an unknown x:
Either x is the beginning of the list ["banana", "orange", "apple"], or x is the number 1.
(The two values of x that make this expression true are 1 and "banana".)
These unknowns are called logic variables, or lvars for short. Since we're going to write our logic programs in Clojure, we need a way to represent them in that context.
(defrecord LVar [id])

(defn lvar [id]
  (LVar. id))

(defn lvar? [x]
  (instance? LVar x))
An lvar's id could be anything, but we use it like a serial number. The first lvar gets created with id 0, the next with id 1, and so on. There is some special syntax to help with that, which we'll see later on.
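Since LVar is a defrecord, two lvars with the same id are equal by value, which is what lets them serve as hash-map keys later on. A quick illustrative REPL check:

> (= (lvar 0) (lvar 0))
true
> (lvar? (lvar 0))
true
> (lvar? "banana")
false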
2.2 Substitutions and Walking
In the course of running a logic program, we'll be deciding on values for logic variables and testing those values against other parts of the program to see if it meets all the necessary criteria. The values are stored in a data structure called a substitution map (s-map for short).
A substitution map is just a regular clojure hash-map with lvars as its keys. Again consider this example:
Either x is the beginning of the list ["banana", "orange", "apple"], or x is the number 1.
{ (lvar 0) "banana" } is one substitution map that will make this expression true (as long as x is lvar 0).
There is a twist to the substitution map: in addition to regular values, lvars can be used in the value position as well. (This is called a triangular substitution.) Here is an example which implies a triangular substitution:
x is the beginning of the list [y, "orange", "apple"]. y is the string "banana".
One substitution map which makes this expression true is
{ (lvar 0) (lvar 1)
(lvar 1) "banana" }
2.2.1 Adding entries
Although a substitution map is nothing but a regular clojure map, we need some utility functions to deal with some subtleties of its behavior. The first is for adding new entries:
(defn add-substitution [s-map lvar value]
  (when s-map
    (assoc s-map lvar value)))
If you try to add something to a nil substitution map, we just return nil. The nil substitution map is output when there are no substitutions that can make an expression true. We'll see where this is important when we start to build up substitution maps.
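For example (illustrative output):

> (add-substitution {} (lvar 0) "banana")
{(lvar 0) "banana"}
> (add-substitution nil (lvar 0) "banana")
nil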
2.2.2 Looking up values
The other thing we do with a substitution map is to look up what value is associated with an lvar. It's just a hash-map, so why don't we just use the get function for this? Two reasons:
- If there is an lvar on the right-hand side, we need to look up that value, following such references until we get to a non-lvar.
- The operation we want is actually "give me the value of this object, given this substitution map". So if you call it with a value that's not an lvar, it should just give you that value back.
The function is called walk, since it follows lvar references on the right-hand side.
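walk is a protocol method, so that the lookup behavior can depend on the type of the thing being walked. The protocol declaration isn't reproduced in this section, but judging from the implementations below it looks something like this sketch:

(defprotocol IWalk
  (walk [u s-map]
    "Resolve u in s-map, following lvar references on the right-hand side."))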
(extend-protocol IWalk
  LVar
  (walk [u s-map]
    (if-let [val (get s-map u)]
      (recur val s-map)
      u))

  Object
  (walk [u s-map] u)

  nil
  (walk [u s-map] nil))
Here are some example walk invocations:
> (def s-map { (lvar 0) (lvar 1)
               (lvar 1) "banana" })
> (walk (lvar 0) s-map)
"banana"
> (walk (lvar 1) s-map)
"banana"
> (walk "mango" s-map)
"mango"
You might notice that it is easy to construct a substitution map with a loop in it: (walk a {a b, b a}) will not terminate. Most implementations would handle that with a check in add-substitution, but we omit it here for simplicity.
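If you're curious, such a check might look roughly like the following. This is only a sketch of the idea, not part of microLogic; a complete occurs check would also need to look inside sequence values once those are supported.

;; Hypothetical, simplified variant of add-substitution: refuse to bind
;; an lvar to a value that already walks back to that same lvar.
(defn add-substitution-checked [s-map lvar value]
  (when (and s-map
             (not= lvar (walk value s-map)))
    (assoc s-map lvar value)))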
2.3 Unification
unify is the way we build up substitution maps. Given two terms u and v, and an existing substitution map s-map, unify produces a new substitution map with mappings that will make u and v equal.
A term is simply something you can pass to unify. This is indeed a circular definition. But really, you can pass anything you like as a term. It's just that some types of terms have specialized behavior. LVars do, for example. When we add support for sequences, they will as well. Anything not handled explicitly is still a valid term, but it's just treated as an opaque value.
Let's walk through some examples.
- First, if either u or v is an lvar, unifying them with a value is just like assigning a value to that lvar in the substitution map.
> (unify (lvar 0) "banana" {})
{(lvar 0) "banana"}
> (unify "banana" (lvar 0) {})
{(lvar 0) "banana"}
- But what if the lvar already has a value? Then there are two possibilities. Either the two values agree:
> (unify (lvar 0) "banana" {(lvar 0) "banana"})
{(lvar 0) "banana"}
Or they do not, leading to a contradiction:
> (unify (lvar 0) "banana" {(lvar 0) "mango"})
nil
- When there's something unrelated in the substitution map, we'll just add to it:
> (unify (lvar 0) "banana" {(lvar 9) "squirrels"})
{(lvar 0) "banana", (lvar 9) "squirrels"}
- Finally, if we unify two variables that can't be resolved to a value in the substitution map (these variables are called fresh), we add a new entry.
> (def new-s (unify (lvar 1) (lvar 2) {(lvar 0) (lvar 1)}))
{(lvar 0) (lvar 1)
 (lvar 1) (lvar 2)}
> (walk (lvar 0) new-s)
(lvar 2)
> (walk (lvar 1) new-s)
(lvar 2)
You can see from the example that unification is transitive; if a unifies with b, and b unifies with c, then a unifies with c.
That's it for the basic cases. They seem pretty trivial, but our unifier has one more nice feature: extensibility. By extending the IUnifyTerms protocol, you can add specialized unification behavior for any other data type as well. For example, once we add support for sequences below you'll be able to do this:
> (unify [(lvar 0) 2 3] ["banana" 2 3] {})
{(lvar 0) "banana"}
Here's the implementation of unify:
(defn unify [u v s-map]
  (let [u (walk u s-map), v (walk v s-map)
        u-is-lvar (lvar? u), v-is-lvar (lvar? v)]
    (cond
      ;; Terms that walk to equal values always unify, but add nothing
      ;; to the substitution map
      (= u v) s-map

      ;; Unifying an lvar term with some other value creates a new entry in
      ;; the substitution map
      u-is-lvar (add-substitution s-map u v)
      v-is-lvar (add-substitution s-map v u)

      ;; Unifying two terms that walk to non-lvar values is delegated
      ;; to the polymorphic unify-terms function, from IUnifyTerms.
      :default (unify-terms u v s-map))))
Here are the basic IUnifyTerms definitions for Object and nil. If we get dispatched to either of these definitions, we know that neither u nor v walks to an lvar, that the values aren't equal, and we aren't doing some kind of extended unification that's defined elsewhere. Thus, they must not unify.
(extend-protocol IUnifyTerms
  Object
  (unify-terms [u v s-map] nil)

  nil
  (unify-terms [u v s-map] nil))
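A couple of illustrative cases that bottom out in these definitions:

> (unify "banana" "banana" {})
{}
> (unify "banana" "mango" {})
nil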
2.4 Lazy Streams
As we've alluded to above, when you run a logic program you can get more than one result. MiniKanren's approach to this is really unique, and is one of the most interesting parts of the program.
2.4.1 Motivation
First, I'll try to convince you that we need to make a special lazy stream mechanism. We'll start with a more formal definition of what we mean by a "logic program":
THE DEFINITION OF A LOGIC PROGRAM
- In miniKanren, the output of a logic program is a set of substitutions that make that program true.
- A logic program is built by combining smaller logic programs using the conjunction (and) and disjunction (or) operators.
Now, let's consider this logic program in particular:
Define Natural(x) to be true when either:
- x is the number 0
- x is greater than 0, and Natural(x - 1) is true
Clearly there are an infinite number of substitution maps that make this program true - {(lvar 0) 0}, {(lvar 0) 1}, {(lvar 0) 2}, and so on. That is, it has an infinitely large set of solutions.
Here's another motivating example:
Define Diverge(x) to be true when Diverge(x) is true
As you can probably guess from the name, this is a program which has no valid output. But it's a program you can write all the same, and it will take forever to produce no output. The example is contrived, but you can get into such a situation surprisingly (distressingly) easily when writing actual programs.
So we now know that:
- There are logic programs with infinitely many solutions
- There are logic programs which diverge
And, we want to compose such programs together in a sane way, to make more complex programs. We need a way to fairly take results from programs with infinitely many solutions, and to continue to service a part of the program which might diverge while continuing to work on its more productive brethren.
This is the motivation for lazy streams.
2.4.2 Definition of a lazy stream
There are three kinds of lazy streams:
- Empty
- Mature (head realized)
- Immature (head unrealized)
A mature stream is one in which the first element of the stream has a definite value that has already been computed. An immature stream, on the other hand, does not yet have a computed head. Instead it has a function which you can use to compute a mature (or empty) stream when you need it.
You can manipulate a stream with these functions:
- Merge two of them together with merge-streams
- Map a function over it with mapcat-stream, as long as that function itself produces streams.
- Realize its head with realize-stream-head. This will transition it to either mature or empty, performing any necessary work along the way.
- Convert it to a Clojure lazy sequence, with stream-to-seq
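These four operations make up the IStream protocol. Its declaration isn't reproduced in this section, but based on the implementations that follow it looks roughly like this sketch:

(defprotocol IStream
  (merge-streams [stream-1 stream-2]
    "Interleave the values of two streams.")
  (mapcat-stream [stream g]
    "Map g, itself a stream-producing function, over the stream.")
  (realize-stream-head [stream]
    "Do whatever work is needed to produce a mature or empty stream.")
  (stream-to-seq [stream]
    "Convert the stream to a Clojure lazy sequence."))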
2.4.3 Empty stream
(def empty-stream
  (reify IStream
    (merge-streams [this other-stream] other-stream)
    (mapcat-stream [this g] this)
    (realize-stream-head [this] this)
    (stream-to-seq [this] '())))
2.4.4 Mature streams (StreamNode)
A mature stream is represented by an instance of StreamNode. This is kind of like a linked list: head is the realized value that can be taken from the stream, and next is the stream which follows. But these streams are polymorphic; next isn't necessarily a StreamNode, just some other thing which extends the IStream protocol.
Note that if we have only StreamNodes (i.e. fully realized streams), merge-streams is equivalent to concat and mapcat-stream to mapcat.
(deftype StreamNode [head next]
  IStream
  (merge-streams [this other-stream]
    (StreamNode. head (merge-streams next other-stream)))
  (mapcat-stream [this g]
    (merge-streams (g head) (mapcat-stream next g)))
  (realize-stream-head [this] this)
  (stream-to-seq [this]
    (lazy-seq (cons head (stream-to-seq next)))))

(defn make-stream [s]
  (StreamNode. s empty-stream))
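For fully realized streams the behavior is easy to check at the REPL (illustrative output):

> (stream-to-seq (StreamNode. 1 (make-stream 2)))
(1 2)
> (stream-to-seq (merge-streams (make-stream 1) (make-stream 2)))
(1 2)
> (stream-to-seq (mapcat-stream (make-stream 1) #(make-stream (inc %))))
(2)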
2.4.5 Immature streams (IFn)
An immature stream is represented by a thunk (a function of no arguments). Executing the thunk does one unit of work and returns a new stream. This may in turn be a function, so you might have to keep calling the returned function many times until you get down to a realized value. This is exactly what realize-stream-head does here, by way of trampoline.
- Merging
Merging is tricky - this is the code that interleaves values from two infinite streams, and what makes sure a branch that diverges doesn't tie up the whole program.
clojure.lang.IFn
(merge-streams [this other-stream]
  #(merge-streams other-stream (this)))
Working from the inside out: we know that 'this' is a function because we're extending IStream onto IFn; calling it will perform one 'unit of work', whatever that might be. It returns a stream. Then we merge that stream with other-stream, the second parameter of this merge operation, but the order is reversed.
Finally, the above operation is all wrapped in a thunk. So we end up with a function that:
- performs the work for the first thing you constructed it with
- returns a new stream, putting the second thing you constructed it with at the head.
An imaginary repl session may make this clearer:
> (def a #(make-stream (+ 1 1)))
> (def b #(make-stream (+ 10 20)))
> (def s (merge-streams a b))
#(merge-streams #(make-stream (+ 10 20)) (#(make-stream (+ 1 1))))
> (def s' (s))
#(merge-streams (StreamNode. 2 nil) (#(make-stream (+ 10 20))))
> (def s'' (s'))
(StreamNode. 2 (StreamNode. 30 nil))
- Mapping

mapcat-stream is somewhat simpler.

clojure.lang.IFn
(mapcat-stream [this g]
  #(mapcat-stream (this) g))

The basic concept here is pretty straightforward: make a new thunk which, when executed later, will do some work and then mapcat g over the result. The thunk is used instead of just executing the function directly because mapcat-stream may be used inside a recursive function, which might diverge. This won't keep the function from diverging, but it will allow its execution to be interleaved with the rest of the program.

- Definition
Here is the whole definition of IStream for functions:
(extend-protocol IStream
  clojure.lang.IFn
  (merge-streams [this other-stream]
    #(merge-streams other-stream (this)))
  (mapcat-stream [this g]
    #(mapcat-stream (this) g))
  (realize-stream-head [this]
    (trampoline this))
  (stream-to-seq [this]
    (stream-to-seq (realize-stream-head this))))
2.4.6 Differences from Clojure's lazy sequences
This is very similar to Clojure's lazy sequence facility; why not just use that? Well, there are cases where you are combining the results of two logic programs (with merge-streams), but one of them might diverge. (Remember the example above?) The lazy stream system shown here handles that case by giving each stream equal processing time, evaluating its thunks as they come. In this sense, lazy streams are a scheduling system for processes which may return multiple outputs.
2.5 Goals
I told you before that the output of a logic program is a set of substitutions. That was a small lie, which we will now refine into the actual implementation.
2.5.1 Interpreter state
A state is a record containing a substitution map s-map and the id of the next unused logic variable, next-id.
(defrecord State [s-map next-id])

(defn make-state [s-map next-id]
  (State. s-map next-id))

(def empty-state
  (make-state {} 0))

(defn with-s-map [state s-map]
  (assoc state :s-map s-map))

(defn with-next-id [state next-id]
  (assoc state :next-id next-id))
2.5.2 Goal functions
A goal is a function which, given a state, returns a stream of states. It encapsulates a logic program. Give it an input state, and it will give you one output state for each way it can make that goal true (or succeed in the parlance of miniKanren and The Reasoned Schemer), given the information in the input state.
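To make this concrete, here are two illustrative goals written by hand (they are not part of microLogic itself): one that always succeeds, returning its input state unchanged, and one that always fails.

;; Hypothetical goals, for illustration only.
(defn succeed [state]
  (make-stream state))

(defn fail [state]
  empty-stream)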
2.5.3 Basic goal constructors
Rather than dealing with goals directly, we usually use goal constructors; given some parameter (usually a unification term or another goal), they will return a goal function which closes over it.
The most fundamental goal constructor is for unification. Given two terms u and v, this creates a goal that will unify them. The goal takes an existing state and returns (as a lazy stream) either a state with bindings for the lvars in u and v (using unify), or nothing at all if u and v cannot be unified.
(defn === [u v]
  (fn unify-goal [{:keys [s-map] :as state}]
    (if-let [s-map' (unify u v s-map)]
      (make-stream (with-s-map state s-map'))
      empty-stream)))
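Calling such a goal by hand looks like this (illustrative output):

> (map :s-map (stream-to-seq ((=== (lvar 0) "banana") empty-state)))
({(lvar 0) "banana"})
> (stream-to-seq ((=== 1 2) empty-state))
()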
call-fresh is a higher-order goal constructor that encapsulates the allocation of logic variables. You pass it your own goal-constructor, which takes as its single parameter the lvar that you want to use. call-fresh will make a new goal that allocates it for you, passing it in to your goal-constructor. For a more convenient way to do this, see the fresh macro below.
(defn call-fresh [goal-constructor]
  (fn fresh-goal [{:keys [s-map next-id] :as state}]
    (let [goal (goal-constructor (lvar next-id))]
      (goal (with-next-id state (inc next-id))))))
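For example (illustrative output):

> (map :s-map
       (stream-to-seq
        ((call-fresh (fn [x] (=== x "banana"))) empty-state)))
({(lvar 0) "banana"})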
One way to combine smaller goals into a new one is with logical disjunction, the 'or' operation. ldisj constructs a new goal that succeeds whenever goal-1 or goal-2 succeeds. Another way to look at this is that it interleaves the results of goal-1 and goal-2, using merge-streams.
(defn ldisj [goal-1 goal-2]
  (fn disj-goal [state]
    (merge-streams (goal-1 state) (goal-2 state))))
You can also combine goals with logical conjunction, the 'and' operation. lconj constructs a new goal that succeeds when both goal-1 and goal-2 succeed. It does this by running goal-2 on each output of goal-1. You can think of this as being like function composition for functions with multiple outputs.
(defn lconj [goal-1 goal-2]
  (fn conj-goal [state]
    (mapcat-stream (goal-1 state) goal-2)))
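Putting the pieces together by hand (illustrative output; the sugar in the next section makes this far less verbose):

> (map :s-map
       (stream-to-seq
        ((call-fresh
          (fn [x] (ldisj (=== x "banana") (=== x "mango"))))
         empty-state)))
({(lvar 0) "banana"} {(lvar 0) "mango"})

> (map :s-map
       (stream-to-seq
        ((call-fresh
          (fn [x] (lconj (=== x "banana") (=== x "mango"))))
         empty-state)))
()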
3 Sugar
At this point, we have everything we need to do logic programming! But it's very inconvenient. Here we define some syntax to make the task more bearable.
3.1 Auxiliary macros
delay-goal will wrap the given goal in a new one which, when executed, simply returns a thunk that wraps the goal. Recall that goal functions return streams, and that a function is a valid kind of stream (an immature stream). The goal will finally be executed when the thunk is evaluated by realize-stream-head.
This is especially useful when defining recursive goals.
(defmacro delay-goal [goal]
  `(fn delayed-goal-outer [state#]
     (fn delayed-goal-inner []
       (~goal state#))))
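As an illustration of why this matters for recursive goals, consider a goal that associates an lvar with 5 in infinitely many ways (this goal is not part of microLogic; it's a classic miniKanren example). Without delay-goal around the recursive call, constructing the goal would recurse forever before it ever ran; with it, results can be taken lazily:

;; Illustrative recursive goal: x unifies with 5, over and over.
(defn fives [x]
  (ldisj (=== x 5)
         (delay-goal (fives x))))

> (map :s-map
       (take 3 (stream-to-seq ((call-fresh fives) empty-state))))
({(lvar 0) 5} {(lvar 0) 5} {(lvar 0) 5})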
We also define extended versions of the ldisj and lconj functions. These handle multiple goal parameters, instead of just two. They also automatically wrap each goal with delay-goal, so you don't need to worry about adding delays yourself.
(This does have a performance cost, but speed is not the point of this implementation)
(defmacro ldisj+
  ([goal] `(delay-goal ~goal))
  ([goal & goals] `(ldisj (delay-goal ~goal) (ldisj+ ~@goals))))
(defmacro lconj+
  ([goal] `(delay-goal ~goal))
  ([goal & goals] `(lconj (delay-goal ~goal) (lconj+ ~@goals))))
3.2 Reification
In miniKanren, reification refers to extracting the desired values from the stream of states you get as a result of executing a goal.
When there are logic variables in the output which were not assigned a value, they are named _.0, _.1, and so on.
(defn reify-name [n]
  (symbol (str "_." n)))
reify-s creates a substitution map with reified values in it. It bases this on the supplied s-map parameter, but adds entries for each unknown that appears in the supplied term v, using names from reify-name.
(defn reify-s [v s-map]
  (reify-s* (walk v s-map) s-map))

(extend-protocol IReifySubstitution
  LVar
  (reify-s* [v s-map]
    (let [n (reify-name (count s-map))]
      (add-substitution s-map v n)))

  Object
  (reify-s* [v s-map] s-map)

  nil
  (reify-s* [v s-map] s-map))
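For example (illustrative output):

> (reify-s (lvar 0) {})
{(lvar 0) _.0}
> (reify-s (lvar 1) (reify-s (lvar 0) {}))
{(lvar 0) _.0
 (lvar 1) _.1}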
deep-walk is like walk, but instead of simply returning any non-lvar value, it will attempt to assign values to any lvars embedded in it. For example, (deep-walk a {a (1 2 c), c 3}) will give (1 2 3). (once we have the sequence extensions, which are defined below)
(defn deep-walk [v s-map]
  (deep-walk* (walk v s-map) s-map))

(extend-protocol IDeepWalk
  LVar
  (deep-walk* [v s-map] v)

  Object
  (deep-walk* [v s-map] v)

  nil
  (deep-walk* [v s-map] v))
Finally, we can define the actual reifier. Given a state, this will give you the reified value of the first lvar that was defined. If you're using the run macro defined below, this will be the first query variable.
(defn reify-state-first-var [{:keys [s-map]}]
  (let [v (deep-walk (lvar 0) s-map)]
    (deep-walk v (reify-s v {}))))
3.3 Programmer interface
Define the regular miniKanren conde form, a disjunction of conjunctions (an 'or' of 'ands'). Supposing that a and b are lvars,

(conde
  [(=== a 1) (=== b 2)]
  [(=== a 7) (=== b 12)])

will produce two results: {a 1, b 2} and {a 7, b 12}. (If you've never used miniKanren or core.logic before: 'e' means 'either'. Yes, the naming is odd. But really, it's best to remember that it's an 'or of ands' and to get used to the name)
(defmacro conde [& clauses]
  `(ldisj+ ~@(map (fn [clause] `(lconj+ ~@clause))
                  clauses)))
The fresh macro allocates some new logic variables and makes them available in its body. Really, it's a more convenient syntax for call-fresh. fresh lets you declare multiple logic variables at once, and it takes care of the function declaration mechanics for you.
(defmacro fresh [var-vec & clauses]
  (if (empty? var-vec)
    `(lconj+ ~@clauses)
    `(call-fresh
      (fn [~(first var-vec)]
        (fresh [~@(rest var-vec)]
          ~@clauses)))))
The body of fresh is a list of goals that is passed to lconj+, a logical 'and'. This goal:

(fresh [x y]
  (=== x 1)
  (=== y 2))

will give one result, {x 1, y 2}.
We define a small utility to invoke a goal with an empty state, as you might do when running a logic program from the top:
(defn call-empty-state [goal]
  (goal empty-state))
Finally, the run* macro gives us a lazy sequence of readable (reified) values.
(defmacro run* [fresh-var-vec & goals]
  `(->> (fresh [~@fresh-var-vec] ~@goals)
        call-empty-state
        stream-to-seq
        (map reify-state-first-var)))
The first parameter is a vector of logic variables which will be allocated and made available in the body; the first of these is reified and returned as a lazy sequence. This is called the 'query variable', and is often named 'q' by convention.
> (run* [q]
    (conde
      [(=== q 1)]
      [(=== q 7)]))
(1 7)
If you only want a few values (for example, if you know there are an infinite number of results), (run n [q] <<goals>>) can do that. It's equivalent to running take on the output of run*.
(defmacro run [n fresh-var-vec & goals]
  `(take ~n (run* ~fresh-var-vec ~@goals)))
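As an illustration, here is run used with a goal that has infinitely many solutions. (anyo is not part of microLogic; it's the classic miniKanren goal that succeeds with its argument any number of times.)

;; Illustrative: succeed with goal, or succeed with it again, forever.
(defn anyo [goal]
  (conde
    [goal]
    [(anyo goal)]))

> (run 3 [q] (anyo (=== q 1)))
(1 1 1)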
4 Sequences
The base logic programming system has unification and reification support for only lvars and basic values. But it's done in an extensible way. We'll now proceed to add sequence support to the base language.
4.1 Extending the core
When we unify sequences, we'd like to be able to indicate that an lvar should be associated with the tail of a sequence. In the scheme implementation, this is easy: by placing an lvar in the tail position of a linked list node (the cdr position of a cons node), the unification happens naturally when walking down the list.
Since Clojure disallows putting non-list items in linked-list cells (so-called 'improper lists'), we have to find another way to do it. core.logic solves this problem by defining its own LCons data type which does allow improper lists. We take a different approach here.
Whenever you want an improper list in the context of a logic program, you can signify it with the 'dot' sigil. For example: [1 2 dot a]. This is meant to evoke the Scheme and Common LISP notation for improper lists: (1 2 . 3). This will typically be transparent to the user on the programming side, since such lists will be automatically constructed by the 'conso' goal below.
(deftype Dot [])
(def dot (Dot.))
There are times when the user may see an improper list as the result of a query. In this case, print the sigil as "."
(defmethod print-method Dot [l ^java.io.Writer w]
  (.write w "."))
The first extension point is the unifier; we need to define what it means to unify a sequence with something else.
The basic case is simple: if unifying a sequence with another sequence, then we should unify each element together. Recursively, this means unifying their first elements and then the rest of the sequence.
The unifier also needs to be aware of the dot-notation described above. If either u or v is a sequence beginning with a dot, its second item must be an lvar which should be unified with the other term. For example, if we start with (=== [1 2 3] [1 dot a]), it will recurse down to (=== [2 3] [dot a]). Detecting that, we can call (=== [2 3] a), which gives the desired result.
(extend-protocol IUnifyTerms
  clojure.lang.Sequential
  (unify-terms [u v s]
    (cond
      (= dot (first u)) (unify (second u) v s)
      (= dot (first v)) (unify u (second v) s)
      (seq v) (->> s
                   (unify (first u) (first v))
                   (unify (rest u) (rest v))))))
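For example (illustrative output):

> (unify [(lvar 0) 2] [1 2] {})
{(lvar 0) 1}
> (unify [1 2 3] [1 dot (lvar 0)] {})
{(lvar 0) (2 3)}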
Next, we need to extend the reifier. When calling reify-s, we need to look for occurrences of logic variables inside the given parameter. This extends reify-s to make a recursive call on the first element of the sequence, then on the remaining elements.
(extend-protocol IReifySubstitution
  clojure.lang.Sequential
  (reify-s* [v s-map]
    (if (seq v)
      (reify-s (rest v) (reify-s (first v) s-map))
      s-map)))
deep-walk needs to be extended as well. As above, we handle sequences beginning with 'dot' as a special case, recursing with the second item of the sequence. For any other sequence, we effectively map the deep-walk function over it.
(extend-protocol IDeepWalk
  clojure.lang.Sequential
  (deep-walk* [v s-map]
    (cond
      (and (= dot (first v))
           (sequential? (second v)))
      (deep-walk (second v) s-map)

      (seq v)
      (cons (deep-walk (first v) s-map)
            (deep-walk (rest v) s-map))

      :default v)))
4.2 Sequence goals
Now we can define some user-level goals for sequences. First, conso says that out is the sequence with the head first and the tail rest.
(defn conso [first rest out]
  (if (lvar? rest)
    (=== [first dot rest] out)
    (=== (cons first rest) out)))
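A few illustrative queries:

> (run* [q] (conso 1 [2 3] q))
((1 2 3))
> (run* [q] (conso q [2 3] [1 2 3]))
(1)
> (run* [q] (conso 1 q [1 2 3]))
((2 3))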
firsto simply says that first is the head of out.
(defn firsto [first out] (fresh [rest] (conso first rest out)))
And resto, likewise, says that rest is the tail of out.
(defn resto [rest out] (fresh [first] (conso first rest out)))
emptyo is a way to say that s must be an empty sequence.
(defn emptyo [s] (=== '() s))
appendo says that out is the result of appending the sequence parameters seq1 and seq2.
(defn appendo [seq1 seq2 out]
  (conde
    [(emptyo seq1) (=== seq2 out)]
    [(fresh [first rest rec]
       (conso first rest seq1)
       (conso first rec out)
       (appendo rest seq2 rec))]))
You can do some interesting things with appendo. For example, this program will find all the ways you can append two lists to create [1 2 3 4 5]:
> (run* [q]
    (fresh [x y]
      (appendo x y [1 2 3 4 5])
      (=== q [x y])))
((() (1 2 3 4 5))
 ((1) (2 3 4 5))
 ((1 2) (3 4 5))
 ((1 2 3) (4 5))
 ((1 2 3 4) (5))
 ((1 2 3 4 5) ()))
5 Back matter
5.1 Bibliography
- The microKanren paper: http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf
- The microKanren scheme implementation: https://github.com/jasonhemann/microKanren
- core.logic, Clojure's real miniKanren implementation: https://github.com/clojure/core.logic
- Will Byrd's dissertation on miniKanren: https://github.com/webyrd/dissertation-single-spaced/raw/master/thesis.pdf
- Christophe Grand on the lazy stream monad, and mplus in particular: http://clj-me.cgrand.net/2012/01/30/the-reasoned-scheduler/
- The Reasoned Schemer: http://mitpress.mit.edu/books/reasoned-schemer
5.2 Differences from microKanren
Polymorphic dispatch via protocols is used in place of cond with type checks, where possible.
Clojure-native data types are used where appropriate:
- The substitution map is a clojure map instead of an alist.
- There is an LVar defrecord, instead of using a vector of the lvar id.
- We have an explicit StreamNode data type, rather than building on the built-in list type.
- Clojure doesn't allow improper lists; we emulate them with the 'dot' sigil.
Many names have changed to be compatible with clojure, and to be more approachable to those without knowledge of Scheme, monads or miniKanren.
uKanren | microLogic |
---|---|
c | id |
mplus | merge-streams |
bind | mapcat-stream |
pull | realize-stream-head |
mzero | empty-stream |
unit | make-stream |
Zzz | delay-goal |
conj | lconj |
disj | ldisj |
== | === |
5.3 Differences from core.logic
core.logic | microLogic |
---|---|
mplus | merge-streams |
bind | mapcat-stream |
pull | realize-stream-head |
mzero | empty-stream |
unit | make-stream |
== | === |