In the previous post, we saw that free monads could be described in terms of initial algebras. We also saw that the free completely iterative monad could be constructed in a similar way, but using final coalgebras. That post completely skipped explaining what a completely iterative monad is, and why they are interesting. That will be the objective of the current post. Along the way, we shall encounter several familiar characters from previous posts, including ideal monads, free algebras, Eilenberg-Moore algebras and final coalgebra constructions.

## Solving Equations

Solving equations is a central topic in algebra, one we encounter right from the mathematics we are taught in school. For this post, we shall be interested in solving more interesting classes of equations than we typically encounter in those early days. For a signature $\Sigma$, and $\Sigma$-algebra $A$, we are interested in solving systems of equations of the form

• $x_1 = t_1$
• $x_2 = t_2$
• $\;\vdots$

where each $t_i$ is a $\Sigma$-term over $X \uplus A$. The idea is that the $t_i$ are terms that can refer to the unknowns, and also to some constant elements in $A$. The first thing to note about such systems of equations is that we can define infinite cycles.

Example: Let $\Sigma = \{ \triangleleft, \triangleright \}$ be a signature with two binary operation symbols, and $A$ a $\Sigma$-algebra. Consider the two mutually recursive equations:

1. $x_1 = a_1 \triangleleft x_2$.
2. $x_2 = a_2 \triangleright x_1$.

These equations contain the three main components we are interested in: variables $x_1, x_2$ expressing unknowns, operation symbols $\triangleleft, \triangleright$, and constant values $a_1, a_2$ in the structure of interest. Intuitively, the solutions to this system of equations should be the values of the infinite “terms”

1. $x_1 = a_1 \triangleleft a_2 \triangleright a_1 \triangleleft a_2 \triangleright \ldots$.
2. $x_2 = a_2 \triangleright a_1 \triangleleft a_2 \triangleright a_1 \triangleleft \ldots$

(Really we should bracket everything to the right, but that would hurt readability. We shall assume that all such terms implicitly bracket to the right.) Of course, these are infinite objects, and so there is no genuine term we can evaluate to define these values in this way.
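Although no finite term denotes these values, the intended infinite “terms” can be built by corecursion. Here is a small sketch in Python, where the encoding and all names (`x1`, `x2`, `prefix`) are our own, purely illustrative choices: a possibly infinite term is a thunk that, when forced, reveals one layer, so the mutually recursive solutions can be written down directly and inspected to any finite depth.

```python
# A possibly infinite term is a thunk (zero-argument function) that,
# when forced, yields either ("const", name) for a constant leaf, or
# (op, left, right) where the children are again thunks.

def x1():
    return ("<|", lambda: ("const", "a1"), x2)   # x1 = a1 <| x2

def x2():
    return ("|>", lambda: ("const", "a2"), x1)   # x2 = a2 |> x1

def prefix(term, depth):
    """Render a finite prefix of a possibly infinite term."""
    if depth == 0:
        return "..."
    node = term()
    if node[0] == "const":
        return node[1]
    op, left, right = node
    return f"({prefix(left, depth - 1)} {op} {prefix(right, depth - 1)})"

print(prefix(x1, 3))   # (a1 <| (a2 |> (... <| ...)))
```

Because the thunks are only forced on demand, the cycle between $x_1$ and $x_2$ causes no trouble: we only ever materialise a finite prefix of the infinite terms above.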

In an ideal world, each such system of equations would have a unique solution. Unfortunately, the trivial equation $x = x$ can be solved by any element of the underlying algebra, so demanding unique solutions for all systems of equations would force all our algebras to be trivial. Instead, we shall demand unique solutions only for systems of equations in which no $t_i$ is a bare variable. These are referred to as guarded systems of equations.

The following example shows that we can actually consider simpler sets of guarded equations, and still retain the same expressive power.

Example: Using the same signature as the previous example, consider the following pair of equations:

1. $x_1 = a_1 \triangleleft a_2 \triangleleft x_2$.
2. $x_2 = a_3 \triangleright a_4 \triangleright x_1$.

The terms on the right hand side are more complex than the previous example, as they contain multiple operation symbols. By introducing further unknowns, we can describe a system of equations only using one operation in each term.

1. $x_1 = a_1 \triangleleft x'_1$.
2. $x'_1 = a_2 \triangleleft x_2$.
3. $x_2 = a_3 \triangleright x'_2$.
4. $x'_2 = a_4 \triangleright x_1$.

Assuming unique solutions exist for both systems of equations, the values for $x_1$ and $x_2$ must coincide.

Our terms now involve only one operation symbol, and a mixture of variables and constants from $A$. We can go further in trying to standardise the form of our equations, by separating out the constants. Again, we add more unknowns, adjusting the previous set of equations as follows:

1. $x_1 = x''_1 \triangleleft x'_1$.
2. $x'_1 = x''_2 \triangleleft x_2$.
3. $x_2 = x''_3 \triangleright x'_2$.
4. $x'_2 = x''_4 \triangleright x_1$.
5. $x''_1 = a_1$.
6. $x''_2 = a_2$.
7. $x''_3 = a_3$.
8. $x''_4 = a_4$.

Assuming a unique solution exists for these equations, the values for $x_1$ and $x_2$ must coincide with those of the previous formulations. If we view the elements $a_i$ as extra constant symbols in our signature, this is simply a set of equations using guarded terms.

Let’s call a term flat if it contains only one operation symbol. The strategy in the previous example can be generalised to convert an arbitrary family of equations into an equivalent family involving only flat terms or constants. For a signature $\Sigma$, such a family of equations can be expressed as a function:

$e: X \rightarrow \Sigma(X) + A$

We are interested in algebras $(A, a : \Sigma(A) \rightarrow A)$ such that for every such $e$ there exists a unique $e^{\dagger} : X \rightarrow A$ such that:

$e^{\dagger} = [a \circ \Sigma e^{\dagger}, \mathsf{id}] \circ e$

A $\Sigma$-algebra for which such unique solutions exist is referred to as a completely iterative algebra. The adjective “completely” emphasises that we have solutions for arbitrary systems of equations. An iterative algebra is one in which unique solutions exist for all finite systems of equations.

Most of the algebraic objects that are encountered in ordinary mathematics are not completely iterative.

Example: For the natural numbers with addition, the system of equations:

1. $x_1 = x'_1 + x_1$.
2. $x'_1 = 1$.

has no solutions.

Example: For the subsets of the natural numbers, with binary unions, the system of equations:

1. $x_1 = x'_1 \cup x_1$.
2. $x'_1 = \emptyset$.

has multiple solutions. In fact, any subset of natural numbers will do for $x_1$. The issue here is that $\cup$ has a unit element.

We could also consider the equation $x = x \cup x$, which also has multiple solutions. The essential problem now is that $\cup$ is idempotent.

We can think of both of these examples as sneakily allowing us to circumvent the restriction to guarded terms.
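There are, however, naturally occurring completely iterative algebras. Roughly, complete metric spaces whose operations are suitably contracting yield completely iterative algebras, with unique solutions delivered by Banach's fixed point theorem. The following Python sketch is purely illustrative: the averaging operation and the naive fixed-point-iteration solver are our own choices for one tiny system, not a general construction.

```python
# The algebra (R, m) with m(x, y) = (x + y) / 2. For the flat guarded
# system below, the induced map on assignments is contracting, so
# iterating it from any starting point converges to the unique solution.

def m(x, y):
    return (x + y) / 2.0

# Right-hand sides: ("op", u, v) applies m to unknowns u, v;
# ("const", c) is an element of the algebra.
system = {
    "x1": ("op", "x2", "x1"),   # x1 = x2 <> x1
    "x2": ("const", 1.0),       # x2 = 1
}

def solve(system, steps=60):
    assignment = {x: 0.0 for x in system}
    for _ in range(steps):
        assignment = {
            x: rhs[1] if rhs[0] == "const"
            else m(assignment[rhs[1]], assignment[rhs[2]])
            for x, rhs in system.items()
        }
    return assignment

sol = solve(system)
# x1 = (1 + x1) / 2 has the unique solution x1 = 1; iteration approaches it.
```

Note the contrast with the natural numbers example above: there $x_1 = 1 + x_1$ had no solution, while here the contracting operation forces exactly one.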

The free algebra over $X$ is simply the collection of all terms over $X$, with formal syntactic operations. Terms are finite objects. The intuition from our first example is that in order to solve potentially mutually recursive equations, what we need is the set of “finite and infinite terms”.

As we saw last time, the free algebra at $X$ is given by the initial algebra $(\mu(\Sigma + X), \mathsf{in})$, which we think of as a least fixed point. To add in the infinite terms, we instead consider the final coalgebra $(\nu(\Sigma + X), \mathsf{out})$. For an algebraic signature, the elements of $\nu(\Sigma + X)$ are trees (both finite and infinite) such that:

1. Each internal node with $n$ children is labelled by an $n$-ary operation symbol.
2. Each leaf is labelled by either a constant symbol, or an element of $X$.

Intuitively, the finite trees are the ordinary terms, and the infinite trees are the extra elements that allow us to solve genuinely mutually recursive systems of equations.

If we consider the full subcategory of $\mathsf{Alg}(\Sigma)$ consisting of the completely iterative algebras, and the corresponding restriction of the forgetful functor to $\mathsf{Set}$, then $(\nu(\Sigma + X), \mathsf{out}^{-1} \circ \kappa_1)$ is the free completely iterative algebra over $X$. Here $\kappa_1$ denotes the left coproduct injection $\Sigma(\nu(\Sigma + X)) \rightarrow \Sigma(\nu(\Sigma + X)) + X$.
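Concretely, in this free completely iterative algebra the unique solution $e^{\dagger}$ of a flat guarded system sends each unknown to the tree obtained by unfolding its equation indefinitely. Here is a Python sketch of that unfolding, with thunks standing in for laziness; the encoding and names (`solution`, `prefix`) are illustrative choices of ours.

```python
# e : X -> Sigma(X) + A, encoded as a dict from unknowns to either
# (op, u, v) with u, v unknowns, or ("const", a) for a constant leaf.
# The solution maps each unknown to a lazily unfolded tree.

def solution(system, x):
    """The (possibly infinite) solution tree for unknown x."""
    def tree():
        rhs = system[x]
        if rhs[0] == "const":
            return rhs
        op, u, v = rhs
        return (op, solution(system, u), solution(system, v))
    return tree

def prefix(term, depth):
    """Render a finite prefix of a possibly infinite tree."""
    if depth == 0:
        return "..."
    node = term()
    if node[0] == "const":
        return node[1]
    op, left, right = node
    return f"({prefix(left, depth - 1)} {op} {prefix(right, depth - 1)})"

system = {
    "x1": ("<|", "x2", "x1"),   # x1 = x2 <| x1
    "x2": ("const", "a"),       # x2 = a
}

print(prefix(solution(system, "x1"), 3))   # (a <| (a <| (... <| ...)))
```

Uniqueness is plausible from this picture: any two solutions must agree on every finite prefix of these trees.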

When all these final coalgebras exist, we obtain a free completely iterative algebra functor, and the resulting adjunction induces a monad. This is the monad that we described as the free completely iterative monad in the previous post.

We’ve now recovered the monad known as the free completely iterative monad from the previous post, via an analysis of unique solutions for certain families of mutually recursive equations. What we still haven’t done is sort out the question of exactly what a completely iterative monad is. That is the problem we turn to now. Again, solutions of mutually recursive guarded equations will be important.

We will now remove the restriction to equations involving only flat terms, and consider solutions to systems of equations involving more general terms. Abstractly, such a system of equations will be encoded as a function:

$e : X \rightarrow \mathbb{T}(X + Y)$

Here, $\mathbb{T}$ is some monad, $X$ is the object of unknowns, and $Y$ is an object of parameters. We will look for solutions to this system of equations in an Eilenberg-Moore algebra $(A, a)$. A solution for $e$, given an assignment $f : Y \rightarrow A$ for the parameters, is a function $e^{\dagger} : X \rightarrow A$ such that:

$e^{\dagger} = a \circ \mathbb{T}([e^{\dagger}, f]) \circ e$

We would like unique solutions to such families of equations. As we noted previously, equations such as $x = x$ will cause us problems. This raises the question of how to identify, abstractly, the terms that aren’t just bare variables.

We previously encountered ideal monads, which informally are monads that “keep variables separate”. These are exactly the gadgets we need in order to talk about guarded equations abstractly. Recall that an ideal monad is a monad $(\mathbb{T},\eta,\mu)$ such that:

1. There exists an endofunctor $\mathbb{T}'$ with $\mathbb{T} = \mathsf{Id} + \mathbb{T}'$.
2. The unit is the left coproduct injection.
3. There exists $\mu' : \mathbb{T}' \circ \mathbb{T} \Rightarrow \mathbb{T}'$ such that $\mu \circ \kappa_2 \mathbb{T} = \kappa_2 \circ \mu'$.

A system of equations $e : X \rightarrow \mathbb{T}(X + Y)$ is said to be guarded if it factors through $[\kappa_2, \eta \circ \kappa_2] : \mathbb{T}'(X + Y) + Y \rightarrow \mathbb{T}(X + Y)$.

Finally, we are in a position to say what a completely iterative monad is!

A completely iterative monad is an ideal monad such that every guarded system of equations $e : X \rightarrow \mathbb{T}(X + Y)$ has a unique solution in the free algebra $(\mathbb{T}(Y), \mu_Y)$, with $\eta : Y \rightarrow \mathbb{T}(Y)$ the valuation for the parameters.

Example: Unsurprisingly, the free completely iterative monad, constructed using final coalgebras, is a completely iterative monad.

## Freedom!

We’re not quite done. We claimed that the final coalgebra construction yields the free completely iterative monad. You should always be suspicious of a claim that something is a free construction, if you’re unsure about the categories and functors involved. We now pin down these details.

Firstly, we admit to being a bit naughty in a previous post. We introduced ideal monads, but didn’t specify their morphisms. We address that omission now. An ideal monad morphism

$(\mathbb{S},\eta,\mu,\mathbb{S}', \mu') \rightarrow (\mathbb{T},\eta,\mu,\mathbb{T}', \mu')$

is a monad morphism $\sigma : \mathbb{S} \Rightarrow \mathbb{T}$ such that there exists a $\sigma' : \mathbb{S}' \Rightarrow \mathbb{T}'$ with:

$\sigma \circ \kappa_2 = \kappa_2 \circ \sigma'$

Intuitively, these are monad morphisms that don’t muddle up bare variables with guarded terms.

Let $\mathsf{CIM}(\mathcal{C})$ be the category of completely iterative monads on $\mathcal{C}$, and ideal monad morphisms between them. There is an obvious forgetful functor to the endofunctor category:

$\mathsf{CIM}(\mathcal{C}) \rightarrow [\mathcal{C}, \mathcal{C}]$

For an ideal monad $\mathbb{T}$, a natural transformation $\alpha : \Sigma \Rightarrow \mathbb{T}$ is ideal if it factors through the coproduct injection $\mathbb{T}' \Rightarrow \mathbb{T}$.

A completely iterative monad $\mathbb{T}$ is free over an endofunctor $\Sigma$ if there is a universal ideal natural transformation $\iota : \Sigma \Rightarrow \mathbb{T}$. That is, for every completely iterative monad $\mathbb{S}$ and ideal natural transformation $\alpha : \Sigma \Rightarrow \mathbb{S}$ there exists a unique ideal monad morphism $\hat{\alpha} : \mathbb{T} \Rightarrow \mathbb{S}$ such that

$\alpha = \hat{\alpha} \circ \iota$

You should be a bit suspicious at this point. This doesn’t look like a normal free object definition; in particular, the restriction to ideal natural transformations is a bit odd. It is a legitimate universal property, but using the term free is possibly at odds with the usual convention, relative to the forgetful functor above. In particular, as pointed out in the original paper, the existence of all such free objects does not imply the existence of an adjoint functor. Be careful when somebody on the internet tells you something is free!

## Wrapping Up

As usual, we have focussed on sets and functions, and conventionally algebraic examples, as far as possible. This was for expository reasons; the theorems in this area work in much greater generality.

A good starting point for further information is “Completely Iterative Algebras and Completely Iterative Monads” by Milius. The paper has plentiful examples, and many more technical results and details than those we have sketched in this post. You will also find pointers to iterative monads and algebras in the bibliography of that paper.

## 3 thoughts on “Completely Iterative Monads”

1. Zhixuan Yang says:

Great post! I really enjoy reading your blog on Sunday evening. The recursive equation story in this post is so nice.

I noticed “completely iterative algebras” are the dual to Ralf Hinze et al’s “para-recursive coalgebras” in their paper Conjugate Hylomorphisms. Indeed, cofree para-recursive coalgebras are finite little brothers of the usual cofree coalgebras.

Moreover, the concepts of ideal monads and completely iterative monads can be dualised to comonads (coideal/para-recursive comonads?), and cofree para-recursive coalgebras indeed form an example. However, I’m still struggling to come up with an intuitive story for such comonads, dual to your recursive equation story. How interesting!

(Forgive me if my comment is posted multiple times. The reply area of WordPress doesn’t seem to be working properly for me.)
