Relative Adjunctions

A large part of the richness of ordinary monad theory comes from the interplay between monads and adjunctions. Now that we have encountered relative monads, a natural next step is to look for analogous connections with adjunctions. That is the purpose of today's post.

Kleisli Triples and Adjunctions

As relative monads are formulated as a generalisation of Kleisli triples, it makes sense to use the relationship between adjunctions and that formulation of ordinary monads as a starting point, before attempting to generalise to the relative setting.

As we are favouring monads in Kleisli form, it is worth thinking about which formulation of adjunctions will fit most comfortably. There are many equivalent choices; we can describe adjunctions in terms of:

  • Cup and cap natural transformations satisfying snake equations.
  • A pair of functors satisfying a certain unique fill-in condition.
  • A natural bijection between hom sets \mathcal{C}(L(A),B) \cong \mathcal{D}(A,R(B)).

As a monad in Kleisli form has an extension operation:

(-)^* : \mathcal{C}(A,\mathbb{T}(B)) \rightarrow \mathcal{C}(\mathbb{T}(A),\mathbb{T}(B)),

which is a mapping between hom sets, this seems to connect well with the hom-set bijection formulation of adjunctions. So let's assume we have an adjunction, witnessed by a natural isomorphism:

\mathcal{C}(L(A),B) \xrightarrow{\theta_{A,B}} \mathcal{D}(A,R(B))

We wish to show that this induces a monad, using the extension form. The action on objects is:

X \mapsto RL(X)

For the unit map, we note that \theta_{A,L(A)} has type

\mathcal{C}(L(A),L(A)) \xrightarrow{\theta_{A,L(A)}} \mathcal{D}(A,R(L(A)))

so we take the components of our unit as

\eta_A = \theta_{A,L(A)}(\mathsf{id}).

Finally, we need to identify a suitable extension operation. The following composite has the right type:

\mathcal{D}(A,RL(B)) \xrightarrow{\theta^{-1}_{A,L(B)}} \mathcal{C}(L(A),L(B)) \xrightarrow{R_{A,B}} \mathcal{D}(RL(A),RL(B))

where R_{A,B} is the map that applies the functor R to a given hom set. More explicitly, we take:

f^* = R(\theta^{-1}_{A,L(B)}(f)).
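To make the recipe concrete, here is a minimal Python sketch of one instance: the free monoid ⊣ forgetful adjunction between \mathsf{Set} and \mathbf{Mon}, which induces the list monad. The encoding is my own (not from any library), and it only represents the case where the target monoid is itself a free monoid L(B), so that the homomorphic extension can use list concatenation as the multiplication.

```python
# Free monoid / forgetful adjunction, specialised to targets of the form L(B).
# L(A) = lists over A, R(M) = underlying set of the monoid M.

def theta(h):
    """C(L(A), L(B)) -> D(A, R(L(B))): precompose with the singleton insertion."""
    return lambda a: h([a])

def theta_inv(g):
    """D(A, R(L(B))) -> C(L(A), L(B)): the unique homomorphic extension."""
    return lambda xs: [y for x in xs for y in g(x)]

# The induced monad T = R . L on Set is the list monad:
unit = theta(lambda xs: xs)      # eta = theta(id), i.e. eta(a) = [a]
def ext(f):                      # f* = R(theta_inv(f))
    return theta_inv(f)
```

For example, `ext(lambda x: [x, x + 1])` doubles up a list with successors, exactly the Kleisli extension of the list monad.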

We have three axioms to check. Firstly, we calculate:

\eta^*_A = \theta(\mathsf{id})^* = R(\theta^{-1}(\theta(\mathsf{id}))) = R(\mathsf{id}) = \mathsf{id}.


f^* \cdot \eta_A = R(\theta^{-1}(f)) \cdot \theta(\mathsf{id}) = \theta(\theta^{-1}(f) \cdot \mathsf{id}) = \theta(\theta^{-1}(f)) = f,

where we use the naturality of \theta in the only non-trivial step.


g^* \cdot f^* = R(\theta^{-1}(g)) \cdot R(\theta^{-1}(f)) = R(\theta^{-1}(g) \cdot \theta^{-1}(f)) = R(\theta^{-1}(R(\theta^{-1}(g)) \cdot f)) = R(\theta^{-1}(g^* \cdot f)) = (g^* \cdot f)^*

Again, the only non-trivial step uses naturality. With this proof for ordinary monads under our belts, we’re ready to look at relative monads.

Relative adjunctions

We now want to tweak things to recover a J-relative monad from a similar situation to before. Previously, we constructed the unit \eta_A as the image of the identity under a map of type:

\mathcal{C}(L(A),L(A)) \rightarrow \mathcal{D}(A,RL(A))

For a J-relative monad, the component of the unit at A has type J(A) \rightarrow RL(A). This suggests we require a natural isomorphism of type:

\mathcal{C}(L(A),B) \xrightarrow{\theta_{A,B}} \mathcal{D}(J(A),R(B))

where J : \mathcal{J} \rightarrow \mathcal{D}, L : \mathcal{J} \rightarrow \mathcal{C} and R : \mathcal{C} \rightarrow \mathcal{D}. If we have such an isomorphism, we can follow an almost identical recipe to before to form a J-relative monad with:

  • Object map T(X) = RL(X).
  • Unit components \eta_A = \theta(\mathsf{id}) : J(A) \rightarrow RL(A).
  • Extension operation f^* = R(\theta^{-1}(f)).

The proofs of the three axioms are identical to the previous ones, up to trivial changes in type information.

Our calculations lead us to consider isomorphisms of hom sets of the form:

\mathcal{C}(L(A),B) \cong \mathcal{D}(J(A),R(B))

If we have such a natural isomorphism, we say that R has a left J-relative adjoint L. We could also consider the situation:

\mathcal{C}(L(A),J(B)) \cong \mathcal{D}(A,R(B))

in which case we say that L has a right J-relative adjoint R. Both situations are known as relative adjunctions.

Example: A functor F : \mathcal{C} \rightarrow \mathcal{D} is full and faithful if and only if \mathsf{Id} is its F-relative left adjoint, as that is the case when we have a natural isomorphism:

\mathcal{C}(A,B) \cong \mathcal{D}(F(A),F(B))

It follows that every full and faithful functor induces a relative monad.

We also have an obvious source of examples.

Example: Every ordinary adjunction is an \mathsf{Id}-relative adjunction.

We can also build new relative adjoints in a routine way.

Example: If we have a relative adjunction:

\mathcal{C}(L(A),B) \cong \mathcal{D}(J(A),R(B))

then for any F of appropriate type:

\mathcal{C}(L(F(A)), B) \cong \mathcal{D}(J(F(A)),R(B))

so L \circ F is J \circ F-relative left adjoint to R. Using the previous example, given an ordinary adjunction L \dashv R, we get relative adjunctions:

\mathcal{C}(L(F(A)),B) \cong \mathcal{D}(F(A),R(B))

so L \circ F is F-relative left adjoint to R.

Some words of caution

We have seen that left J-relative adjoints yield J-relative monads. Unfortunately relative adjunctions are not as well behaved as ordinary adjunctions, for example:

  • The situation is asymmetric. Left J-relative adjoints induce J-relative monads but not comonads, and unit maps of type J \Rightarrow RL, but no counit. On the other hand, right J-relative adjoints induce J-relative comonads but not monads, and counit maps of type LR \Rightarrow J, but no unit.
  • A functor determines its left J-relative adjoint up to isomorphism, but a left J-relative adjoint does not fully determine the corresponding right adjoint.
  • There is an equivalent formulation of relative adjunctions in terms of absolute Kan lifts. This might sound like it would lead to a tidier setting, more suitable for formal category theory. Unfortunately, it does not capture the right notion if we move to the setting of enriched category theory.


By examining the construction of Kleisli triples from ordinary adjunctions, we identified a variation on the notion of adjunction that induces relative monads. We have seen a small number of examples of relative adjunctions so far. Readers of this blog are probably (hopefully) asking themselves some natural questions, for example whether every relative monad arises from a relative adjunction, and whether analogues of the Kleisli and Eilenberg-Moore constructions exist in the relative setting.

Positive answers to these questions would provide stronger evidence that relative adjunctions are a reasonable concept. We shall explore these topics in future posts.

Further reading: Ulmer's “Properties of Dense and Relative Adjoint Functors” is a good place to start for more background, and is surprisingly readable for an older category theory paper.

Acknowledgements: Nathanael Arkor helpfully pointed out a blunder in an earlier version of this post, where I had muddled up Kan extensions and lifts in a remark.

Relative Monads

In today's post, we move away from the fancy 2-categorical formal monad theory we have been exploring recently and commence a new topic. Beginning in a fairly elementary manner, we consider a generalisation of the notion of monad which comes up frequently in practice: that of relative monads.

Does a monad have to be an endofunctor?

As we have seen many times, a monad consists of an endofunctor \mathbb{T} : \mathcal{C} \rightarrow \mathcal{C}, along with unit and multiplication natural transformations satisfying some equations. These equations say that this structure is a monoid in the endofunctor category [\mathcal{C},\mathcal{C}], with respect to the composition monoidal structure.

We will now consider how we can generalise the notion of monad so that \mathbb{T} can be an arbitrary functor rather than just an endofunctor. Let's explore how we might make this work.

Firstly, we fix an arbitrary functor \mathbb{T} : \mathcal{C} \rightarrow \mathcal{D}. As a first step to viewing \mathbb{T} as the functor component of a monad, we would like a suitable generalisation of unit for this structure. For an ordinary monad, this would be a natural transformation:

\eta : \mathsf{Id} \Rightarrow \mathbb{T}

Unfortunately, this won't type check now that \mathbb{T} has a more general type, as natural transformations must go between a pair of functors of the same type. To fix this problem, we shall introduce a second functor J : \mathcal{C} \rightarrow \mathcal{D}, and take our unit to be of type:

\eta : J \Rightarrow \mathbb{T}

The functor J is an extra parameter that didn’t appear in the definition of a conventional monad, so we are heading towards a definition that is relative to this choice of functor J. So maybe we’re looking for monoids in the functor category [\mathcal{C}, \mathcal{D}]?

To see if this is the case, we also need to consider the type of the multiplication natural transformation. For an ordinary monad, this has type:

\mathbb{T} \circ \mathbb{T} \Rightarrow \mathbb{T}.

Here we hit a harder problem than before: we cannot compose \mathbb{T} with itself, as the domain and codomain don't match now that we've removed the restriction to endofunctors. What to do?

We seem to have reached a bit of a dead end, so let's try addressing the same problem from a different direction. We have seen an alternative definition of monads before, using the extension or Kleisli form. Recall that this consists of three pieces of data:

  1. A function of type \mathbb{T} : \mathsf{obj}(\mathcal{C}) \rightarrow \mathsf{obj}(\mathcal{C}).
  2. An \mathsf{obj}(\mathcal{C})-indexed family of unit morphisms \eta_A : A \rightarrow \mathbb{T}(A).
  3. A family of extension operations on morphisms (-)^* : \mathcal{C}(A, \mathbb{T}(B)) \rightarrow \mathcal{C}(\mathbb{T}(A),\mathbb{T}(B)), indexed by pairs of \mathcal{C} objects.

We also require the following three axioms to hold:

  1. (\eta_A)^* = \mathsf{id}_{\mathbb{T}(A)}.
  2. f^* \cdot \eta_A = f.
  3. (g^* \cdot f^*) = (g^* \cdot f)^*.
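The three axioms can be checked concretely. Here is a minimal sketch (my own encoding) of a monad in Kleisli form: the Maybe monad on \mathsf{Set}, with the Python value None standing for failure (so, in this simplified representation, successful values must not themselves be None).

```python
# The Maybe monad in Kleisli (extension) form.

def eta(a):
    """Unit: include a value as a successful computation."""
    return a

def ext(f):
    """(-)^* : C(A, T(B)) -> C(T(A), T(B)): propagate failure."""
    return lambda ta: None if ta is None else f(ta)
```

The three axioms then hold: `ext(eta)` is the identity, `ext(f)(eta(a)) == f(a)`, and extension distributes over Kleisli composition.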

Using what we learned from our first attempt to generalise to more general functors, we are going to try to adjust this definition relative to a functor J : \mathcal{C} \rightarrow \mathcal{D}. So from our previous analysis, we will need a function

\mathbb{T} : \mathsf{obj}(\mathcal{C}) \rightarrow \mathsf{obj}(\mathcal{D})

and a family of unit maps

\eta_A : J(A) \rightarrow \mathbb{T}(A).

It remains to generalise the extension operations. Unlike with the definition of monad multiplication, the type of the extension operations for a monad do not involve any composites of the form \mathbb{T} \circ \mathbb{T}, so life is more straightforward. We simply adjust the types as follows:

(-)^* : \mathcal{D}(J(A),\mathbb{T}(B)) \rightarrow \mathcal{D}(\mathbb{T}(A),\mathbb{T}(B)).

The three equations above still type check, and so it is natural to require that they hold. This gives us our definition of J-relative monad, as a triple consisting of:

  1. A function \mathbb{T} : \mathsf{obj}(\mathcal{C}) \rightarrow \mathsf{obj}(\mathcal{D}).
  2. An \mathsf{obj}(\mathcal{C})-indexed family of unit morphisms \eta_A : J(A) \rightarrow \mathbb{T}(A).
  3. A family of extension operations on morphisms (-)^* : \mathcal{D}(J(A), \mathbb{T}(B)) \rightarrow \mathcal{D}(\mathbb{T}(A),\mathbb{T}(B)), indexed by pairs of \mathcal{C} objects.

These should satisfy the three equations above. As a first check that this definition is reasonable, we note the following implications.

  • The function \mathbb{T} extends to a functor of type \mathcal{C} \rightarrow \mathcal{D} with action \mathbb{T}(f) = (\eta_B \cdot J(f))^*.
  • Now that \mathbb{T} is a functor, we can show that the family of unit morphisms forms a natural transformation \eta : J \Rightarrow \mathbb{T}.
  • The extension operations (-)^* : \mathcal{D}(J(A),\mathbb{T}(B)) \rightarrow \mathcal{D}(\mathbb{T}(A), \mathbb{T}(B)) are natural in both variables. That is: (\mathbb{T}(g) \cdot f \cdot J(e))^* = \mathbb{T}(g) \cdot f^* \cdot \mathbb{T}(e).
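To make the definition concrete, here is a Python sketch of a standard example of a relative monad that is not an ordinary monad (the "vectors" example from the literature; the encoding and function names are my own). Take \mathcal{J} to be finite cardinals with J(n) = \{0,\dots,n-1\} in \mathsf{Set}, and \mathbb{T}(n) = \mathbb{R}^n, modelled as tuples. A morphism f : J(n) \rightarrow \mathbb{T}(m) is a choice of n vectors in \mathbb{R}^m, and its extension is multiplication by the matrix with those vectors as columns.

```python
# The "vectors" J-relative monad: J(n) = {0,...,n-1}, T(n) = R^n.

def eta(n):
    """Unit at n: J(n) -> T(n), sending index i to the i-th basis vector."""
    return lambda i: tuple(1 if j == i else 0 for j in range(n))

def ext(f, n, m):
    """Extension of f : {0..n-1} -> R^m to a linear map R^n -> R^m."""
    cols = [f(i) for i in range(n)]
    return lambda v: tuple(sum(v[i] * cols[i][j] for i in range(n))
                           for j in range(m))
```

Note that \mathbb{T} is genuinely not an endofunctor here: its source is finite cardinals and its target is (a sketch of) \mathsf{Set}.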

As further evidence that this definition is reasonable, it is a legitimate generalisation of ordinary monads.

Example: An \mathsf{Id}-relative monad is the same thing as an ordinary monad.

We also have the following source of many additional relative monads.

Example: Given an ordinary monad \mathbb{T} on a category \mathcal{D}, and a functor J : \mathcal{C} \rightarrow \mathcal{D}, we get a J-relative monad with object map

A \mapsto \mathbb{T}(J(A)),

unit morphisms

\eta_{J(A)} : J(A) \rightarrow \mathbb{T}(J(A)).

The extension maps for the ordinary monad \mathbb{T} restrict to act on morphisms of type J(A) \rightarrow \mathbb{T}(J(B)), giving the extension operation of our relative monad.
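This restriction can be sketched in a few lines of Python, taking \mathbb{T} to be the list monad on \mathsf{Set} and, purely for illustration, J : \mathbf{Fin} \rightarrow \mathsf{Set} with J(n) = \{0,\dots,n-1\}; the resulting J-relative monad sends n to lists over J(n).

```python
# Restricting the list monad along J(n) = {0,...,n-1}.

def eta(x):
    """The ordinary list-monad unit, taken at objects of the form J(n)."""
    return [x]

def ext(f):
    """The ordinary list-monad extension, restricted to morphisms of
    type J(n) -> List(J(m))."""
    return lambda xs: [y for x in xs for y in f(x)]
```

The unit and extension are literally those of the list monad; only the (implicit) typing has changed, which is the point of the example.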

The previous examples are often referred to as trivial, as they are built from ordinary monads. This underplays their importance, as in many cases it can be more natural to work with data in the form of a relative monad, rather than an ordinary monad, or to explore the relationship between the two. We will explore examples of this type in more detail in later posts, along with more general examples.


We have introduced a generalisation of the notion of monad, removing the restriction to endofunctors. I have to admit that when I initially encountered this definition, my first instinct was that the whole thing looked rather arbitrary, and a bit of a hack. I think this knee-jerk opinion was wrong, and that actually relative monads are a natural and important concept. This revised opinion is based on the following evidence:

  1. Relative monads occur naturally in practice, and also yield a starting point for further useful generalisation.
  2. It is possible to develop a useful body of theory at the level of relative monads, so the abstraction is sufficiently powerful to do useful mathematical work.
  3. Even when other formulations are possible, it is often convenient to present certain mathematical constructions in the form of relative monads, making them a practical mathematical tool.

We will see examples illustrating the claims above in future posts.

Another instructive aspect of relative monads is how we benefit from having multiple formulations of the same mathematical abstraction. In this case, it turned out to be more convenient to generalise the extension or Kleisli form of a monad. By contrast, the monoid formulation, often preferred by mathematicians, proved more challenging. We will return to the question of whether relative monads can be seen as monoids in a later post.

Further reading: The standard starting point for relative monads is probably the paper “Monads need not be endofunctors” by Altenkirch, Chapman and Uustalu. It is worth finding the longer journal version if possible, as this contains more details and proofs than the shorter conference paper that typically crops up first on internet searches. For those wanting more theory and recent developments, I’d recommend reading the PhD thesis of Nathanael Arkor, which also contains a very enjoyable introduction to the topic.

Acknowledgements: I am grateful to Zhixuan Yang who identified a confusing mathematical typo in an earlier version of this post.

Adjunctions of Kleisli Type

In this short post, we will focus on a very simple but useful fact about adjunctions. This fact is well known to experts, and crops up regularly in the literature. It might be less familiar to those learning the subject, but that is the situation we aim to address.

Adjunctions of Kleisli Type

Recall that for a monad \mathbb{T} on some category \mathcal{C}, there is a functor from the base category to the Kleisli category:

F_{\mathbb{T}} : \mathcal{C} \rightarrow \mathcal{C}_{\mathbb{T}}

with right adjoint

U_{\mathbb{T}} : \mathcal{C}_{\mathbb{T}} \rightarrow \mathcal{C}

forming what we refer to as the Kleisli adjunction.
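As a reminder of how the pieces fit together, here is a minimal Python sketch of the Kleisli adjunction for the list monad (my own encoding): a morphism A \rightarrow B in the Kleisli category is represented as a function A \rightarrow \mathsf{List}(B).

```python
# The Kleisli adjunction F_T -| U_T for the list monad.

def kleisli_compose(g, f):
    """Composition in C_T: substitute and flatten."""
    return lambda a: [c for b in f(a) for c in g(b)]

def F(f):
    """F_T on morphisms: a pure function f : A -> B becomes eta . f."""
    return lambda a: [f(a)]

def U(f):
    """U_T on morphisms: a Kleisli map f : A -> List(B) becomes its
    extension f* : List(A) -> List(B)."""
    return lambda xs: [b for a in xs for b in f(a)]
```

Composing `U` after `F` recovers the functor part of the list monad, as the Kleisli adjunction promises.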

The question we are interested in is if we are given a functor

F : \mathcal{C} \rightarrow \mathcal{D},

when is it the left adjoint of the Kleisli adjunction? It is worth pausing to think for a second about what exactly this should mean. The details will become clearer as we proceed.

It is obvious we should require a right adjoint

U : \mathcal{D} \rightarrow \mathcal{C}.

Clearly this is not sufficient on its own, as we can easily find adjunctions F \dashv U as above, where the category \mathcal{D} is not equivalent to a Kleisli category. One obvious example would be to consider the Eilenberg-Moore adjunction, as the Eilenberg-Moore category \mathcal{C}^{\mathbb{T}} is rarely equivalent to the Kleisli category. (Although there are examples where this is the case).

So what else should we require to pin things down correctly? It turns out we don’t need very much at all, just to examine a bit more carefully some machinery we’ve seen before. Recall that the Kleisli adjunction is initial amongst adjunctions inducing a particular monad. That is, for monad \mathbb{T} = U \circ F, there is a comparison functor

K_{\mathbb{T}} : \mathcal{C}_{\mathbb{T}} \rightarrow \mathcal{D}

which is the unique homomorphism from the Kleisli adjunction to the adjunction F \dashv U. This functor is exactly the device we need to answer questions like this. It is always full and faithful, which is a very good start. Furthermore, its action on objects is very simple:

K_{\mathbb{T}}(C) = F(C)

From here, we can make the following simple observations about the functor K_{\mathbb{T}}:

  • It is an isomorphism of categories if and only if it is bijective on objects, which is true if and only if F is bijective on objects.
  • It is an equivalence of categories if and only if it is essentially surjective on objects, which is true if and only if F is essentially surjective on objects.

Summing up, for a functor F : \mathcal{C} \rightarrow \mathcal{D}, with right adjoint U, inducing monad \mathbb{T} = U \circ F:

  • The category \mathcal{D} is canonically isomorphic to \mathcal{C}_{\mathbb{T}} if F is bijective on objects. This situation is sometimes referred to as an adjunction of Kleisli type.
  • The category \mathcal{D} is canonically equivalent to \mathcal{C}_{\mathbb{T}} if F is essentially surjective on objects.

The word canonically in the statements above simply means that the isomorphism / equivalence is the comparison functor.


This has been a somewhat technical post about a simple fact. I have run across this fact three times in the space of a month while reading various papers, so it seemed an opportune time to publicise this somewhat “expert friendly” observation.

A nice concise explanation of this situation is given in the answer to this MathOverflow post by Peter LeFanu Lumsdaine.

The Codensity Monad Transformer

In recent posts, we have encountered Kan extensions and the codensity monad, and explored some basic aspects of formal monad theory. This led to a representation result, relating monad morphisms whose domain is a codensity monad to monad actions. Today's post will continue those themes, yielding yet another approach to building and moving monads around.

Transferring Monads

We will work in terms of formal monad theory, in a 2-category \mathcal{C}. This is primarily to show that very little structure is actually needed for the result to go through, and it will require no complicated background. Readers that want to keep things a bit more concrete can think of \mathcal{C} as the 2-category of categories, functors and natural transformations.

If we have a monad (\mathbb{T}, \eta, \mu) on some 0-cell \mathcal{A}, and a 1-cell

\mathcal{A} \xrightarrow{F} \mathcal{B},

the question we would like to answer is: can we transfer the monad \mathbb{T} along F to give a monad on \mathcal{B}? Obviously this is a rather imprecise specification, so let's explore some possibilities.

We could assume that the right extension of F along itself exists:

\mathsf{Ran}_F(F) : \mathcal{B} \rightarrow \mathcal{B},

and this will induce a codensity monad on \mathcal{B}. This isn’t really in the spirit of the question though, as we have completely ignored the monad \mathbb{T} in our construction. So we consider this approach to be unsatisfactory, as would be other trivial solutions, such as simply taking the identity monad on \mathcal{B}.

To get the monad \mathbb{T} into the game, we modify our previous plan, and consider the right extension of F \circ \mathbb{T} along F:

\mathsf{Ran}_F(F \circ \mathbb{T}) : \mathcal{B} \rightarrow \mathcal{B}.

It turns out that this 1-cell carries a unique monad structure such that the universal 2-cell

\mathsf{run} : \mathsf{Ran}_{F}(F \circ \mathbb{T}) \circ F \Rightarrow F \circ \mathbb{T}

is an Eilenberg-Moore law. Our aim now is to use this extra criterion to deduce the required unit and multiplication.

The universal property of the required right extension can equivalently be written: for all \alpha : H \circ F \Rightarrow F \circ \mathbb{T} and \beta : H \Rightarrow \mathsf{Ran}_F(F \circ \mathbb{T}):

\alpha = \mathsf{run} \cdot (\beta \circ F) \;\Leftrightarrow\;  \hat{\alpha} = \beta

This formulation is useful as the two axioms of an Eilenberg-Moore law require that a pair of equations must hold, and the equivalence allows us to deduce further equations. Defining \mathbb{S} :=  \mathsf{Ran}_{F}(F \circ \mathbb{T}) to simplify notation, we find that:

  1. Plugging the unit preservation axiom into the left hand side of the right extension universal property, the unit of the monad on \mathcal{B} must be \widehat{(F \circ \eta)} : \mathsf{Id} \Rightarrow \mathbb{S}.
  2. Plugging the multiplication preservation axiom into the left hand side of the right extension universal property, we find that the multiplication must be \widehat{(F \circ \mu) \cdot (\mathsf{run} \circ \mathbb{T}) \cdot (\mathbb{S} \circ \mathsf{run})} : \mathbb{S} \circ \mathbb{S} \Rightarrow \mathbb{S}.

So there are unique choices of potential unit and multiplication for the resulting monad. It is then a routine calculation using the properties of extensions and the monad \mathbb{T} to show that the monad axioms are satisfied.

So we have found a construction that given a monad on a 0-cell \mathcal{A}, and a 1-cell F : \mathcal{A} \rightarrow \mathcal{B}, such that the right extension \mathsf{Ran}_F(F \circ \mathbb{T}) exists, there is a canonical monad induced on \mathcal{B}. We refer to this construction as the codensity monad transformer.

Example: If we take the monad \mathbb{T} in this construction to be the identity monad, we recover the ordinary codensity monad as a special case.

We saw a similar construction, building a new monad from a parameter monad and some additional data, in an earlier post. There, given a monad \mathbb{T} on \mathcal{A} and an adjunction L \dashv R : \mathcal{A} \rightarrow \mathcal{B}, the composite R \circ \mathbb{T} \circ L is a monad on \mathcal{B}. This result is known as Huber's construction.
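Huber's construction can be sketched concretely in Python. As an illustration (my own choice of instance), take the currying adjunction (- \times S) \dashv (S \rightarrow -) on \mathsf{Set} and let \mathbb{T} be the list monad; then R \circ \mathbb{T} \circ L sends A to S \rightarrow \mathsf{List}(A \times S), combining state with nondeterminism.

```python
# Huber's construction R . T . L for L = (- x S), R = (S -> -), T = list.
# A computation is a function from states to lists of (value, state) pairs.

def unit(a):
    """Return the value, leaving the state untouched."""
    return lambda s: [(a, s)]

def ext(f):
    """Extend f : A -> (S -> List(B x S)) to the whole monad: run the
    computation, then run f on each resulting value and state."""
    return lambda m: lambda s: [r for (a, s1) in m(s) for r in f(a)(s1)]
```

Readers familiar with functional programming will recognise this as (one order of) the composite of the state and list monad transformers.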


This result is mentioned in Street’s “Formal Theory of Monads”, although not as a theorem in that paper. Presumably the result was well known before that, although I am currently unaware of an original source, and would welcome further information.

As we worked formally in terms of an arbitrary 2-category, we see that the codensity monad transformer does not hinge on anything specific to categories, functors and natural transformations, or specific right Kan extension constructions in terms of certain limits. From this perspective, once you are comfortable with it, the abstraction of working formally actually simplifies the analysis as it removes irrelevant structure and other distracting details.

Further reading: The calculations to verify this result might be somewhat intimidating to readers new to (or less comfortable with) Kan extensions, although they are in principle quite simple. I would recommend using the graphical notation introduced in my co-author Ralf Hinze’s paper “Kan Extensions for Program Optimisation Or: Art and Dan Explain an Old Trick”. This notation might take a little bit of getting used to, but once digested I find it takes a lot of the pain out of reasoning with Kan extensions.

Monad maps and Algebras Redux

In a previous post, we noted that for a \mathsf{Set} monad \mathbb{T}, and set A, there is a bijective correspondence between:

  1. Eilenberg-Moore algebras for the monad \mathbb{T} with underlying set A.
  2. Monad morphisms of type \mathbb{T} \rightarrow ((-) \Rightarrow A) \Rightarrow A with codomain the continuation monad induced by A.

We also briefly remarked that this observation can be generalised beyond the category \mathsf{Set}. In this post, using some of the ideas we have learned in recent posts, we are going to explore relationships of this type between algebras and morphisms of monads in more detail.
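One direction of the correspondence is easy to sketch in Python (an illustrative encoding of my own): for the list monad on \mathsf{Set} and the algebra \mathsf{sum} : \mathsf{List}(\mathbb{Z}) \rightarrow \mathbb{Z}, the induced monad morphism into the continuation monad of \mathbb{Z} has components \tau(t)(k) = a(\mathbb{T}(k)(t)).

```python
# From an Eilenberg-Moore algebra to a monad morphism into the
# continuation monad, for T = list and algebra a = sum.

def tau(t):
    """Component T(X) -> ((X -> A) -> A): map the continuation over the
    list, then apply the algebra."""
    return lambda k: sum(k(x) for x in t)

def unit_list(x):
    return [x]

def unit_cont(x):
    return lambda k: k(x)
```

In particular `tau` sends list-monad units to continuation-monad units, as a monad morphism must.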


We previously encountered enriched monads. Recall that for a monoidal category \mathcal{V}, a \mathcal{V}-enriched category \mathcal{C} has each \mathcal{C}(A,B) an object of \mathcal{V}, rather than a mere set. We can also define \mathcal{V}-enriched functors and natural transformations which respect this structure.

Example: The category \mathbf{Pre} has objects preorders, and morphisms monotone maps. This category has products, and so can be seen as a monoidal category with respect to this structure. A \mathbf{Pre}-enriched category is a category in which:

  1. The collection of morphisms of type A \rightarrow B is a preorder.
  2. Composition of morphisms is monotone in both arguments. That is, f_1 \leq f_2 and g_1 \leq g_2 imply that g_1 \circ f_1 \leq g_2 \circ f_2.

A \mathbf{Pre}-functor is a functor which is also monotone in its action on morphisms.

If a category \mathcal{V} is monoidal closed, it is itself a \mathcal{V}-category, with hom object \mathcal{V}(A,B) given by the exponential A \multimap B. This is usually phrased as \mathcal{V} being enriched over itself.

Example: The category \mathbf{Pre} is Cartesian closed, and therefore is enriched over itself. The exponential A \Rightarrow B in \mathbf{Pre} is simply the collection of morphisms of that type, with the pointwise order on monotone functions.

If our base of enrichment \mathcal{V} is symmetric monoidal closed, then for a \mathcal{V}-category \mathcal{C} we can define the opposite category \mathcal{C}^{op} by setting \mathcal{C}^{op}(A,B) = \mathcal{C}(B,A), and defining the composition morphism of type

\mathcal{C}^{op}(B,C) \otimes \mathcal{C}^{op}(A,B) \rightarrow \mathcal{C}^{op}(A,C)

as the composite

\mathcal{C}(C,B) \otimes \mathcal{C}(B,A) \xrightarrow{\sigma} \mathcal{C}(B,A) \otimes \mathcal{C}(C,B) \xrightarrow{m_{C,B,A}} \mathcal{C}(C,A)

where \sigma is the symmetry, and m_{C,B,A} is the composition morphism in \mathcal{C}.

With these technical preliminaries in place, for a symmetric monoidal closed category \mathcal{V}, seen as enriched over itself, and a \mathcal{V}-object A, there is a \mathcal{V}-functor:

(-) \multimap A : \mathcal{V}^{op} \rightarrow \mathcal{V}

As in the unenriched case, this functor is adjoint to itself, and so induces a continuation or double dualisation \mathcal{V}-enriched monad ((-) \multimap A) \multimap A. We then get a generalisation of the previous result: for any \mathcal{V}-monad \mathbb{T} and \mathcal{V}-object A, there is a bijective correspondence between:

  1. Eilenberg-Moore algebras for the monad \mathbb{T}, with underlying object A.
  2. Monad maps \mathbb{T} \rightarrow ((-) \multimap A) \multimap A.

So the previous result generalises to enriched category theory. Note that the assumptions are unfortunately quite restrictive, as we can only consider monads on the base of enrichment itself. This is the enriched version of the fact that the ordinary category theory result only applied to \mathsf{Set} monads. This seems frustrating, so it is natural to ask if we can do better.

Codensity and Formality

We previously encountered codensity monads. These were built using right Kan extensions. For a functor F : \mathcal{A} \rightarrow \mathcal{B}, the codensity monad induced by F has endofunctor \mathsf{Ran}_F(F) : \mathcal{B} \rightarrow \mathcal{B}. We will write \langle F,F \rangle for this codensity monad.

With any unfamiliar construction, it is natural to consider what it does in some simple cases. One obvious choice is to fix a set A, and consider the codensity monad induced by the functor A : 1 \rightarrow \mathsf{Set} that picks out that object. This will induce a monad on \mathsf{Set}. Using some standard tools for calculating right Kan extensions in \mathsf{Set}, we find something familiar. \langle A,A \rangle is the continuation monad induced by A.
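The continuation monad structure itself is easy to sketch in Python, in Kleisli form (encoding my own); by the computation just described, this is the codensity monad \langle A,A \rangle of the functor A : 1 \rightarrow \mathsf{Set}.

```python
# The continuation monad K(X) = (X -> A) -> A in Kleisli form.

def unit(x):
    """eta(x) feeds x to the continuation."""
    return lambda k: k(x)

def ext(f):
    """Extend f : X -> K(Y) to K(X) -> K(Y): pre-process the continuation."""
    return lambda m: lambda k: m(lambda x: f(x)(k))
```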

We can therefore rephrase the original result that motivated our investigations as saying for monad \mathbb{T} on \mathsf{Set}, there is a bijective correspondence between:

  1. Eilenberg-Moore algebras for \mathbb{T}, with underlying set A.
  2. Monad morphisms \mathbb{T} \rightarrow \langle A,A \rangle , with codomain the codensity monad induced by the functor A : 1 \rightarrow \mathsf{Set}.

So far, we don’t have anything new, except a shift in viewpoint from emphasising the continuation monad, to a codensity based perspective. It is natural to then consider whether there is anything special about the category \mathsf{Set} in this observation, and in fact there isn’t. So we can make a much stronger statement.

For a monad \mathbb{T} : \mathcal{C} \rightarrow \mathcal{C} and \mathcal{C}-object A, assuming the required right Kan extensions exist, there is a bijective correspondence between:

  1. Eilenberg-Moore algebras for \mathbb{T}, with underlying object A.
  2. Monad morphisms \mathbb{T} \rightarrow \langle A,A \rangle , with codomain the codensity monad induced by the functor A : 1 \rightarrow \mathcal{C}.

We can go further still, and ask if there’s anything special about the fact we’re doing category theory? Can we use the formal category theory perspective we’ve introduced in recent posts to get a more general result?

To do so, we must address a couple of wrinkles:

  1. We need to abstract the notion of right Kan extension to an arbitrary 2-category. This is routine: we just require the same universal property, but now of abstract 0-, 1- and 2-cells, rather than categories, functors and natural transformations. The resulting structures are usually called right extensions.
  2. We have been making heavy use of the terminal category to pick out objects, via functors of the form A : 1 \rightarrow \mathcal{C}. This may not make sense in an arbitrary 2-category.

To address the second point, we are looking to generalise from Eilenberg-Moore algebras built upon individual objects, to something more suitable in an arbitrary 2-category. We have already seen the required abstraction before, left monad actions, which solved a similar problem when we looked at Eilenberg-Moore objects.

Putting this all together, we get a much more general statement. Let \mathcal{K} be a 2-category, and \mathbb{T} : \mathcal{C} \rightarrow \mathcal{C} a monad in \mathcal{K}. For a 1-cell A : \mathcal{B} \rightarrow \mathcal{C}, there is a bijective correspondence between:

  1. Left \mathbb{T} actions with underlying 1-cell A.
  2. Monad morphisms \mathbb{T} \rightarrow \langle A,A \rangle with codomain the codensity monad induced by A : \mathcal{B} \rightarrow \mathcal{C}.

So we learn that the previous result only hinged on the universal property of extensions, and nothing specific to category theory. The previous result about enriched monads arises as a special case in the 2-category of \mathcal{V}-categories.


We’ve explored a classic result that often crops up in many guises in the literature. By exploiting our new tools of codensity monads, formal category theory, and left monad actions, we abstracted away a lot of distracting clutter, and reached a clearer understanding of what was really making this bijection work.

Further reading: This post was motivated by some discussions in the excellent “A 2-Categories Companion” by Lack, which is well worth reading. The material about enriched double dualisation monads is almost exclusively contained in Kock's “On Double Dualization Monads”. Readers wanting more than the details we briefly sketched will find a thorough and very readable account in that paper.

Eilenberg-Moore Objects

This time, our aim is to understand the Eilenberg-Moore construction better. To do so, we will look for a generalisation of this construction for monads in an arbitrary 2-category \mathbf{K}. Working at this level of abstraction will provide results that apply in other settings, for example monads in the sense of enriched category theory. Even if all you care about is ordinary monads, this perspective forces us towards a universal property characterisation of the usual Eilenberg-Moore category, rather than our previous focus on an explicit construction.

Remark: Moving to a 2-categorical setting has a certain “intimidation factor”. It may help to recall that 0-, 1- and 2-cells in \mathbf{K} are analogous to categories, functors and natural transformations in \mathbf{Cat}. In particular, 1 and 2-cells can be composed vertically and horizontally just as functors and natural transformations can. For readers aware of such issues, we will be working entirely with strict 2-categories and 2-functors, so there are no distracting coherence isomorphisms to worry about either.

Eilenberg-Moore locally

For a monad (\mathcal{C},\mathbb{T},\eta,\mu) in \mathbf{K}, where should we start looking for an analogue of the conventional Eilenberg-Moore construction? In that case, we built a well-chosen 0-cell in the 2-category \mathbf{Cat}, as we know these are simply categories, which we understand well. As we now work with an arbitrary 2-category \mathbf{K}, we’ve no idea what the 0-cells are, so we cannot persist with this plan directly.

A natural plan of attack is to reduce this problem to one we’re more familiar with. To do so, for a 0-cell \mathcal{X}, we consider the representable 2-functor

\mathbf{K}(\mathcal{X},-) : \mathbf{K} \rightarrow \mathbf{Cat}

This maps structures in \mathbf{K}, which we don’t understand very well, into \mathbf{Cat}, for which we have more understanding. As monads are defined in terms of equations between structures in a 2-category, they are preserved by 2-functors. Therefore

\mathbf{K}(\mathcal{X},\mathbb{T}) : \mathbf{K}(\mathcal{X},\mathcal{C}) \rightarrow \mathbf{K}(\mathcal{X},\mathcal{C})

is an ordinary monad on the category \mathbf{K}(\mathcal{X},\mathcal{C}). Concretely, the functor part of this monad acts as:

U : \mathcal{X} \rightarrow \mathcal{C} \;\mapsto\; \mathbb{T} \circ U : \mathcal{X} \rightarrow \mathcal{C}

The components of the unit and multiplication at U : \mathcal{X} \rightarrow \mathcal{C} are \eta \circ U and \mu \circ U. We can then consider the Eilenberg-Moore category

\mathbf{K}(\mathcal{X},\mathcal{C})^{\mathbf{K}(\mathcal{X},\mathbb{T})}

Here we find something familiar, the Eilenberg-Moore algebras are left actions for the monad \mathbb{T} in \mathbf{K}. Of course we must generalise our original definition of left action from the 2-category \mathbf{Cat} to an arbitrary 2-category \mathbf{K}, but this is routine.

We now need to abstract a bit further, as we have defined this construction for an arbitrary 0-cell \mathcal{X}. To remove this arbitrary choice, we make this 0-cell a parameter, and consider the 2-functor:

\mathbf{K}(-,\mathcal{C})^{\mathbf{K}(-,\mathbb{T})} : \mathbf{K}^{op} \rightarrow \mathbf{Cat}

A standard next step for such a functor is to ask when it is representable, that is when there is a 0-cell \mathcal{C}^{\mathbb{T}} and 2-natural isomorphism:

\mathbf{K}(\mathcal{X}, \mathcal{C}^{\mathbb{T}}) \cong \mathbf{K}(\mathcal{X}, \mathcal{C})^{\mathbf{K}(\mathcal{X},\mathbb{T})}

If such a representation exists, \mathcal{C}^{\mathbb{T}} is referred to as the Eilenberg-Moore object for the monad \mathbb{T}, and the isomorphism establishes its universal property. We can think of this as internalising the collection of left monad actions, in a similar way to the relationship between hom sets and exponentials for example.

To unpick the universal property a bit further, write \mathbf{\theta} for the 2-isomorphism. We consider the image of the identity 1-cell under \mathbf{\theta}, which is an object of:

\mathbf{K}(\mathcal{C}^{\mathbb{T}}, \mathcal{C})^{\mathbf{K}(\mathcal{C}^\mathbb{T},\mathbb{T})}

This is a 1-cell U^\mathbb{T} : \mathcal{C}^{\mathbb{T}} \rightarrow \mathcal{C}, equipped with a left \mathbb{T}-action

\chi : \mathbb{T} \circ U^\mathbb{T} \Rightarrow U^\mathbb{T}

This action is said to be a universal left action, and fully defines \mathbf{\theta}. The action of \mathbf{\theta} on 1-cells is:

H : \mathcal{X} \rightarrow \mathcal{C}^\mathbb{T} \;\mapsto\; \chi \circ H

and on 2-cells:

\sigma : H \Rightarrow H' \;\mapsto\; U^\mathbb{T} \circ \sigma

Unravelling these details is a worthwhile exercise if you want to develop your understanding of representable 2-functors and 2-naturality, generalising a similar argument for ordinary representable functors.

Example: The 2-category \mathbf{Cat} has Eilenberg-Moore objects for every monad. This is also the case in the enriched category setting of \mathcal{V}-\mathbf{Cat} for a base of enrichment \mathcal{V} with (enough) equalizers. In both cases, the Eilenberg-Moore objects are given by the usual construction of the Eilenberg-Moore category. The universal left action is given by

U^\mathbb{T} \circ \epsilon : \mathbb{T} \circ U^{\mathbb{T}} \Rightarrow U^\mathbb{T}

where \epsilon is the counit of the free / forgetful adjunction for the Eilenberg-Moore category.

That Eilenberg-Moore objects in the settings of ordinary and enriched category theory are the usual constructions is encouraging.

As further evidence of the usefulness of this perspective, if a monad \mathbb{T} has an Eilenberg-Moore object, then other aspects of monad theory can be developed directly from the universal property. Of immediate interest are:

  1. There is a 1-cell F^\mathbb{T} : \mathcal{C} \rightarrow \mathcal{C}^\mathbb{T} such that \mathbb{T} is induced by an adjunction F^\mathbb{T} \dashv  U^\mathbb{T} in \mathbf{K}.
  2. There is a unique comparison 1-cell in \mathbf{K} exhibiting this adjunction as the terminal such adjunction.

We avoid spelling out the details, but they follow fairly directly from the universal property, and details can be found in Street’s paper.

Eilenberg-Moore globally

In an earlier post, we introduced the 2-category \mathbf{\mathsf{Mnd}}(\mathbf{K}). Sketching the details:

  1. 0-cells are monads in \mathbf{K}.
  2. 1-cells are Eilenberg-Moore laws between a pair of monads.
  3. 2-cells are 2-cells from \mathbf{K} which suitably commute with the Eilenberg-Moore laws.

The full details can be found in an earlier discussion about distributive laws.

There is a 2-functor:

\mathbf{K} \rightarrow \mathbf{\mathsf{Mnd}}(\mathbf{K})

sending a 0-cell to its corresponding identity monad. This functor has a right 2-adjoint \mathbf{\mathsf{Alg}} exactly when there is a 2-natural isomorphism:

\mathbf{K}(\mathcal{X}, \mathbf{\mathsf{Alg}}(\mathcal{C}, \mathbb{T})) \cong \mathbf{\mathsf{Mnd}}(\mathbf{K})((\mathcal{X}, \mathsf{Id}), (\mathcal{C}, \mathbb{T}))

This doesn’t immediately look familiar, but if we unravel the definitions, we find that Eilenberg-Moore laws of type:

\mathbb{T} \circ H \Rightarrow H \circ \mathsf{Id}

are the same thing as left \mathbb{T}-actions, and so we are simply requiring the representability discussed in the previous section globally. Therefore, this right adjoint exists exactly when there is an Eilenberg-Moore object for every monad in \mathbf{K}. In this case, we say that \mathbf{K} admits the construction of algebras.

Intuitively, we might have anticipated that Eilenberg-Moore objects arise as part of an adjunction, as they are involved in a terminal resolution of monads into adjunctions.

Example: \mathbf{Cat} admits the construction of algebras. This extends to the enriched setting: \mathcal{V}-\mathbf{Cat} admits the construction of algebras for a sufficiently complete base of enrichment \mathcal{V}.


For our 2-category \mathbf{K}, we can consider its various duals. Eilenberg-Moore objects in \mathbf{K}^{op}, where we reverse 1-cells, are known as Kleisli objects. These appropriately generalise the Kleisli construction to a 2-categorical setting, and the resulting universal property of Kleisli objects involves a universal right monad action. The duals \mathbf{K}^{co} and \mathbf{K}^{coop}, where we reverse 2-cells, recover the corresponding theory for comonads, involving what might be called universal left and right co-actions of a comonad.

Example: \mathbf{Cat} admits the construction of algebras for all 4 of its duals, given by the usual Eilenberg-Moore and Kleisli constructions for monads and comonads.


By considering the 2-categorical generalisation of the Eilenberg-Moore construction, we have encountered further aspects of left monad actions:

  1. They are the appropriate 2-categorical generalisation of ordinary Eilenberg-Moore algebras.
  2. They are themselves algebras for a suitably defined monad.
  3. They are also a special case of Eilenberg-Moore laws.
  4. Left and right monad actions are fundamental to giving universal properties for the Eilenberg-Moore and Kleisli constructions.

It is beautiful that the theory can be developed at this level of abstraction, and that the results for the Kleisli construction and comonads fall out via duality, providing a deeper mathematical relationship between all these constructions.

Further reading: The ideas in this post originate in Street’s wonderful paper “Formal Theory of Monads”. There are lots of other interesting ideas in this paper, and I strongly recommend reading it. Also useful for background is Kelly and Street “Review of the elements of 2-categories”, and Lack’s “2-categories companion”. Other potential sources on Eilenberg-Moore objects are the nlab, and MacDonald and Sobral’s “Aspects of monads” chapter of “Categorical Foundations” edited by Pedicchio and Tholen.

Acknowledgements: This post was prompted by questions and suggestions on mastodon by users and I am very grateful for these comments, as they pointed to some gaps in my understanding, and encouraged me to explore further.

Monad Actions

Previously, we have encountered various structures involving monads, such as Eilenberg-Moore algebras, Kleisli and Eilenberg-Moore laws, and distributive laws. In this post, we encounter another such structure, monad actions. These are less frequently discussed, but useful and relatively simple constructions. Conceptually, they sit somewhere between Eilenberg-Moore algebras and distributive laws.

Introducing actions

For a monad (\mathbb{M} : \mathcal{D} \rightarrow \mathcal{D}, \eta, \mu), a left action of \mathbb{M} on a functor A : \mathcal{C} \rightarrow \mathcal{D} is a natural transformation:

\alpha : \mathbb{M} \circ A \Rightarrow A

such that the following two axioms hold:

  1. The unit axiom: \alpha \cdot (\eta \circ A) = \mathsf{id}_A
  2. The multiplication axiom: \alpha \cdot (\mu  \circ A) = \alpha \cdot (\mathbb{M} \circ \alpha)

Actions are also sometimes called modules in the literature, although there is some inconsistency in the use of this term, so we shall avoid it.

Remark: For those who have encountered them before, there is a close analogy between a left action of a monad on a functor, and the action of a monoid (or perhaps more commonly a group) on a set. In fact, at an appropriate level of abstraction, they are both instances of the same construction.

We begin with what is probably the motivating example.

Example: For any Eilenberg-Moore algebra

(A, a : \mathbb{M}(A) \rightarrow A)

we can consider the functor A : 1 \rightarrow \mathcal{D} from the terminal category, which picks out the object A. We use the same name for both the object and the corresponding functor, as both provide essentially the same data. From this perspective, a is a natural transformation a : \mathbb{M} \circ A \Rightarrow A. The axioms for an Eilenberg-Moore algebra exactly correspond to this natural transformation being a left monad action.

So Eilenberg-Moore algebras are a special case of left monad actions. There are of course more general examples:

Example: Let \mathbb{M}^+ be the non-empty multiset monad, and for n \geq 1 let \mathbb{M}^{\geq n} denote the functor sending a set to the collection of multisets over it with at least n elements. Taking unions of multisets induces a monad action:

\alpha : \mathbb{M}^+ \circ \mathbb{M}^{\geq n} \Rightarrow \mathbb{M}^{\geq n}.

Every set X induces an Eilenberg-Moore algebra:

\alpha_X : \mathbb{M}^+(\mathbb{M}^{\geq n}(X)) \rightarrow \mathbb{M}^{\geq n}(X)

Example: Let \mathbb{L} be the list monad, and E : \mathsf{Set} \rightarrow \mathsf{Set} be the functor mapping a set to the collection of even length lists of its elements. There is a left monad action:

\alpha : \mathbb{L} \circ E \Rightarrow E

given by list concatenation. Again, every set X induces an Eilenberg-Moore algebra:

\alpha_X : \mathbb{L}(E(X)) \rightarrow E(X)

The previous two examples show that more general left actions can be seen as encoding a uniform family of Eilenberg-Moore algebras.
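
The definition and the even-length list example can be sketched concretely in Python, with ordinary lists modelling the list monad and its action on even-length lists by concatenation. All the names below are invented purely for illustration:

```python
def unit(x):
    """Unit of the list monad: eta_X(x) = [x]."""
    return [x]

def mult(xss):
    """Multiplication of the list monad: flatten one layer of nesting."""
    return [x for xs in xss for x in xs]

def act(list_of_evens):
    """The left action alpha : L ∘ E ⇒ E from the example above:
    concatenating a list of even-length lists yields an even-length list.

    Unit axiom:            act(unit(e)) == e
    Multiplication axiom:  act(mult(ls)) == act([act(l) for l in ls])
    """
    return [x for xs in list_of_evens for x in xs]
```

For instance, act([[1, 2], [3, 4, 5, 6]]) concatenates to the even-length list [1, 2, 3, 4, 5, 6], and both action axioms can be checked on sample inputs.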

The notion of left monad action can be dualised. For a functor

A : \mathcal{D} \rightarrow \mathcal{C}

a right monad action is a natural transformation:

\alpha : A \circ \mathbb{M} \Rightarrow A

such that the following two axioms hold:

  1. The unit axiom: \alpha \cdot (A \circ \eta) = \mathsf{id}_A
  2. The multiplication axiom: \alpha \cdot (A \circ \mu) = \alpha \cdot (\alpha \circ \mathbb{M})
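
As a small illustration, a right action can also be sketched in Python: the list monad acting on the functor A(X) = R \rightarrow \mathsf{List}(X) of computations in a reading context, with the action flattening the inner layer. The names here are invented for illustration:

```python
def ract(g):
    """Right action alpha : A ∘ M ⇒ A, where A(X) is the set of
    functions from a fixed context R to lists of X: given
    g : R -> list of lists, flatten the inner list layer pointwise."""
    return lambda r: [x for xs in g(r) for x in xs]
```

For example, applied to the context-dependent computation r ↦ [[r], [r + 1]], the action yields r ↦ [r, r + 1].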

We could go further and consider natural transformations where monads act on both the left and right, commonly referred to as bimodules. We could also dualise and consider both left and right (co)actions of comonads, although we shall resist the temptation to explore these related topics for now. For the rest of this post, we shall primarily focus on left actions of monads.

Building Monad Actions

So far, we have seen some reasonably ad-hoc looking examples of left monad actions. The only procedure we have for constructing monad actions is to view any Eilenberg-Moore algebra as a left action.

For a monad (\mathbb{M} : \mathcal{D} \rightarrow \mathcal{D}, \eta, \mu) as before, there are several more methods for systematically constructing monad actions. For example:

  1. The monad multiplication \mu : \mathbb{M} \circ \mathbb{M} \Rightarrow \mathbb{M} is a left (and right) monad action.
  2. For A : \mathcal{C} \rightarrow \mathcal{D}, and H : \mathcal{B} \rightarrow \mathcal{C}, if \alpha : \mathbb{M} \circ A \Rightarrow A is a left monad action, then so is \alpha \circ H : \mathbb{M} \circ A \circ H \Rightarrow A \circ H.
  3. If \alpha : \mathbb{M} \circ A \Rightarrow A is a monad action, and \tau : \mathbb{T} \Rightarrow \mathbb{M} a monad morphism, then \alpha \cdot (\tau \circ A) : \mathbb{T} \circ A \Rightarrow A is a left action of \mathbb{T}.

There are other methods, involving adjunctions and Kan extensions, but the constructions above are enough for our remaining discussions.

Monad Actions and Morphisms

Given any monad morphism

\tau : \mathbb{T} \Rightarrow \mathbb{M}

using two of the constructions in the previous section, we can build a monad action

\mu \cdot (\tau \circ \mathbb{M}) : \mathbb{T} \circ \mathbb{M} \Rightarrow \mathbb{M}

In fact, the resulting actions are special. They satisfy an extra compatibility axiom:

\mu \cdot (\alpha \circ \mathbb{M}) = \alpha \cdot (\mathbb{T} \circ \mu)

For any action

\alpha : \mathbb{T} \circ \mathbb{M} \Rightarrow \mathbb{M}

satisfying the compatibility axiom, the composite

\alpha \cdot (\mathbb{T} \circ \eta) : \mathbb{T} \Rightarrow \mathbb{M}

is a monad morphism. Furthermore, these two mappings exhibit a bijection between monad morphisms and monad actions satisfying the compatibility axiom. This handy relationship often allows us to move back and forth between the monad morphism and action perspectives as suits our needs.
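
This bijection can be sketched in Python, taking \mathbb{M} to be the list monad and \mathbb{T} a Maybe-style monad encoded as either None or a pair ('just', v). All names and encodings below are invented for illustration:

```python
def maybe_fmap(g, t):
    """Functor action of the Maybe encoding: None, or ('just', v)."""
    return None if t is None else ('just', g(t[1]))

def maybe_to_list(t):
    """A monad morphism tau : Maybe ⇒ List."""
    return [] if t is None else [t[1]]

def action_from_morphism(tau):
    """mu · (tau ∘ M): the left action of T on M built from tau."""
    return lambda tm: [x for xs in tau(tm) for x in xs]

def morphism_from_action(alpha):
    """alpha · (T ∘ eta): the monad morphism recovered from a
    compatible action, by first wrapping values in singleton lists."""
    return lambda t: alpha(maybe_fmap(lambda x: [x], t))
```

For example, action_from_morphism(maybe_to_list) flattens ('just', [1, 2]) to [1, 2], and applying morphism_from_action to that action recovers the original morphism.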

Monad Actions and Laws

Given a left monad action \alpha : \mathbb{M} \circ A \Rightarrow A, and any other monad \mathbb{T} : \mathcal{C} \rightarrow \mathcal{C}, we can form the composite:

(A \circ \eta) \cdot \alpha : \mathbb{M} \circ A \Rightarrow  A \circ \mathbb{T}

It is not hard to verify that this is in fact an Eilenberg-Moore law. Forming the dual composite for a right action will yield a Kleisli law.

The point of these two simple constructions is that monad actions can be built and combined in various ways, and this gives us a new route to building Eilenberg-Moore and Kleisli laws, which have many uses.
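
The first of these constructions can be sketched in Python, with the list monad \mathbb{M} acting on itself by multiplication, and \mathbb{T} a Maybe-style monad encoded as ('just', v); the function names are invented for illustration:

```python
def em_law_from_action(alpha, a_fmap, t_unit):
    """The composite (A ∘ eta) · alpha : M ∘ A ⇒ A ∘ T: first apply
    the action, then apply the functor A to the unit of T."""
    return lambda m_of_a: a_fmap(t_unit, alpha(m_of_a))
```

Instantiating alpha as flattening (the list monad acting on itself), A as the list functor, and the unit of T as wrapping in ('just', v), the resulting natural transformation flattens and then tags each element.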


Although monad actions are perhaps a slightly more obscure aspect of monad theory than some of the topics we have covered, they are nonetheless useful. Verifying many of the claims above is straightforward, particularly if done using string diagrams, and monad actions interact well with string diagrammatic techniques. Monad actions will also prove handy tools in some future posts. From the point of view of computer science, they seem to be an under-investigated topic, ripe for further exploration.

Further reading: “Modules over Monads and their Algebras” by Piróg, Wu and Gibbons is an excellent computer science oriented source for background, and helped greatly with the examples in this post. The article prefers the term module rather than action, but hopefully this won’t be too distracting in relating the ideas to our discussions. They also give nice examples of right actions, to which we have paid less attention, interpreting them as computations in context.

Further Kleisli and Eilenberg-Moore laws

Previously, we encountered two special types of distributive laws. Recall that, for a pair of monads \mathbb{S} : \mathcal{C} \rightarrow \mathcal{C}, \mathbb{T} : \mathcal{D} \rightarrow \mathcal{D}, and a functor H : \mathcal{C} \rightarrow \mathcal{D}:

  1. An Eilenberg-Moore law is a natural transformation of type \mathbb{T} \circ  H \Rightarrow H  \circ \mathbb{S}, which appropriately preserves the monad structure. Eilenberg-Moore laws were in bijective correspondence with Eilenberg-Moore liftings of H to a functor \overline{H} : \mathcal{C}^{\mathbb{S}} \rightarrow \mathcal{D}^{\mathbb{T}} commuting with the forgetful functors from the Eilenberg-Moore category.
  2. A Kleisli law is a natural transformation of type H \circ \mathbb{S} \Rightarrow \mathbb{T} \circ H, which appropriately preserves the monad structure. These were in bijection with Kleisli liftings of H to a functor \underline{H} : \mathcal{C}_{\mathbb{S}} \rightarrow \mathcal{D}_{\mathbb{T}}, commuting with the free functors to the Kleisli category.

In this post, we are going to consider the special case of Kleisli and Eilenberg-Moore laws when H is an endofunctor

H : \mathcal{C} \rightarrow \mathcal{C}

and both monads \mathbb{S} and \mathbb{T} are equal, which we will refer to as

\mathbb{M} : \mathcal{C} \rightarrow \mathcal{C}.

We shall see that in this special case, both Kleisli and Eilenberg-Moore laws have further applications, beyond their natural use lifting functors to Kleisli and Eilenberg-Moore categories.

Connections to Endofunctor Algebras

As H is now an endofunctor, we can consider the category of endofunctor algebras \mathsf{Alg}(H). Noting that our monad \mathbb{M} : \mathcal{C} \rightarrow \mathcal{C} lives on the same category as H, it is natural to ask if we can lift it to a monad on \mathsf{Alg}(H).

As a first step, we assume the existence of a natural transformation

\lambda : H \circ \mathbb{M} \Rightarrow \mathbb{M} \circ H

If such a natural transformation exists, this induces an endofunctor

\overline{\mathbb{M}} : \mathsf{Alg}(H) \rightarrow \mathsf{Alg}(H)

with action on objects

\overline{\mathbb{M}}(A, a : H(A) \rightarrow A) := (\mathbb{M}(A), \mathbb{M}(a) \cdot \lambda_A)

and on morphisms

\overline{\mathbb{M}}(h) := \mathbb{M}(h)

So we can at least lift the endofunctor component to the category of algebras. We need to find components for the unit \overline{\eta} : \mathsf{Id} \Rightarrow \overline{\mathbb{M}} and multiplication \overline{\mu} : \overline{\mathbb{M}} \circ \overline{\mathbb{M}} \Rightarrow \overline{\mathbb{M}} of the lifted monad. The obvious choices are to take:

\overline{\eta}_{(A,a)} := \eta_{A} \quad\mbox{and}\quad \overline{\mu}_{(A,a)} = \mu_A

Of course, we must verify that these morphisms are actually H-algebra homomorphisms. It turns out that this is the case if \lambda is a Kleisli law. It is also then straightforward to verify that these components form natural transformations, and satisfy the monad axioms, with the details following fairly directly from the same properties in the base category.

So we have shown that, given a Kleisli law \lambda : H \circ \mathbb{M} \Rightarrow \mathbb{M} \circ H

  1. There is an induced monad (\overline{\mathbb{M}} : \mathsf{Alg}(H) \rightarrow \mathsf{Alg}(H), \overline{\eta}, \overline{\mu}).
  2. The forgetful functor U : \mathsf{Alg}(H) \rightarrow \mathcal{C} commutes with the monad structure, in that \mathbb{M} \circ U = U \circ \overline{\mathbb{M}}, U \circ \overline{\eta} = \eta \circ U and U \circ \overline{\mu} = \mu \circ U.

So, given a Kleisli law, the corresponding monad lifts in a very nice way to the category of endofunctor algebras.

Example: Given a commutative monad \mathbb{M} and the endofunctor H(X) = X \times X, the canonical natural transformation

\mathsf{dst}_X : \mathbb{M}(X) \times \mathbb{M}(X) \rightarrow \mathbb{M}(X \times X)

is a Kleisli law. For example, the powerset monad is commutative, with:

\mathsf{dst}_X(U,V) = \{ (u,v) \mid u \in U, v \in V \}

Given an algebra (A, a : A \times A \rightarrow A), the structure map of \overline{\mathbb{M}}(A, a) acts as:

(U,V) \mapsto \{ a(u,v) \mid u \in U, v \in V \}

That is, the algebra operation is applied pointwise on every possible combination of the inputs. A similar strategy can be used to lift any commutative monad to the category of algebras for any polynomial functor.
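
A Python sketch of this example, with ordinary lists standing in for (finite) powersets, an assumption made purely for illustration:

```python
def dst(pair):
    """The Kleisli law dst : H ∘ M ⇒ M ∘ H for H(X) = X × X:
    form all pairs of elements from the two collections."""
    us, vs = pair
    return [(u, v) for u in us for v in vs]

def lift_alg(a):
    """The structure map of the lifted algebra on M(A): M(a) · dst,
    applying the algebra operation pointwise to every combination."""
    return lambda pair: [a(p) for p in dst(pair)]
```

For example, lifting addition as the algebra operation, lift_alg applied to the pair ([1, 2], [10, 20]) produces [11, 21, 12, 22].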

Connections to Endofunctor Coalgebras

As we have assumed H is an endofunctor, it is equally natural to consider the category of endofunctor coalgebras \mathsf{Coalg}(H). Again, we should ask ourselves if it is possible to lift the monad \mathbb{M} to the category of coalgebras. As before, as a first step it is convenient to assume the existence of a natural transformation of an appropriate type, this time:

\lambda : \mathbb{M} \circ H \Rightarrow H \circ \mathbb{M}

With this in place, we can define a functor:

\overline{\mathbb{M}} : \mathsf{Coalg}(H) \rightarrow \mathsf{Coalg}(H)

with action on objects:

\overline{\mathbb{M}}(A, a : A \rightarrow H(A)) := (\mathbb{M}(A), \lambda_A \cdot \mathbb{M}(a))

and on morphisms:

\overline{\mathbb{M}}(h) = \mathbb{M}(h)

So the existence of any \lambda of the right type allows us to lift the endofunctor component of the monad. As in the case of algebras, the obvious choice for the components of the unit and multiplication is to define:

\overline{\eta}_{(A,a)} := \eta_{A} \quad\mbox{and}\quad \overline{\mu}_{(A,a)} = \mu_A

As before, we must confirm that these components are legitimate \mathsf{Coalg}(H)-morphisms. This turns out to be the case when \lambda satisfies the axioms of an Eilenberg-Moore law. As before, verifying naturality and the monad axioms can be reduced to the same properties in the base category.

So we have shown that, given an Eilenberg-Moore law \lambda : \mathbb{M} \circ H \Rightarrow H \circ \mathbb{M}

  1. There is an induced monad (\overline{\mathbb{M}} : \mathsf{Coalg}(H) \rightarrow \mathsf{Coalg}(H), \overline{\eta}, \overline{\mu}).
  2. The forgetful functor U : \mathsf{Coalg}(H) \rightarrow \mathcal{C} commutes with the monad structure, in that \mathbb{M} \circ U = U \circ \overline{\mathbb{M}}, U \circ \overline{\eta} = \eta \circ U and U \circ \overline{\mu} = \mu \circ U.

So Eilenberg-Moore laws induce well-behaved monads on categories of coalgebras.

Example: For a fixed set O, we consider the endofunctor

H := O \times (-)

A coalgebra of this functor is essentially a function of type:

A \rightarrow O \times A

From a computer science perspective, we can think of this as modelling a deterministic process with state space A, which at each computation step outputs an element of O and continues in a new state. Given an Eilenberg-Moore algebra of the powerset monad o : \mathbb{P}(O) \rightarrow O, there is an Eilenberg-Moore law:

\lambda_X : \mathbb{P}(O \times X) \rightarrow O \times \mathbb{P}(X)

such that

\lambda_X(U) = (o(\{ \pi_1(u) \mid u \in U \}), \{ \pi_2(u) \mid u \in U\} )

So the structure maps of the coalgebra \overline{\mathbb{P}}(A, a) will evaluate a set of states pointwise, using the provided Eilenberg-Moore algebra to calculate a suitable output value from the set of available candidates.
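
A Python sketch of this example, again with lists standing in for powersets, and summation chosen as the Eilenberg-Moore algebra o; both choices are illustrative assumptions:

```python
def em_law(pairs):
    """The Eilenberg-Moore law lambda_X : P(O × X) -> O × P(X) for
    H = O × (-), using o = sum to combine the candidate outputs."""
    return (sum(p[0] for p in pairs), [p[1] for p in pairs])

def lift_coalg(c):
    """Lift a coalgebra c : A -> O × A to collections of states:
    the lifted structure map is lambda · P(c)."""
    return lambda states: em_law([c(s) for s in states])
```

For instance, lifting the counter coalgebra n ↦ (n, n + 1) and running it on the states [1, 2] yields the combined output 3 together with the successor states [2, 3].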


It is interesting that these special cases of Kleisli and Eilenberg-Moore laws allow us to perform further constructions. The fact that Kleisli laws induce monads on categories of algebras is probably not that shocking, but that Eilenberg-Moore laws have something to do with monads on categories of coalgebras is a bit more of a surprise. Probably the main takeaway is that, yet again, the ability to suitably commute monad structure with a natural transformation arises in many situations and is an important idea.

Attendees at the recent Midlands Graduate School who attended my string diagrams course may recognise some of the ideas in this post, as the constructions and calculations are in my opinion significantly easier to grasp graphically.

Further reading: I am not aware of a source for the observations above, although I assume they are folklore / well-known if you hang out at the right sort of parties. I would be interested in knowing if there is a published source to cite. Bart Jacobs’ wonderful book “Introduction to Coalgebra” has inductive constructions for Kleisli and Eilenberg-Moore laws, which would be useful in playing with the ideas discussed in this post.

Acknowledgments: Thanks to Sjoerd Visscher for spotting a distracting typo in an earlier version.

Codensity Monads and Kan Extensions

In this post, we are going to look at another method of constructing monads. The details are possibly more mathematically complex than those we have discussed so far, hinging on the topic of Kan extensions. We will study this construction at a fairly high level of abstraction, aiming to highlight the wide applicability of the techniques involved.

Kan Extensions

Given a category \mathcal{D}, with a subcategory \mathcal{C}, and a functor G : \mathcal{C} \rightarrow \mathcal{E}, it is natural to ask whether there is some systematic way to extend G to the whole of \mathcal{D}, yielding a functor G' : \mathcal{D} \rightarrow \mathcal{E}. Of course, when we look for “systematic” or “canonical” ways of doing things in category theory, we are looking for a construction with a universal property.

Although a natural motivation, the restriction to subcategories is unnecessary. Instead, for any functor J : \mathcal{C} \rightarrow \mathcal{D}, and functor G : \mathcal{C} \rightarrow \mathcal{E}, we are looking for a way to extend G “along J” to construct a functor of type \mathcal{D} \rightarrow \mathcal{E}.

It turns out that there are not one, but two mathematically natural choices, with universal properties given by the following isomorphisms, natural in F:

  1. \mathsf{Lan}_J(G)\Rightarrow F \quad\cong\quad G \Rightarrow F \circ J.
  2. F \circ J \Rightarrow G \quad\cong\quad F \Rightarrow \mathsf{Ran}_J(G).

\mathsf{Lan}_J(G) and \mathsf{Ran}_J(G) are respectively known as the left and right Kan extensions of G along J. Note that this terminology can be reversed in some parts of the literature, so it is always wise to check which universal property is being assumed. Kan extensions arise all over category theory and mathematics, for example in connection to adjunctions, (co)limits and monads, leading to Mac Lane’s claim “All concepts are Kan extensions”.

If these extensions exist for all G, we have adjunctions involving precomposition with J:

\mathsf{Lan}_J(-) \dashv (-) \circ J \qquad\text{ and }\qquad (-) \circ J \dashv \mathsf{Ran}_J(-)

So left Kan extensions then conveniently yield left adjoints and dually on the right, making the naming convention somewhat logical.

We can rephrase the universal properties for Kan extensions as follows:

  1. For \mathsf{Lan}_J(G) there exists a universal natural transformation \mathsf{run} : G \Rightarrow \mathsf{Lan}_J(G) \circ J such that for every \alpha : G \Rightarrow F \circ J there exists a unique \hat{\alpha} : \mathsf{Lan}_J(G) \Rightarrow F such that \alpha = (\hat{\alpha} \circ J) \cdot \mathsf{run}.
  2. For \mathsf{Ran}_J(G) there exists a universal natural transformation \mathsf{run} : \mathsf{Ran}_J(G) \circ J \Rightarrow G such that for every \alpha : F \circ J \Rightarrow G there exists a unique \hat{\alpha} : F \Rightarrow \mathsf{Ran}_J(G) such that \alpha = \mathsf{run} \cdot (\hat{\alpha} \circ J).
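
For a computational reading, the second universal property can be sketched in Python in continuation-passing style, mirroring the Church encoding familiar from Haskell’s kan-extensions library: an element of \mathsf{Ran}_J(G) at A is a function taking k : A \rightarrow J(B) and returning an element of G(B). The helper names and the Maybe encoding below are invented for illustration:

```python
def maybe_fmap(g, t):
    """Functor action of a Maybe encoding: None, or ('just', v)."""
    return None if t is None else ('just', g(t[1]))

def to_ran(alpha, f_fmap):
    """From alpha : F ∘ J ⇒ G, build hat(alpha) : F ⇒ Ran_J(G)."""
    return lambda fa: (lambda k: alpha(f_fmap(k, fa)))

def from_ran(hat):
    """The inverse: recover alpha by running at the identity, i.e.
    composing hat(alpha) with the universal transformation run."""
    return lambda fjb: hat(fjb)(lambda x: x)
```

Taking F to be the Maybe encoding and J = G the list functor, round-tripping a natural transformation through to_ran and from_ran returns it unchanged on sample inputs.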

So far, we have no insight into whether we can form left and right Kan extensions. Fortunately, there is some good news on this front, if \mathcal{C} is a small category (has a set of objects):

  1. If \mathcal{E} is cocomplete, then \mathsf{Lan}_J(G) exists for all G.
  2. If \mathcal{E} is complete, then \mathsf{Ran}_J(G) exists for all G.

Under these conditions, there are explicit formulae for the Kan extensions in terms of the assumed (co)limits. These constructions are important, but we omit the details as we shall not need them in this post.

Example: It is common to consider functors between presheaf categories

[\mathcal{D}^{op},\mathsf{Set}] \rightarrow [\mathcal{C}^{op},\mathsf{Set}]

induced by precomposing with a functor J : \mathcal{C} \rightarrow \mathcal{D}. By the result above, as \mathsf{Set} is both complete and cocomplete, such functors always have both left and right adjoints, given by the two Kan extensions.

We note that in this way, every such J induces both a monad and a comonad on [\mathcal{D}^{op}, \mathsf{Set}]. These monads are not our main focus for today though.

There is another useful condition for the existence of certain Kan extensions in the presence of an adjunction. Assume L \dashv R with unit and counit \eta : \mathsf{Id} \Rightarrow R \circ L and \epsilon : L \circ R \Rightarrow \mathsf{Id}. Then:

  1. \mathsf{Lan}_R(\mathsf{Id}) = L.
  2. \mathsf{Ran}_L(\mathsf{Id}) = R.

We also have that adjoints preserve Kan extensions in the following sense:

  1. L \circ \mathsf{Lan}_J(G) = \mathsf{Lan}_J(L \circ G).
  2. R \circ \mathsf{Ran}_J(G) = \mathsf{Ran}_J(R \circ G).

Kan Lifts

As an aside, we briefly mention there is a related concept of Kan lifts. These answer the question, for functors J : \mathcal{C} \rightarrow \mathcal{D} and G : \mathcal{E} \rightarrow \mathcal{D}, of when we can “lift” G through J to a functor G' : \mathcal{E} \rightarrow \mathcal{C}. Notice the difference in direction of travel along J versus Kan extensions. Again there are two possible answers, with universal properties given by natural bijections:

  1. \mathsf{Lift}_J(G) \Rightarrow F \quad\cong\quad G \Rightarrow J \circ F.
  2. J \circ F \Rightarrow G \quad\cong\quad F \Rightarrow \mathsf{Rift}_J(G).

\mathsf{Lift}_J(G) and \mathsf{Rift}_J(G) are respectively the left and right Kan lifts of G along J. Again, if these exist for all G, we have adjunctions, now involving post composition with J:

\mathsf{Lift}_J(-) \dashv J \circ (-) \qquad\text{and}\qquad J \circ (-) \dashv \mathsf{Rift}_J(-)

One should be careful that not all the theory transfers as smoothly to lifts as it does for extensions. For example, I am not aware of an explicit construction in terms of (co)limits, or a construction similar to the one we see in the next section.

Codensity Monads

For any functor

J : \mathcal{C} \rightarrow \mathcal{D}

we can consider the right Kan extension:

\mathsf{Ran}_J(J) : \mathcal{D} \rightarrow \mathcal{D}

of J along itself. Assuming this exists, we have natural transformations:

\widehat{\mathsf{id}_J} : \mathsf{Id}_{\mathcal{D}} \Rightarrow \mathsf{Ran}_J(J).


\widehat{ \mathsf{run} \cdot (\mathsf{Ran}_J(J) \circ \mathsf{run}) }: \mathsf{Ran}_J(J) \circ \mathsf{Ran}_J(J) \Rightarrow \mathsf{Ran}_J(J)

(The hat should cover the whole term above, but we are reaching the limits of the latex formatting technology at our disposal).

So we have an endofunctor \mathsf{Ran}_J(J) and two natural transformations with the right types to serve as a unit and multiplication for a monad. It turns out that this construction does in fact yield a monad.

If \mathcal{D} is a complete category, and \mathcal{C} is small, the required Kan extension always exists, so we can use this as another method to generate monads. We may return to this point in a future post, and consider concrete examples. Our aim for today is to understand exactly which monads can be constructed in this way.

Consider a monad \mathbb{T}. Via the Eilenberg-Moore construction, this always arises from an adjunction L \dashv R. From the observations about Kan extensions and adjunctions above, we then know that:

\mathsf{Ran}_L(\mathsf{Id}) = R

and similarly \mathsf{Ran}_R(\mathsf{Id}) = L. Since the right adjoint R preserves right Kan extensions, we can calculate:

\mathsf{Ran}_R(R) = R \circ \mathsf{Ran}_R(\mathsf{Id}) = R \circ L = \mathbb{T}

Therefore the endofunctor part of the monad always arises as the Kan extension of the right adjoint along itself. In fact, if we’re a bit more careful about tracking the unit and multiplication, we find that up to isomorphism, every monad arises as a codensity monad via this construction. Everything about this argument dualises smoothly, so every comonad arises as a density comonad, giving us another handle on the apparently wilder world of comonads.

Remark: Sometimes you will see a claim that “this is a codensity monad”. Given that every monad is one, what is normally meant by such a statement? I would normally interpret this as being about the concrete description of the monad. The right Kan extension can be expressed as a limit, and often in a particular category a specific construction of limits is considered standard, pointing to a concrete representation of interest.


We have rather quickly run through some of the high-level theory of Kan extensions and codensity monads. There is a lot more to say here, particularly in terms of how specific monads arise from the codensity construction applied to mathematically natural choices of functors which aren’t merely resolutions of the chosen monad. We may return to concrete examples in later posts.

Further reading: For Kan extensions and related theory, and many other things, I would recommend “(Co)end Calculus” by Fosco Loregian. I would certainly suggest reading about the novel string diagrammatic approach to Kan extensions in “Kan Extensions for Program Optimisation Or: Art and Dan Explain an Old Trick” by my co-author Ralf Hinze. For intriguing examples of the codensity construction, “Codensity and the Ultrafilter Monad” by Leinster is a fascinating read.

Lawvere Theories

In many previous posts, we have discussed connections between monads and universal algebra. So far, we have seen that given a presentation (\Sigma,E) of an algebraic theory, in terms of a set of operations \Sigma and equations E, there is a corresponding monad \mathbb{T} such that:

  • The Eilenberg-Moore algebras of the monad are the algebras described by the chosen presentation.
  • The Kleisli category is the full subcategory of free algebras.

Many other aspects of monad theory can then be understood from this algebraic perspective.

Every monad arising from a presentation in terms of operations and equations is finitary. This is a technical condition saying that certain colimits are preserved by the monad endofunctor, but we will not dwell on the precise details. Intuitively, the idea is that the functor is completely defined by its action on finite objects. The key point is that there is a correspondence between algebraic presentations and finitary monads, for which the categories of algebras coincide. It is essential that the categories of models agree for this relationship to be algebraically meaningful. Note this is not a bijection, as a given finitary monad will have infinitely many potential presentations in terms of operations and equations.

In this post, we will see a third perspective on describing algebraic structures, Lawvere theories. These sit somewhere between conventional universal algebra and monads in their level of abstraction. As with any mathematical phenomenon, each new perspective deepens our understanding, and emphasises new features and intuitions.

The Idea of Lawvere Theories

To be concrete, let’s consider what we need to do to describe a monoid. The general situation for different algebraic structures will follow in the same way.

Typically, we first pick out an underlying set M, and two functions:

  1. A zero argument function, or constant, for the unit element. We can identify this as a function 1 \xrightarrow{z} M.
  2. A two argument function, or binary operation, for the monoid multiplication. We can identify this with a function M \times M \xrightarrow{m} M.

Of course, once we have these functions, we can build further ones by combining them together. For example

M \times 1 \xrightarrow{m \circ (\mathsf{id}_M \times z)} M

which informally we can think of as corresponding to a term m(x,z). Of course, by one of the monoid unit axioms, we require that this composite is equal to

M \times 1 \xrightarrow{\pi_1} M

which we can think of as corresponding to a variable x. So in this way of describing a monoid, we:

  1. Pick out some basic operations, which are maps of the form X^n \rightarrow X.
  2. Build lots of other operations by composing the basic operations together, using category composition, and the properties of products.
  3. Require that certain of the operations are equal.
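For concreteness, the three steps above can be sketched with plain functions, taking lists of integers under concatenation as a hypothetical example carrier; the names z, m, derived and proj1 are our own.

```python
def z(u):
    # basic operation: the constant 1 -> M, picking out the unit element
    return []

def m(pair):
    # basic operation: the multiplication M x M -> M, here concatenation
    x, y = pair
    return x + y

def derived(pair):
    # a derived operation M x 1 -> M, built by composing the basic ones,
    # corresponding to the term m(x, z)
    x, u = pair
    return m((x, z(u)))

def proj1(pair):
    # the projection M x 1 -> M; the unit axiom requires derived == proj1
    return pair[0]
```

Evaluating both sides at a sample input, `derived(([1, 2], ()))` and `proj1(([1, 2], ()))` both give `[1, 2]`, an instance of the required equation.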

We note some issues with this naive plan. Firstly, where did the basic operations come from, and why do we distinguish them from the other operations? This bias towards certain operations seems categorically unnatural, and raises the ugly question of whether any further mathematical results we derive might be dependent on our choice of presentation. Secondly, a small but annoying detail. It was a pain that M \times 1 was only isomorphic, and not equal to M when we built our composite terms. If we build up more complicated composites involving higher Cartesian powers, we will spend a lot of time bookkeeping how to bracket expressions like:

M \times ((M \times M) \times M)

for no practical benefit. So putting all this together, it would be convenient if we could describe the theory of monoids as a category with:

  1. Objects the natural numbers, corresponding to the number of arguments of our operations.
  2. (Strict) finite products are given by adding the required arities together m \times n = m + n.
  3. Intuitively, we consider an operation of arity m to be a morphism of type m \rightarrow 1. For example, a binary operation is a morphism 2 \rightarrow 1 and a constant is a morphism 0 \rightarrow 1. Notice that, unlike the common convention, in this case 0 is the terminal object.
  4. A family of n such operations is a morphism of type m \rightarrow n. This is automatic from the universal property of the assumed strict finite products.
  5. A diagram commutes in our category if and only if the corresponding operations are equal in the theory of monoids.

In fact, it is technically important to add a little bit more structure. If we consider the category of sets, and take the full subcategory with objects

\{ \},  \{ 0 \}, \{ 0, 1 \}, \ldots

this category clearly has finite coproducts, given by:

\{ 0,\ldots, m - 1 \} + \{ 0, \ldots, n - 1 \} = \{ 0, \ldots, m + n - 1 \}

If we identify the objects of this category with the natural numbers, via:

n = \{ m \in \mathbb{N} \mid m < n \}

this yields a category \aleph_0 with finite coproducts, and therefore a category \aleph^{op}_0 with finite products.

Moving away from the special case of monoids, we then define a Lawvere theory to be a category \mathcal{L} with strict finite products, together with an identity-on-objects, strict finite product preserving functor I : \aleph^{op}_0 \rightarrow \mathcal{L}. The category \mathcal{L} can be interpreted as encoding the operations and equations of the theory. We shall return to the interpretation of the inclusion I later. A morphism of Lawvere theories H : (\mathcal{L},I) \rightarrow (\mathcal{L}',I') is a finite product preserving functor H : \mathcal{L} \rightarrow \mathcal{L}' such that H \circ I = I'.

What we really wanted to do was describe a class of algebraic structures, using our theory. A model of a Lawvere theory in a category with finite products \mathcal{C} is a finite product preserving functor

A : \mathcal{L} \rightarrow \mathcal{C}

and a homomorphism of models is a natural transformation between them. For conventional algebraic structures, we take \mathcal{C} = \mathsf{Set}. Note we don’t require these natural transformations to interact appropriately with the product structure, as in fact this is automatic. The category of models of a Lawvere theory and their homomorphisms is denoted

\mathsf{Mod}(\mathcal{L}, \mathcal{C})

This approach of identifying structures with functors from a suitably structured category is known as functorial semantics. Notice the difference with monads as a categorical tool for describing algebraic structures. The Eilenberg-Moore category of a monad captures algebraic structures on a fixed base category, whereas we can straightforwardly consider models of Lawvere theories in any category with finite products. There is a forgetful functor given by evaluation at the object 1:

\mathsf{Mod}(\mathcal{L},\mathcal{C}) \rightarrow \mathcal{C}

In many cases, for example if \mathcal{C} is locally presentable, this functor will have a left adjoint, and will therefore induce a monad on \mathcal{C}. So a single Lawvere theory induces monads on many different base categories.

If we have two Lawvere theories such that there is an equivalence of categories:

\mathsf{Mod}(\mathcal{L}, \mathsf{Set}) \simeq \mathsf{Mod}(\mathcal{L}', \mathsf{Set})

then there is an isomorphism \mathcal{L} \cong \mathcal{L}' in the category of Lawvere theories. So there is a very tight correspondence between Lawvere theories and the category of models that they describe.

How to build a Lawvere Theory – take one

So how do we build a Lawvere theory? One approach is to start with a presentation (\Sigma, E). We then form a category with:

  1. Objects the natural numbers.
  2. Morphisms n \rightarrow m are m-tuples of terms in n variables, quotiented by provable equality in equational logic, with respect to the equations of our presentation.
  3. The product projection onto the ith component of a product corresponds to the term for the ith variable.
  4. Identity morphisms are (necessarily) the tuples of projection morphisms.
  5. Composition is given by substitution of terms.
  6. The inclusion functor I : \aleph^{op}_0 \rightarrow \mathcal{L} picks out the projection morphisms in \mathcal{L}.

So we see the intuition for the inclusion I is that it picks out the trivial terms corresponding to bare variables within the broader collection of operations.
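The category of tuples-of-terms under substitution can be sketched operationally. This is a minimal sketch with terms encoded as tagged tuples, omitting the quotient by provable equality that a genuine Lawvere theory requires; subst, compose and identity are our own names.

```python
# terms over variables 0..n-1: ("var", i), ("unit",), ("mul", s, t)

def subst(f, term):
    # substitution: replace variable i by the i-th component of the tuple f
    tag = term[0]
    if tag == "var":
        return f[term[1]]
    if tag == "unit":
        return term
    return ("mul", subst(f, term[1]), subst(f, term[2]))

def compose(g, f):
    # composition g . f of morphisms is substitution of f into g
    return [subst(f, t) for t in g]

def identity(n):
    # identities are the tuples of bare variables, the image of the inclusion I
    return [("var", i) for i in range(n)]
```

For example, composing the binary multiplication `[("mul", ("var", 0), ("var", 1))]` with an identity morphism on either side returns it unchanged, as the category laws demand.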

Example: We can build a Lawvere theory for monoids from the usual presentation in terms of a unit and multiplication operation, and unitality and associativity axioms. The category \mathsf{Mod}(\mathcal{L}, \mathsf{Set}) is then equivalent to the usual category of monoids.

Note it is important that we do not require models of a Lawvere theory to strictly preserve product structure. For the usual products in \mathsf{Set}, if we did this, the category of models for the Lawvere theory of monoids would be empty!

How to build a Lawvere Theory – take two

Instead of starting with a presentation in terms of operations and equations, what if we were given a finitary monad? Can we construct a Lawvere theory directly from this data?

The structure of a Lawvere theory in terms of equivalence classes of terms composed by substitution sounds suspiciously like the algebraic perspective on the Kleisli category of a monad. In fact, the two are very similar, up to some cosmetic details. For a finitary \mathsf{Set}-monad \mathbb{T}, we can construct a Lawvere theory (\mathcal{L},I) as follows:

  1. Form the Kleisli category \mathsf{Set}_{\mathbb{T}}.
  2. Restrict to the full subcategory of objects \{\}, \{ 0 \}, \{ 0, 1 \}, \ldots, and rename the objects to natural numbers in a similar way to the section above.
  3. Take the opposite category.

Tersely, we say the required Lawvere theory is the opposite of (the skeleton of) the full subcategory of finitely generated free algebras. As we would hope, we have an equivalence between \mathsf{Set}^{\mathbb{T}} and \mathsf{Mod}(\mathcal{L}, \mathsf{Set}), so the categories of models coincide.

Lawvere Theories to Monads

Given a Lawvere theory (\mathcal{L},I), we would like to go in the other direction, and construct a corresponding finitary monad. This can be done by taking a coend:

\mathbb{T}(X) = \int^{n \in \aleph_0} \mathcal{L}(n, 1) \times X^{n}

If this is unfamiliar, the idea is that we can construct the required monad as a certain categorical colimit. Intuitively, we form the collection of equivalence classes of terms, identifying product structure where appropriate.
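For the Lawvere theory of monoids, a representative of the coend is an n-ary term together with an n-tuple of arguments, and the coend identifications amount to equating representatives with the same flattening into the free monoid \mathbb{T}(X) of words. A small illustrative sketch, where normalise is a hypothetical name and terms are encoded as tagged tuples:

```python
# terms encoded as tagged tuples: ("var", i), ("unit",), ("mul", s, t)

def normalise(xs, term):
    # evaluate an n-ary term at a tuple of arguments in the free monoid,
    # flattening it to a word; representatives with equal flattenings are
    # exactly those identified by the coend relations
    tag = term[0]
    if tag == "var":
        return [xs[term[1]]]
    if tag == "unit":
        return []
    return normalise(xs, term[1]) + normalise(xs, term[2])
```

For instance, the representatives given by the terms m(x, m(e, y)) and m(m(x, e), y) at arguments ("a", "b") both flatten to the word ["a", "b"], so the coend identifies them.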

Again, the categories of Eilenberg-Moore algebras and models of the Lawvere theory in \mathsf{Set} agree up to equivalence.

In fact, the connection between Lawvere theories and finitary monads is about as strong as we might hope. The category of Lawvere theories is equivalent to the category of finitary monads on \mathsf{Set}. The presence of the inclusion morphism I in the definition of a Lawvere theory ensures that this equivalence goes through correctly.


Lawvere theories give a presentation-independent formulation of algebra. They have several strengths:

  1. They are relatively close in form to concepts from conventional universal algebra. In particular, they can be seen as an abstraction of the notion of clone.
  2. Unlike monads, a single object can describe models taken in different categories. For example, we can take models of the Lawvere theory of monoids in sets, posets, topological spaces and so on.
  3. Certain operations for combining theories are more natural from the perspective of Lawvere theories, rather than that of finitary monads.
  4. They are algebraic by construction. By comparison, monads include structures such as the continuation monad, which don’t have a clear interpretation in terms of universal algebra.

There are many generalisations of Lawvere theories, for example to incorporate operations of larger arities, or enrichments of the base category. These typically come with a correspondence to a natural class of monads, providing a different perspective to work with. The functorial semantics of Lawvere theories also suggests other structures, such as PROs and PROPs, which move us further away from conventional algebra to structures potentially having multiple outputs as well as inputs.

One intriguing question is what the analogue of Lawvere theories is for comonads. At this point there does not seem to be a satisfactory answer, as highlighted in the paper “Category Theoretic Understandings of Universal Algebra and its dual: monads and Lawvere theories, comonads and ?”. This example highlights that comonad theory cannot always be obtained by routinely “flipping monad theory upside down”; sometimes a straightforward application of duality isn’t enough.

Further reading: An excellent source for background is the book “Algebraic Theories” by Adamek, Rosicky and Vitale. Also useful for background is Hyland and Power’s “The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads”. Power has developed much of the theory generalising Lawvere theories in various ways, and applying them in computer science.