Monad bits and pieces

Previous posts have focussed quite a lot on general theory. In this post, we are going to examine a couple of specific monads in a fair amount of detail. Both can be seen as dealing with computations parameterised by a notion of external state.

The Reader or Environment Monad

For any set R, there is a monad with:

  • Endofunctor: The monad endofunctor is (-)^R.
  • Unit: The unit at X sends each element to the corresponding constant function, \eta_X(x) = \lambda r.\; x.
  • Multiplication: The multiplication acts as follows \mu_X(f)(r) = f(r)(r).

This monad is called the reader or environment monad. The computational interpretation of this monad is that a function X \rightarrow Y^R produces an output that depends on some state or environment it can read. Note the environment is fixed, computations cannot make modifications to it – this will be addressed later.
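This data can be sketched directly, modelling elements of T(X) = X^R as plain callables from environments to values (a Python sketch; the names unit and mult are my own):

```python
# A sketch of the reader monad T(X) = X^R, with elements of T(X)
# modelled as Python callables from environments to values.

def unit(x):
    # eta_X(x) is the constant function r |-> x
    return lambda r: x

def mult(f):
    # mu_X(f)(r) = f(r)(r): collapse two layers of environment reading
    return lambda r: f(r)(r)

R = [0, 1]                     # a small environment to test against
g = lambda r: r + 10           # an element of T(int)

# Both unit laws hold pointwise: mu . eta_T = id = mu . T(eta)
assert all(mult(unit(g))(r) == g(r) for r in R)
assert all(mult(lambda r: unit(g(r)))(r) == g(r) for r in R)
```

Note that multiplication reads the environment twice, but as the environment never changes, both reads see the same value.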

Aside: This monad arises via an adjunction involving slice categories. There is an obvious forgetful functor U : \mathsf{Set} / R \rightarrow \mathsf{Set}. This has a right adjoint (-)^*, with action on objects A^* = A \times R \xrightarrow{\pi_2} R. In turn, this functor has a further right adjoint \Pi. The composite \Pi \circ (-)^* induces the reader monad.

At this level of abstraction, it is not immediately obvious that this adjunction induces the right monad. To at least convince ourselves that the endofunctors agree, we note there is then a series of bijections between hom sets:

  1. A \rightarrow \Pi(B^*)
  2. A^* \rightarrow B^*
  3. U(A^*) \rightarrow B

We then note that U(A^*) = A \times R, and via Cartesian closure functions A \times R \rightarrow B bijectively correspond to functions A \rightarrow B^R. Therefore, \Pi \circ (-)^* \cong (-)^R. (The abstract argument generalises well beyond \mathsf{Set}.)

Our main interest is to consider the reader monad from an algebraic perspective. To keep things as simple as possible, we are going to choose R = \{ 0, 1 \}. So computations have access to a single binary bit of external data, which can be set low (0) or high (1). As usual, we would like to have an algebraic presentation of this monad. It is natural to introduce a binary lookup operation \mid, such that informally we interpret

x \mid y

as a computation which looks up what to do depending on the environment value, continuing as x if the bit is low, or as y if it is high. We now need to identify some suitable equational axioms that this operation should respect. An immediate idea is to require the operation to be idempotent, that is

x \mid x \;=\; x

Intuitively, if we do x regardless of whether the environment bit is high or low, that is the same as behaving exactly as x. A second axiom that springs to mind is:

(w \mid x) \mid (y \mid z) \;=\; w \mid z

Here the idea is that:

  1. If the bit is low, we will act as the left component of the left component.
  2. If the bit is high, we will act as the right component of the right component.
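Both axioms can be machine-checked against the intended semantics, reading a term over X as the pair of its behaviours on a low and a high bit (a minimal Python sketch; the encoding is mine):

```python
# Interpret a term over X as the function {0,1} -> X it denotes,
# encoded as the pair (behaviour on bit 0, behaviour on bit 1).

def var(v):
    return (v, v)              # a variable ignores the environment bit

def lookup(x, y):
    return (x[0], y[1])        # low bit: continue as x; high bit: as y

w, x, y, z = map(var, "wxyz")

# Idempotence: x | x = x
assert lookup(x, x) == x
# Endpoint collapse: (w | x) | (y | z) = w | z
assert lookup(lookup(w, x), lookup(y, z)) == lookup(w, z)
```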

In fact, it will turn out that these two axioms are exactly what we need. A simple consequence of the axioms is that:

x \mid (y \mid z) \;=\; (x \mid x) \mid (y \mid z) \;=\; (x \mid z) \;=\; (x \mid y) \mid (z \mid z) \;=\; (x \mid y) \mid z

So the operation \mid is associative, and we can now avoid the annoyance of writing brackets everywhere. We also notice that:

x \mid y \mid z \;=\; x \mid y \mid z \mid z \;=\; x \mid z

From the above equations, if we see a sequence of \mid operations, we can eliminate everything but the end points. Combining this with idempotence, it is hopefully not too hard to convince yourself that every term is equal to a unique term of the form x \mid y. So the equivalence classes of terms over X can be identified with pairs of elements from X, equivalently elements of X^{\{ 0, 1\}}.
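The endpoint-collapsing behaviour can be packaged as a little normalisation procedure on term trees, confirming that only the leftmost and rightmost variables matter (a sketch; the nested-tuple encoding of terms is my own):

```python
# A term is a variable (a string) or a pair (l, r) standing for l | r.

def leftmost(t):
    return t if isinstance(t, str) else leftmost(t[0])

def rightmost(t):
    return t if isinstance(t, str) else rightmost(t[1])

def normalise(t):
    # Every term equals leftmost | rightmost; by idempotence, a term
    # whose end points agree collapses to a bare variable.
    l, r = leftmost(t), rightmost(t)
    return l if l == r else (l, r)

assert normalise((("w", "x"), ("y", "z"))) == ("w", "z")
assert normalise(("x", ("y", "x"))) == "x"
```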

Finally, we note that

x \mid y \mid x \;=\; x \mid x \;=\;  x

In fact, the following are equivalent for a set equipped with a binary operation \mid:

  1. The equations x \mid x \;=\; x and (w \mid x) \mid (y \mid z) \;=\; w \mid z hold.
  2. The equations x \mid (y \mid z) \;=\; (x \mid y) \mid z and (x \mid y) \mid x \;=\; x hold.

For the bottom to top direction, using associativity to ignore brackets, we first expand both end points via the second equation, and then collapse the middle four symbols, again using the second equation:

x \mid z \;=\; x \mid y \mid x \mid z \mid y \mid z \;=\; x \mid y \mid z

Applying this ability to insert an arbitrary middle element twice, we calculate:

w \mid z \;=\; w \mid x \mid z \;=\; w \mid x \mid y \mid z

Finally, to show idempotence, reusing the first equation we proved:

x \;=\; x \mid y \mid x \;=\; x \mid x

We derived the first set of equations from computational intuitions; the second set identifies these structures as naturally occurring algebraic objects known as rectangular bands. (A band is a set with an associative, idempotent binary operation.) So the Eilenberg-Moore category of the reader monad on a bit is the category of rectangular bands and their homomorphisms.

The State Monad

We have already encountered the state monad, as the monad induced by the Cartesian closed structure on \mathsf{Set}. More precisely, we should refer to this as the global state monad.

Explicitly, for a fixed set of states S, this monad has:

  • Endofunctor: S \Rightarrow (- \times S).
  • Unit: \eta_X(x)(s) = (x,s).
  • Multiplication: \mu_X(f)(s) = \mathsf{let}\; (g,s') = f(s) \; \mathsf{in} \; g(s').
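This data can again be sketched with computations as Python callables from states to value–state pairs (the names unit and mult are my own):

```python
# The state monad T(X) = S => (X x S), with elements of T(X) as
# callables s |-> (value, new_state).

def unit(x):
    return lambda s: (x, s)

def mult(f):
    def run(s):
        g, s1 = f(s)           # run the outer computation first...
        return g(s1)           # ...then the inner one on the updated state
    return run

# A check on a one-bit state space S = {0, 1}:
inner = lambda s: ("done", 1 - s)   # flip the bit and return "done"
outer = lambda s: (inner, s)        # a computation returning a computation
assert mult(outer)(0) == ("done", 1)
assert mult(unit(inner))(1) == inner(1)
```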

Again, we are going to keep things as simple as possible, and consider a one bit state space, so S = \{ 0, 1 \}. We would like to find an equational presentation for this monad. An obvious place to start is to extend the presentation for the one bit reader monad above, which gives us the infrastructure to decide how to proceed based on the current state. A natural next step is to add operations to manipulate the bit. We could add two unary update operations, say with \mathop{\uparrow}(x) and \mathop{\downarrow}(x) meaning set the state high (respectively low) and proceed as x. In fact, we can make do with a single bit flip operation, which we shall denote \mathop{\updownarrow}(x), with the intuitive reading: flip the state to its opposite value, and proceed as x. We quickly note that we can define:

\mathop{\uparrow}(x) \;:=\; \mathop{\updownarrow}(x) \mid x \quad\mbox{ and }\quad \mathop{\downarrow}(x) \;:=\; x \mid \mathop{\updownarrow}(x)

So the explicit bit setting operations are definable by only flipping the bit if it is the wrong value. Obviously, we expect that

\mathop{\updownarrow}\mathop{\updownarrow}(x) = x

as flipping the bit twice should leave us back where we started. We also expect flipping a bit to reverse how computation proceeds:

\mathop{\updownarrow}(x \mid y) = \mathop{\updownarrow}(y) \mid \mathop{\updownarrow}(x)

In fact, these equations give a presentation of the one bit state monad. Using the chosen equations, every term is equal to one of the form

s \mid t

where both s and t are either a variable, or of the form \mathop{\updownarrow}(x). Such a term encodes a function which takes a bit, chooses the left or right component accordingly, returns a variable, and flips the state if appropriate. That is, it describes an element of \{ 0, 1 \} \Rightarrow X \times \{ 0, 1 \}.
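These equations can be spot-checked against the intended semantics, in which a term over X denotes an element of \{0,1\} \Rightarrow X \times \{0,1\} (a Python sketch; the dictionary encoding is mine):

```python
# A term denotes a function bit -> (variable, bit), encoded as a dict.

def var(v):
    return {0: (v, 0), 1: (v, 1)}        # return v, leave the bit alone

def lookup(x, y):
    return {0: x[0], 1: y[1]}            # branch on the bit, don't change it

def flip(x):
    return {b: x[1 - b] for b in (0, 1)}  # run x on the flipped bit

def up(x):                                # set high: flip only when low
    return lookup(flip(x), x)

x, y = var("x"), var("y")

# Flipping twice is the identity, and flipping reverses a lookup:
assert flip(flip(x)) == x
assert flip(lookup(x, y)) == lookup(flip(y), flip(x))
# The derived update really sets the bit high and continues as x:
assert up(x) == {0: ("x", 1), 1: ("x", 1)}
```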


To be more realistic, we really should consider larger state spaces or environments. Algebraically, this means instead of a binary lookup operation \mid specifying which of two choices to take depending on a bit value, for an environment with n possible values, we require an n-ary operation. As infix notation is no longer appropriate, we shall write lookup as:

l(x_1, \ldots, x_n)

The two axioms that induce the reader monad are the obvious extensions of those for a binary bit.

You may be wondering what to do if the environment is infinite in size. There are well-defined generalisations of universal algebra where operations of these bigger arities are permitted. As long as the maximum size of the operations is bounded, this all pretty much works as in the finite case, so we shall stick with that to keep the notation clearer.

For the state monad, we also need to think about how to change the state. Generalising the bit flip operation seems unnatural, for example should we rotate through the states in some arbitrary order? A better choice is to make the update operations primitive. Without loss of generality, it is notationally convenient to fix a state space S = \{1,\ldots,n\} of natural numbers. We introduce a unary operation u_i for each i \in S. The intuitive reading is that u_i(x) updates the state to i and continues as x. The equations we require, with their intuitive readings, are:

  1. l(u_1(x),\ldots, u_n(x)) = x. If we don’t depend on the state, we can ignore it.
  2. l(l(x_{1,1},\ldots,x_{1,n}),\ldots,l(x_{n,1},\ldots,x_{n,n})) = l(x_{1,1},x_{2,2},\ldots,x_{n,n}). If the state isn’t modified, we keep making the same choices.
  3. u_i(u_j(x)) = u_j(x). A second state change overwrites the first.
  4. u_i(l(x_1,\ldots,x_n)) = u_i(x_i). Update decides a subsequent lookup.
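For a small state space, all four equations can be machine-checked in the intended semantics, where a term denotes a function from states to variable–state pairs (a sketch; the encoding is mine):

```python
n = 3
S = range(1, n + 1)

def var(v):
    return {s: (v, s) for s in S}        # return v, leave the state alone

def lookup(*xs):                         # l(x_1, ..., x_n): branch on state
    return {s: xs[s - 1][s] for s in S}

def update(i, x):                        # u_i(x): set the state to i
    return {s: x[i] for s in S}

x = var("x")
grid = [[var(f"x{i}{j}") for j in S] for i in S]   # grid[i-1][j-1] is x_{i,j}

# 1. l(u_1(x), ..., u_n(x)) = x
assert lookup(*[update(i, x) for i in S]) == x
# 2. Nested lookups collapse to the diagonal
assert (lookup(*[lookup(*row) for row in grid])
        == lookup(*[grid[i - 1][i - 1] for i in S]))
# 3. A second update overwrites the first
assert update(1, update(2, x)) == update(2, x)
# 4. Update decides a subsequent lookup
assert update(2, lookup(*grid[0])) == update(2, grid[0][1])
```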

You may have expected the first axiom to be

l(x,\ldots,x) = x

In fact, this equation is derivable using the first two. The binary case is:

x = l(u_1(x), u_2(x)) = l(l(u_1(x), u_2(x)),l(u_1(x),u_2(x))) = l(x,x)

The general calculation is just notationally a bit harder to read.

The equations above appear in “Notions of Computation Determine Monads” by Plotkin and Power. In fact, they consider a further generalisation, reflecting a more realistic computational setting. We now consider a situation in which we have a set of memory addresses A, and each address has its own independent state value.

If we assume each memory location can take n possible values, we require an n-ary lookup operation l_a for each a \in A, and update operations u_{a,i}, intuitively setting address a to value i. We require the previous equations for each address:

  1. l_a(u_{a,1}(x),\ldots, u_{a,n}(x)) = x. If we don’t depend on the state at an address, we can ignore it.
  2. l_a(l_a(x_{1,1},\ldots,x_{1,n}),\ldots,l_a(x_{n,1},\ldots,x_{n,n})) = l_a(x_{1,1},x_{2,2},\ldots,x_{n,n}). If the state at an address isn’t modified, we keep making the same choices when looking up at that address.
  3. u_{a,i}(u_{a,j}(x)) = u_{a,j}(x). A second state change at the same address overwrites the first.
  4. u_{a,i}(l_a(x_1,\ldots,x_n)) = u_{a,i}(x_i). Update decides a subsequent lookup at the same address.

We also require some additional axioms, which encode that the states at different addresses don’t interfere with each other.

  1. l_a(l_{a'}(x_{1,1},\ldots,x_{1,n}),\ldots,l_{a'}(x_{n,1},\ldots,x_{n,n})) = l_{a'}(l_a(x_{1,1},\ldots,x_{n,1}),\ldots,l_a(x_{1,n},\ldots,x_{n,n})) where a \neq a'. Choices based on different addresses commute with each other.
  2. u_{a,i}(u_{a',j}(x)) = u_{a',j}(u_{a,i}(x)) where a \neq a'. Consecutive updates at different addresses commute with each other.
  3. u_{a,i}(l_{a'}(x_1,\ldots,x_n)) = l_{a'}(u_{a,i}(x_1),\ldots, u_{a,i}(x_n)) where a \neq a'. Updates and lookups at different addresses commute with each other.
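These interference-freedom equations can likewise be machine-checked semantically, with states as assignments of values to addresses (a Python sketch; the encoding and the two-address example are mine):

```python
from itertools import product

A, vals = ("a", "b"), (1, 2)             # two addresses, two values each
states = [dict(zip(A, v)) for v in product(vals, repeat=len(A))]

def var(v):
    return lambda s: (v, dict(s))        # return v, leave the state alone

def lookup(a, *xs):                      # l_a: branch on the value at a
    return lambda s: xs[s[a] - 1](s)

def update(a, i, x):                     # u_{a,i}: set address a to i
    return lambda s: x({**s, a: i})

def eq(t1, t2):                          # equality of denotations
    return all(t1(s) == t2(s) for s in states)

x = var("x")
grid = [[var(f"x{i}{j}") for j in vals] for i in vals]

# Updates at distinct addresses commute:
assert eq(update("a", 1, update("b", 2, x)),
          update("b", 2, update("a", 1, x)))
# An update at a commutes past a lookup at b:
assert eq(update("a", 1, lookup("b", *grid[0])),
          lookup("b", *[update("a", 1, t) for t in grid[0]]))
# Lookups at distinct addresses commute (transposing the grid):
cols = [lookup("a", grid[0][j], grid[1][j]) for j in range(2)]
assert eq(lookup("a", *[lookup("b", *row) for row in grid]),
          lookup("b", *cols))
```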

The resulting monad is the state monad with state space \{ 1,\ldots,n \}^A, a state being an assignment of a value to each address. Although the algebraic perspective on the reader and state monads was somewhat involved, it does pay off. For example, we now have the data needed to introduce algebraic operations and effect handlers for these monads.

As usual, there are lots more details to take care of in realistic work, such as moving beyond the category \mathsf{Set}, but the underlying intuitions transfer well. Interested readers should read the Plotkin and Power paper mentioned above, which also covers more advanced topics, such as a monad for local state.
