In previous posts we have seen a lot of the basic mathematical infrastructure of monads. The aim of this post is to adopt a computational view on monads, viewing $T(X)$ as effectful computations over an object $X$. From this perspective, we shall consider two natural questions:
- How can we build such computations? In other words, how do we get things into $T(X)$?
- What can we do with computations once we have built them? In other words, how do we get things back out of $T(X)$?
There is a lot of discussion of these topics in the literature. As usual, we will try and keep things simple, restricting attention to simple monads on $\mathsf{Set}$, and concentrating on the key ideas.
So far, the only way we have seen to build elements of $T(X)$ is to apply the monad unit $\eta_X : X \rightarrow T(X)$.
Example: For the exception monad $T(X) = X + E$, for a set of exceptions $E$, the value $\eta_X(x)$ is a trivial computation that returns the value $x$, and doesn’t throw an exception.
Example: The finite powerset monad $\mathcal{P}_f$ can be seen as a form of bounded non-determinism. $\eta_X(x) = \{ x \}$ is a trivially non-deterministic computation that always results in the value $x$.
Example: The list monad $L$ can be seen as a form of non-determinism which can be easier to implement in a programming language. It is unusual as there is an order on computations, and repeated values are allowed. Obviously another perspective is to see such computations as living in a list data structure. Again $\eta_X(x) = [x]$ is the trivial singleton list containing only the value $x$.
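To make this concrete, here is a minimal Python sketch (not from the original presentation; the function names and encodings are hypothetical) of the unit for each of the three monads above, with exception-monad values encoded as tagged pairs:

```python
# Hypothetical encodings: the unit of each monad sends a plain value
# to a trivially effectful computation.

def eta_exception(x):
    # Exception monad T(X) = X + E: tag x as a normal result, no exception.
    return ("ok", x)

def eta_powerset(x):
    # Finite powerset monad: a computation with exactly one possible outcome.
    return frozenset({x})

def eta_list(x):
    # List monad: the singleton list containing only x.
    return [x]
```

In each case the unit produces the "boring" computation that simply returns its argument.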
The intuition we derive from these examples is that $\eta_X(x)$ yields a trivially effectful computation in some suitable sense. To get any real use out of these computational effects, we need to be able to build less boring examples as well. To solve this problem, as usual we shall adopt an algebraic perspective. Assume we have an equational presentation $(\Sigma, \mathcal{E})$. An $n$-ary operation $\mathsf{op} \in \Sigma$ induces a function $\mathsf{op}_X : T(X)^n \rightarrow T(X)$ as follows:

$\mathsf{op}_X([t_1], \ldots, [t_n]) = [\mathsf{op}(t_1, \ldots, t_n)]$
Continuing the examples above:
Example: The exception monad has a presentation consisting of a constant for each exception $e$ in $E$. These induce functions $T(X)^0 \rightarrow T(X)$ with action $() \mapsto e$, which we can think of as raising the exception $e$.
You may wonder where the notion of catching or handling an exception appears. This is a good question, and will be addressed in the next section.
Example: The finite powerset monad has a presentation with a binary operation $\vee$ and a constant element $\bot$. The constant induces a function $1 \rightarrow \mathcal{P}_f(X)$ with action $() \mapsto \emptyset$, which we can think of as picking out a diverging computation.
The binary operation induces an operation $\vee_X : \mathcal{P}_f(X)^2 \rightarrow \mathcal{P}_f(X)$ with action $(U, V) \mapsto U \cup V$, which takes two non-deterministic computations, and combines their possible behaviours. For example $\{ x \} \vee_X \{ y \} = \{ x, y \}$ is a computation yielding either $x$ or $y$.
Example: Similarly to the previous example, the list monad has a presentation with binary operation $\ast$, and a constant element $\epsilon$. The constant induces a function $1 \rightarrow L(X)$ with action $() \mapsto []$.
The binary operation induces a function $\ast_X : L(X)^2 \rightarrow L(X)$ with action $(l_1, l_2) \mapsto l_1 \mathbin{+\!\!+} l_2$, concatenating two lists. We can interpret these operations as simply manipulating data structures, or more subtly as a form of non-determinism in which the operations respect the order on choices.
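The operations from these presentations can be sketched in Python as functions acting on computations. This is a hypothetical encoding (the names are mine, not from the post), reusing the tagged-pair, frozenset, and list representations from before:

```python
# Hypothetical sketch: the algebraic operations induced by each presentation,
# acting on computations (elements of T(X)) rather than plain values.

def raise_exc(e):
    # Constant for exception e: a computation that raises e.
    return ("exc", e)

def bottom():
    # Constant for the powerset presentation: the diverging computation,
    # with no possible outcomes.
    return frozenset()

def choose(u, v):
    # Binary operation for the powerset presentation: union of the
    # possible behaviours of two non-deterministic computations.
    return u | v

def nil():
    # Constant for the list presentation: the empty list.
    return []

def append(l1, l2):
    # Binary operation for the list presentation: concatenation.
    return l1 + l2
```

For example, `choose(eta(x), eta(y))` builds the genuinely non-deterministic computation yielding either `x` or `y`.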
Rather than describe these operations directly in terms of substitutions, it would be better to phrase things in terms of the available monad structure. To do so, we abuse notation and let $n$ denote the set $\{ 1, \ldots, n \}$. A family of $n$ equivalence classes of terms over $X$, $[t_1], \ldots, [t_n] \in T(X)$, can be identified with a morphism $f : n \rightarrow T(X)$, and a single $n$-ary term $t$ with a map $t : 1 \rightarrow T(n)$. The composite $f^* \circ t$, where $f^*$ denotes the Kleisli extension of $f$, corresponds to the substitution we are interested in:

$1 \xrightarrow{t} T(n) \xrightarrow{f^*} T(X)$
In fact, we have generalised slightly, as $t$ is now an arbitrary term, not just an operation appearing in the signature.
Of course, we expect these mappings induced by Kleisli morphisms to behave in a suitably uniform way. That is, they should be natural in some sense. This actually requires a modicum of care to get right. We should really consider our mapping as being of type $(U_T)^n \Rightarrow U_T$. Here, $U_T : \mathsf{Set}_T \rightarrow \mathsf{Set}$ is the forgetful functor from the Kleisli category. This has action on objects $X \mapsto T(X)$, so this rephrasing makes sense. We then note that the induced maps:
$T(X)^n \rightarrow T(X)$

are natural. This is sometimes phrased as being natural with respect to Kleisli morphisms. They also trivially interact well with the strength morphisms in $\mathsf{Set}$. In more general categories, there is a bijection between:
- Families of morphisms $T(X)^n \rightarrow T(X)$, Kleisli natural in $X$, and interacting well with the monad strength. These are referred to as algebraic operations.
- Morphisms $1 \rightarrow T(n)$, referred to as generic effects.
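One direction of this bijection can be sketched concretely for the finite powerset monad. The following Python fragment is a hypothetical illustration (names and encodings are mine): the Kleisli extension turns a generic effect, an element of $T(n)$, into the corresponding algebraic operation on computations.

```python
# Hypothetical sketch for the finite powerset monad: an algebraic operation
# arises from a generic effect t in T(n) by Kleisli extension.

def kleisli_ext(f):
    # Kleisli extension f* : T(n) -> T(X) of f : n -> T(X),
    # where T is the finite powerset monad.
    return lambda s: frozenset().union(*[f(i) for i in s])

# Generic effect for binary non-deterministic choice: an element of T(2),
# here the set of both "variables" 0 and 1.
generic_choice = frozenset({0, 1})

def choose(u, v):
    # The induced algebraic operation T(X)^2 -> T(X): substitute the
    # computations u and v for the variables 0 and 1 in the generic effect.
    f = lambda i: u if i == 0 else v
    return kleisli_ext(f)(generic_choice)
```

Applying `choose` to two singleton computations recovers the union operation from the presentation, as the bijection predicts.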
To give a denotational semantics for a serious programming language, a lot more details need filling in. The category $\mathsf{Set}$ is not suitable for realistic work, but is certainly a helpful way to build up the basic intuitions.
Now that we have some tools for building up elements of $T(X)$, what can we do with them? The unit and multiplication of a monad have types $X \rightarrow T(X)$ and $T^2(X) \rightarrow T(X)$, so they give us no way “out of the monad”.
What we are looking for is a systematic way of building well-behaved maps of type $T(X) \rightarrow A$ for some set $A$. The key idea is to recall the Eilenberg-Moore adjunction. For a set $X$, the free algebra over $X$ is:

$(T(X), \mu_X)$
The universal property of this adjunction means that if we can identify another Eilenberg-Moore algebra $(A, a)$, then morphisms

$f : X \rightarrow A$

bijectively correspond to algebra homomorphisms:

$\hat{f} : (T(X), \mu_X) \rightarrow (A, a)$

That is, morphisms

$\hat{f} : T(X) \rightarrow A$

respecting the algebraic structure. It is worth considering the roles of these components:
- The morphism $f : X \rightarrow A$ specifies how values should be interpreted.
- The algebra $(A, a)$ specifies how the structure used to build up an effectful computation should be evaluated, in a manner respecting the structure encoded by the monad.
As usual, this is probably best understood by considering some examples.
Example: For the exception monad over $X + E$, an Eilenberg-Moore algebra $a : A + E \rightarrow A$ must act as the identity on the left component of the coproduct, and send each element of the right component to any value of its choosing in $A$. In other words, we specify a default value to take for each of the exceptions that might be raised. Given a function $f : X \rightarrow A$, the induced algebra morphism $\hat{f} : T(X) \rightarrow A$ will map values according to $f$, and otherwise choose the value specified by the algebra when it encounters a raised exception.
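This is the essence of an exception handler, and can be sketched in a few lines of Python. The encoding of $X + E$ as tagged pairs and the function names are hypothetical, not from the post:

```python
# Hypothetical sketch: handling exceptions via an Eilenberg-Moore algebra.
# The algebra amounts to a default value for each exception; the induced
# homomorphism maps normal results through f and exceptions to defaults.

def handle(f, default):
    # Induced algebra morphism T(X) -> A, where T(X) = X + E is encoded
    # as tagged pairs ("ok", x) or ("exc", e).
    def h(comp):
        tag, payload = comp
        return f(payload) if tag == "ok" else default(payload)
    return h
```

For instance, `handle(str, lambda e: "error: " + e)` renders normal results as strings and replaces any raised exception with an error message.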
Example: For the finite powerset monad $\mathcal{P}_f$, giving an Eilenberg-Moore algebra is equivalent to specifying a join semilattice structure. For example, we can consider the Eilenberg-Moore algebra on the natural numbers specified by taking the maximum of pairs of elements, with bottom value $0$. The resulting homomorphism $\mathcal{P}_f(\mathbb{N}) \rightarrow \mathbb{N}$ induced by the identity will map a finite set to its maximum value (inevitably defaulting to $0$ if the set is empty).
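As a quick sketch of this example (hypothetical code, not from the post), the induced homomorphism is exactly a maximum with a default:

```python
# Hypothetical sketch: evaluating non-deterministic computations using the
# join-semilattice (max, 0) on the natural numbers as an EM algebra.

def eval_max(s):
    # The homomorphism induced by the identity on the naturals:
    # a finite set of outcomes goes to its maximum, with the empty set
    # going to the bottom element 0.
    return max(s, default=0)
```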
Example: The pattern for the list monad is similar. An Eilenberg-Moore algebra is a monoid, so we must specify a multiplication and unit element. For example if $A$ is the set of two by two matrices over the natural numbers, this carries a monoid structure under matrix multiplication. The identity on $A$ then induces a map $L(A) \rightarrow A$ that multiplies a list of matrices together from left to right. Note that this example fundamentally exploits the fact that the multiplication operation is not forced to be commutative.
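This matrix example can also be sketched directly. The following hypothetical Python fragment represents two by two matrices as nested tuples; the induced homomorphism is a left-to-right fold over the list, starting from the monoid unit:

```python
# Hypothetical sketch: evaluating list computations via the monoid of
# 2x2 natural number matrices under matrix multiplication.

def mat_mul(m, n):
    # Multiply two 2x2 matrices represented as tuples of row tuples.
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

IDENTITY = ((1, 0), (0, 1))

def eval_product(ms):
    # The induced homomorphism: multiply a list of matrices from left to
    # right; the empty list goes to the monoid unit.
    acc = IDENTITY
    for m in ms:
        acc = mat_mul(acc, m)
    return acc
```

Since matrix multiplication is not commutative, reordering the input list genuinely changes the result, which is exactly the point of the example.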
The ideas in this section are the basics of what are known as effect handlers. As with algebraic operations, to produce a denotational semantics for a realistic programming language, and more complex handlers, there are many details to take care of, and we must move beyond $\mathsf{Set}$. However, this simplified setting is sufficient to illustrate the key ideas, without dealing with distracting technical details.
We have seen the basic mathematical machinery to build effectful computations within a monad, and also how to evaluate these computations to values. The main points are:
- The morphisms $T(X)^n \rightarrow T(X)$ that allow us to build elements of $T(X)$ arise naturally from an algebraic presentation of the monad.
- The morphisms $T(X) \rightarrow A$ that allow us to evaluate computations in $T(X)$ require us to make choices as to how that evaluation should proceed, specifically by fixing a suitable Eilenberg-Moore algebra, and then exploiting the universal property of free algebras.
The literature in this area contains many nice examples, and is enjoyable to read. For algebraic operations and generic effects, search for work of Plotkin and Power, and for effect handlers, the work of Pretnar, Plotkin and Bauer is a good place to start reading. There is a lot more to these topics than the details we have sketched here, so it is well worth exploring further.
We have also been limited in our examples by the relatively simple monads we have introduced so far. We will see more interesting monads from a computational perspective in later posts.