# Commutativity Algebraically

Recall the substitution notation we have been using: $t[t_x / x \mid x \in X]$

denotes the term $t$, with each $x \in X$ simultaneously replaced by the term $t_x$. For an equational presentation $(\Sigma, E)$, inducing a monad $\mathbb{T}$, consider a pair: $([s],[t]) \in \mathbb{T}(X) \times \mathbb{T}(Y)$

where $[t]$ denotes an equivalence class with representative term $t$. We can then write the action of the first double strength map in terms of representatives and substitutions as: $\mathsf{dst}([s],[t]) = [t[s[(x,y)/x \mid x \in X] / y \mid y \in Y]]$

and the second: $\mathsf{dst}'([s],[t]) = [s[t[(x,y)/y \mid y \in Y] / x \mid x \in X]]$

Therefore, the resulting monad is commutative if and only if for all $s,t$, the following equality is provable in equational logic: $t[s[(x,y)/x \mid x \in X] / y \mid y \in Y] = s[t[(x,y)/y \mid y \in Y] / x \mid x \in X]$

We shall call this the commutativity condition.
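The two substitution patterns are easier to absorb by playing with them concretely. The following Python sketch (all names illustrative, none from the formal development) encodes terms as nested tuples and implements the two double strengths symbolically:

```python
# Illustrative sketch: a term is either a variable name (a string) or a
# tuple (op, arg1, ..., argn); a pair (x, y) is encoded as the term
# ("pair", x, y).  These encodings are assumptions for this sketch.

def subst(term, mapping):
    """Simultaneously replace each variable v in term by mapping[v]."""
    if isinstance(term, str):                       # a variable
        return mapping.get(term, term)
    op, *args = term
    return (op, *(subst(a, mapping) for a in args))

def dst(s, xs, t, ys):
    """t[ s[(x,y)/x | x in X] / y | y in Y ]"""
    return subst(t, {y: subst(s, {x: ("pair", x, y) for x in xs})
                     for y in ys})

def dst_prime(s, xs, t, ys):
    """s[ t[(x,y)/y | y in Y] / x | x in X ]"""
    return subst(s, {x: subst(t, {y: ("pair", x, y) for y in ys})
                     for x in xs})
```

Of course, comparing the outputs of `dst` and `dst_prime` syntactically only detects equality of representatives; the commutativity condition asks for provable equality modulo the equations $E$.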

The substitutions can be made a bit easier to read with a change of notation. For two terms $s,t$, let $x_1,\ldots,x_m$ and $y_1,\ldots,y_n$ be a choice of enumeration of the variables appearing in the two terms. We can then write them as $s(x_1,\ldots, x_m) \quad\mbox{ and }\quad t(y_1,\ldots,y_n)$

and writing substitution in the natural way, the commutativity condition becomes: $t(s((x_1,y_1),\ldots,(x_m,y_1)),\ldots, s((x_1,y_n),\ldots,(x_m,y_n))) = s(t((x_1,y_1),\ldots,(x_1,y_n)),\ldots,t((x_m,y_1),\ldots,(x_m,y_n)))$

That’s an awful lot of formal notation and brackets to deal with. Let’s consider some implications of this condition to build intuition for what it means in practice.

Example: Consider a presentation with binary operations $+$ and $\times$. The action of the double strengths on $[x + x'] \in \mathbb{T}(X)$ and $[y \times y'] \in \mathbb{T}(Y)$ are: $\mathsf{dst}([x + x'], [y \times y']) = [((x,y) + (x',y)) \times ((x,y') + (x',y'))]$

and $\mathsf{dst}'([x + x'], [y \times y']) = [((x,y) \times (x,y')) + ((x',y) \times (x',y'))]$

If the monad $\mathbb{T}$ is commutative, the following equality must be provable in equational logic: $((x,y) + (x',y)) \times ((x,y') + (x',y')) = ((x,y) \times (x,y')) + ((x',y) \times (x',y'))$

As a special case of this observation, any monad presented by a binary operation must satisfy the following equation: $((x,y) + (x',y)) + ((x,y') + (x',y')) = ((x,y) + (x,y')) + ((x',y) + (x',y'))$

If $+$ is associative, this boils down to: $(x,y) + (x',y) + (x,y') + (x',y') = (x,y) + (x,y') + (x',y) + (x',y')$

The only difference between the two terms is the order of the middle two summands, so this equality certainly holds for any associative and commutative binary operation. The condition is satisfied in many natural algebraic structures, and this exhibits at least a weak relationship between commutative operations and commutative monads.
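To see the role of commutativity concretely, we can interpret the pairs as ordinary values and $+$ as an actual binary operation. A small check (illustrative names; integer addition versus string concatenation standing in for commutative and non-commutative associative operations):

```python
from itertools import product

# Interpret the four pairs (x,y), (x',y), (x,y'), (x',y') as values
# a, b, c, d, and + as the operation op.
def lhs(op, a, b, c, d):   # ((x,y) + (x',y)) + ((x,y') + (x',y'))
    return op(op(a, b), op(c, d))

def rhs(op, a, b, c, d):   # ((x,y) + (x,y')) + ((x',y) + (x',y'))
    return op(op(a, c), op(b, d))

add = lambda u, v: u + v   # on integers: associative and commutative
cat = lambda u, v: u + v   # on strings: associative but not commutative

# Addition satisfies the condition on a sample of inputs...
assert all(lhs(add, *v) == rhs(add, *v)
           for v in product(range(4), repeat=4))
# ...while concatenation swaps the middle two pieces and fails.
assert lhs(cat, "a", "b", "c", "d") == "abcd"
assert rhs(cat, "a", "b", "c", "d") == "acbd"
```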

An instructive boundary case is the following.

Example: Consider two terms $s,t$ in which no variables appear. These are constants in our equational theory. The commutativity condition implies that: $s = t$

as all the variable substitutions become trivial. Therefore any equational presentation of a commutative monad can have at most one distinct constant term. We have already encountered this phenomenon. The exception monad is only commutative when there is at most one exception constant.

“Constant counting” can be a quick way to discount the possibility that a monad for an equational presentation is commutative. For example the monads corresponding to commutative rings or rigs (semirings) cannot be commutative as they have distinct constant symbols for the additive and multiplicative structures. These are further natural examples of monads with all the binary operations in the signature commutative, but the resulting monads are not commutative.

Another simple case is worth considering.

Example: Let $s \in \mathbb{T}(X)$ and $t \in \mathbb{T}(Y)$ be terms in which only one variable appears, say $x_0$ and $y_0$ respectively. Then the left hand side of the commutativity condition is: $t[s[(x,y)/x \mid x \in X] / y \mid y \in Y] = t(s((x_0,y_0)))$

and the right hand side is: $s[t[(x,y)/y \mid y \in Y] / x \mid x \in X] = s(t((x_0,y_0)))$

So for $\mathbb{T}$ to be commutative, each pair of unary terms must satisfy $s(t(x)) = t(s(x))$.
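Interpreting unary terms as actual functions on the integers (an illustration, not part of any particular presentation): negation and doubling commute under composition, while adding one and doubling do not.

```python
# Three functions standing in for unary terms (illustrative choices).
neg = lambda x: -x
dbl = lambda x: 2 * x
inc = lambda x: x + 1

# neg and dbl commute: -(2x) == 2(-x).
assert all(neg(dbl(x)) == dbl(neg(x)) for x in range(-5, 6))
# inc and dbl do not: 2x + 1 != 2(x + 1) in general.
assert inc(dbl(3)) == 7 and dbl(inc(3)) == 8
```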

As an application of this special case, we introduce another commonly considered monad. For a fixed monoid $(M,\times,1)$ the writer monad has:

• Endofunctor: The endofunctor is the product $M \times (-)$.
• Unit: $\eta(x) = (1,x)$.
• Multiplication: $\mu(m,(n,x)) = (m \times n, x)$.

Computationally, this monad can be seen as encoding computations that also record some additional output such as logging. The monoid structure combines the outputs from successive computations. Algebraically, it corresponds to actions of the monoid $M$.

The writer monad has an equational presentation with one unary operational symbol for each element of the monoid, and equations:

• $1(x) = x$.
• $p(q(x)) = r(x)$ whenever $p \times q = r$ in the monoid.

Using our observation above, the writer monad is commutative if and only if the monoid $M$ is commutative. So in this case, we do see a strong connection between the monadic and algebraic notions of commutativity.
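The writer monad and its two double strengths are easy to sketch directly. Here a monoid is given as a binary operation plus a unit element; the form of the double strengths below is the standard computation for the writer monad, and all names are illustrative.

```python
# Writer monad for a monoid (op, e): a computation over X is a pair
# (m, x), with m in M the accumulated output and x the result.

def unit(e):
    return lambda x: (e, x)                 # eta(x) = (1, x)

def mult(op):
    def mu(mmx):
        m, (n, x) = mmx
        return (op(m, n), x)                # mu(m, (n, x)) = (m x n, x)
    return mu

def dst(op, s, t):
    """First double strength: combine outputs as m x n."""
    (m, x), (n, y) = s, t
    return (op(m, n), (x, y))

def dst_prime(op, s, t):
    """Second double strength: combine outputs as n x m."""
    (m, x), (n, y) = s, t
    return (op(n, m), (x, y))

concat = lambda a, b: a + b                 # free monoid on characters
plus = lambda a, b: a + b                   # (N, +, 0), commutative

# Over a non-commutative monoid the two double strengths disagree...
assert dst(concat, ("a", 1), ("b", 2)) == ("ab", (1, 2))
assert dst_prime(concat, ("a", 1), ("b", 2)) == ("ba", (1, 2))
# ...but over a commutative monoid they coincide.
assert dst(plus, (2, 1), (3, 2)) == dst_prime(plus, (2, 1), (3, 2))
```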

Another useful boundary case is to consider what the commutativity condition means for variables.

Example: Consider a variable $x_0$ and an arbitrary term $t$. The left hand side of the commutativity condition is: $t[x_0[(x,y)/x \mid x \in X] / y \mid y \in Y] = t[(x_0,y)/y \mid y \in Y]$

and the right hand side is: $x_0[t[(x,y)/y \mid y \in Y] / x \mid x \in X] = t[(x_0,y)/y \mid y \in Y]$

So variables always satisfy the commutativity condition with respect to any other term. With hindsight, maybe we should not find this too surprising.

We will now consider an important example, which will point the way to getting a better handle on the unpleasant looking general commutativity condition we deduced above.

Example: We now consider an equational presentation with a constant term $0$, a binary term $x + x'$ and a unary term $h$. The equations yielded by the commutativity condition for the pairs $([0],[h])$ and $([x + x'],[h])$ are: $h(0) = 0 \quad\mbox{ and }\quad h((x,y) + (x',y)) = h((x,y)) + h((x',y))$

We can simplify the second condition by renaming variables, arriving at the conditions: $h(0) = 0\quad\mbox{ and }\quad h(x + x') = h(x) + h(x')$

These conditions should hopefully look familiar: they are exactly the equations we require for $h$ to be a homomorphism with respect to $+$ and $0$.

We now aim to make the connection with homomorphisms from the previous example precise by making two observations:

1. For positive natural $k$ and set $Z$, the term $s(x_1,\ldots,x_m)$ induces an $m$-ary operation on $\mathbb{T}(Z)^k$ with action $(([t_{1,1}],\ldots,[t_{1,k}]),\ldots,([t_{m,1}],\ldots,[t_{m,k}])) \mapsto ([s(t_{1,1},\ldots,t_{m,1})],\ldots,[s(t_{1,k},\ldots,t_{m,k})])$.
2. Similarly, the term $t(y_1,\ldots,y_n)$ induces an $n$-ary function $\mathbb{T}(Z)^{n} \rightarrow \mathbb{T}(Z)$ with action $([t_1],\ldots, [t_n]) \mapsto [t(t_1,\ldots,t_n)]$.

The commutativity condition is equivalent to saying that the $n$-ary function $\mathbb{T}(X \times Y)^{n} \rightarrow \mathbb{T}(X \times Y)$ induced by $t$ is a homomorphism with respect to the $m$-ary operation induced by $s$, taking $k = n$ and $Z = X \times Y$ above. In this sense, the commutativity condition can be rephrased as saying that all terms are homomorphisms with respect to each other.
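Interpreting the two observations over the integers, with both $s$ and $t$ taken to be binary addition (an illustrative choice, so $k = n = m = 2$): the induced operation acts componentwise on pairs, the induced function sums a pair, and the homomorphism condition becomes the familiar interchange of the two sums.

```python
# s(x, x') = x + x' induces a componentwise binary operation on
# k-tuples of values (observation 1, interpreted in the integers)...
def s_induced(p, q):
    return tuple(a + b for a, b in zip(p, q))

# ...and t(y, y') = y + y' induces a function on n-tuples
# (observation 2).
def t_induced(p):
    return sum(p)

# The commutativity condition: t_induced is a homomorphism with
# respect to s_induced.
p, q = (1, 2), (3, 4)
assert t_induced(s_induced(p, q)) == t_induced(p) + t_induced(q)
```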

From the algebraic perspective, a monad is commutative in the sense that all terms can be commuted past each other as homomorphisms.
