In our everyday mathematics, we take for granted that addition is commutative and associative, that multiplication distributes over addition, and so on. But what if we had to build arithmetic from nothing but the concept of "next number"? In this post, let us construct natural numbers and their operations in Agda, proving each arithmetic property step by step using structural induction.
Natural Numbers
We can inductively define natural numbers in Agda like so:
data Nat : Set where
  zero : Nat
  succ : Nat → Nat

{-# BUILTIN NATURAL Nat #-}
The BUILTIN NATURAL pragma tells Agda to use its built-in natural number machinery for Nat, representing values internally with Haskell's arbitrary-precision integers and allowing us to write the literal 3 instead of succ (succ (succ zero)). More importantly, this makes the representation efficient: without the pragma, a natural number n takes O(n) memory instead of O(log n).
Note that this definition captures exactly what natural numbers are: either zero, or the successor of some other natural number.
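As a quick sanity check (assuming _≡_ and refl from the previous post are in scope; the example name is our own), the literal notation is just shorthand for iterated successors:

-- 3 and succ (succ (succ zero)) are definitionally equal, so refl suffices.
three-unfolds : 3 ≡ succ (succ (succ zero))
three-unfolds = refl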
Proving vs Testing
Before we dive into the proofs, it's worth emphasizing what we mean by "proving" here.
Unlike testing, which can only show that a property holds for specific examples, our proofs in Agda establish that properties hold for all natural numbers.
When we prove +-comm, we're not just checking that 2 + 3 ≡ 3 + 2, but that addition is commutative for every possible pair of natural numbers.
This is made possible by Agda's equality type and the proof techniques we established in our previous post, where we saw how refl, sym, trans, and cong work together to enable equational reasoning.
Addition
Addition is defined recursively on the first argument:
_+_ : Nat → Nat → Nat
0 + m = m
(succ n) + m = succ (n + m)
This definition embodies the intuition that adding m to the successor of n is the same as taking the successor of n + m.
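For example (a small check of our own), the recursion peels successors off the first argument until it reaches zero:

-- 2 + 1 = succ (1 + 1) = succ (succ (0 + 1)) = succ (succ 1) = 3
two-plus-one : 2 + 1 ≡ 3
two-plus-one = refl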
Associativity
One of the most basic properties of addition is associativity:
+-assoc : ∀ (n m o : Nat) → n + (m + o) ≡ (n + m) + o
+-assoc zero m o = refl
+-assoc (succ n) m o = cong succ (+-assoc n m o)
The proof proceeds by structural induction on n:
- Base case n = 0: We need to show 0 + (m + o) ≡ (0 + m) + o, which simplifies to m + o ≡ m + o by the definition of addition.
- Inductive case n = succ n: We need to show succ n + (m + o) ≡ (succ n + m) + o. By the definition of addition, this becomes succ (n + (m + o)) ≡ succ ((n + m) + o). Using congruence and the inductive hypothesis, this follows immediately.
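If you prefer to see the inductive hypothesis spelled out, the same proof can be written with a where clause (an equivalent restatement; the name +-assoc′ is our own):

+-assoc′ : ∀ (n m o : Nat) → n + (m + o) ≡ (n + m) + o
+-assoc′ zero m o = refl
+-assoc′ (succ n) m o = cong succ ih
  where
    -- The inductive hypothesis, obtained by recursing on the smaller n.
    ih : n + (m + o) ≡ (n + m) + o
    ih = +-assoc′ n m o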
Right Identity and Successor Lemmas
To prove commutativity, we first need two helper lemmas that aren't immediately obvious from our definition:
+-m-z : ∀ (m : Nat) → m ≡ m + 0
+-m-z zero = refl
+-m-z (succ m) = cong succ (+-m-z m)

+-m-s : ∀ (n m : Nat) → succ n + m ≡ n + succ m
+-m-s zero m = refl
+-m-s (succ n) m = cong succ (+-m-s n m)
The first lemma shows that 0 is a right identity for addition (not just the left identity that is obvious from our definition).
The second shows that we can "move" a successor from the first argument to the second.
Both require induction to prove, highlighting how even simple properties need careful justification when building from first principles.
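The asymmetry comes from _+_ matching on its first argument: 0 + m reduces immediately, but for a variable m the term m + 0 is stuck, which is exactly why the first lemma needs induction. A concrete instance (example name ours):

-- For a closed numeral both sides compute to 4; the general claim needs +-m-z.
four-right-id : 4 ≡ 4 + 0
four-right-id = +-m-z 4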
Commutativity
With these lemmas in hand, we can prove commutativity:
+-comm : ∀ (n m : Nat) → n + m ≡ m + n
+-comm zero m = +-m-z m
+-comm (succ n) m = trans (cong succ (+-comm n m)) (+-m-s m n)
The proof again proceeds by induction on n:
- Base case n = 0: We need 0 + m ≡ m + 0, which follows from our right identity lemma.
- Inductive case n = succ n: We need succ n + m ≡ m + succ n. This becomes succ (n + m) ≡ m + succ n. By the inductive hypothesis, n + m ≡ m + n, so succ (n + m) ≡ succ (m + n). Finally, our successor lemma gives us succ (m + n) ≡ m + succ n.
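Returning to the example from the Proving vs Testing section, the specific equation 2 + 3 ≡ 3 + 2 now falls out as a single instantiation of the general theorem (example name ours):

-- Instantiating the universally quantified proof at 2 and 3.
two-three-comm : 2 + 3 ≡ 3 + 2
two-three-comm = +-comm 2 3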
Multiplication
As expected, multiplication is defined in terms of addition:
_*_ : Nat → Nat → Nat
0 * m = 0
(succ n) * m = m + (n * m)
This captures the idea that multiplying the successor of n by m means adding m to the result of n * m.
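As a small check (example name ours), multiplication unfolds into repeated addition exactly as the definition suggests:

-- 2 * 3 = 3 + (1 * 3) = 3 + (3 + (0 * 3)) = 3 + (3 + 0) = 6
two-times-three : 2 * 3 ≡ 6
two-times-three = refl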
Distributivity
Multiplication distributes over addition:
*-+-dist : ∀ (n m o : Nat) → (n * o) + (m * o) ≡ (n + m) * o
*-+-dist zero m o = refl
*-+-dist (succ n) m o = trans (sym (+-assoc o (n * o) (m * o))) (cong (λ c → o + c) (*-+-dist n m o))
The proof is by induction on n:
- Base case n = 0: We need (0 * o) + (m * o) ≡ (0 + m) * o, which simplifies to 0 + (m * o) ≡ m * o.
- Inductive case: We need (succ n * o) + (m * o) ≡ (succ n + m) * o. This becomes (o + (n * o)) + (m * o) ≡ o + ((n + m) * o). Using associativity of addition and the inductive hypothesis, this follows.
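A concrete instantiation (example name ours):

-- Distributivity at particular numbers: 10 + 15 ≡ 5 * 5.
dist-example : (2 * 5) + (3 * 5) ≡ (2 + 3) * 5
dist-example = *-+-dist 2 3 5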
Associativity
Finally, multiplication is associative:
*-assoc : ∀ (n m o : Nat) → n * (m * o) ≡ (n * m) * o
*-assoc zero m o = refl
*-assoc (succ n) m o = trans (cong (λ c → (m * o) + c) (*-assoc n m o)) (*-+-dist m (n * m) o)
The proof combines our previous results:
- Base case n = 0: Trivial by definition.
- Inductive case: We need succ n * (m * o) ≡ (succ n * m) * o. This becomes (m * o) + (n * (m * o)) ≡ (m + (n * m)) * o. Using the inductive hypothesis and distributivity, this follows, as the sketch below spells out.
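The trans chain above packs two steps into one line. Here is the same inductive step with each link named (an equivalent restatement; the name *-assoc′ and the step names are our own):

*-assoc′ : ∀ (n m o : Nat) → n * (m * o) ≡ (n * m) * o
*-assoc′ zero m o = refl
*-assoc′ (succ n) m o = trans step₁ step₂
  where
    -- Rewrite under (m * o) + _ using the inductive hypothesis.
    step₁ : (m * o) + (n * (m * o)) ≡ (m * o) + ((n * m) * o)
    step₁ = cong (λ c → (m * o) + c) (*-assoc′ n m o)
    -- Fold the result back up with distributivity.
    step₂ : (m * o) + ((n * m) * o) ≡ (m + (n * m)) * o
    step₂ = *-+-dist m (n * m) o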
Exercises
- Prove that multiplying by 0 on the right gives 0: ∀ (m : Nat) → m * 0 ≡ 0.
- Prove that multiplication is commutative: ∀ (n m : Nat) → n * m ≡ m * n. (Hint: You'll need helper lemmas similar to those we used for addition.)
- Define exponentiation and prove that (n ^ m) ^ o ≡ n ^ (m * o).
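If you want a starting point, here is one possible shape for these exercises; the names below (*-z-right, *-comm, _^_, ^-*-assoc) are our own suggestions, and the bodies are left as holes to fill in:

-- Suggested signatures only; replace each ? with your proof or definition.
*-z-right : ∀ (m : Nat) → m * 0 ≡ 0
*-z-right m = ?

*-comm : ∀ (n m : Nat) → n * m ≡ m * n
*-comm n m = ?

_^_ : Nat → Nat → Nat
n ^ m = ?

^-*-assoc : ∀ (n m o : Nat) → (n ^ m) ^ o ≡ n ^ (m * o)
^-*-assoc n m o = ?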
Conclusions
The beauty of this approach is that every property we "obviously know" about arithmetic must be carefully justified. In building mathematics from the ground up, we see exactly which assumptions we need and how they combine to give us the rich structure of arithmetic.