Automatic Differentiation in 38 lines of Haskell using Operator Overloading and Dual Numbers. Inspired by conal.net/papers/beautiful-differentiation

(See discussion on Hacker News)
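
The GHCi sessions below rely on a dual-number type and a diff function defined in the gist. As a rough idea of what those 38 lines contain, here is a minimal, hypothetical sketch; the names Dual, D, diff, and Float' are taken from the session output, but the gist's actual definitions may differ:

-- Hypothetical sketch: a dual number D x x' pairs a value x with its derivative x'.
type Float' = Float                 -- assumed alias, judging by the `Dual Float'` annotations below

data Dual a = D a a deriving Show

instance Num a => Num (Dual a) where
  D x x' + D y y' = D (x + y) (x' + y')
  D x x' * D y y' = D (x * y) (x' * y + x * y')   -- product rule
  negate (D x x') = D (negate x) (negate x')
  abs    (D x x') = D (abs x) (signum x * x')
  signum (D x _)  = D (signum x) 0
  fromInteger n   = D (fromInteger n) 0           -- constants carry derivative 0

-- Seed the input with tangent 1 and read the derivative off the result.
diff :: (Dual Float' -> Dual Float') -> Float' -> Float'
diff f x = let D _ x' = f (D x 1) in x'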

You can now differentiate (almost[1]) any differentiable hyperbolic, polynomial, exponential, or trigonometric function. Let's use the polynomial $f(x) = 2x^3 + 3x^2 + 4x + 2$, whose straightforward derivative, $f'(x) = 6x^2 + 6x + 4$, can be used to verify the AD program.

λ> f x = 2 * x^3 + 3 * x^2 + 4 * x + 2 -- our polynomial
λ> f 10
2342
λ> diff f 10 -- evaluate df/dx with x=10
664.0
λ> 2*3 * 10^2 + 3*2 * 10 + 4 -- verify derivative at 10
664
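
Under the hood, diff simply evaluates f at a dual number whose tangent component is seeded to 1. With the sketch above, the same numbers could be obtained by hand (hypothetical session):

λ> f (D 10 1) :: Dual Float' -- value and derivative in one pass
D 2342.0 664.0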

We can also compose functions:

λ> f x = 2 * x^2 + 3 * x + 5
λ> f2 = tanh . exp . sin . f
λ> f2 0.25
0.5865376368439258
λ> diff f2 0.25
1.6192873
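
Composition just works because every method of the (hypothetical) Floating instance applies the chain rule to the tangent component: for each primitive g, the result is D (g x) (g' x * x'). Here is a sketch of a few representative methods, continuing the Dual type above; the gist's actual instance may differ and, per footnote 1, omits the inverse hyperbolic functions:

instance Fractional a => Fractional (Dual a) where
  D x x' / D y y' = D (x / y) ((x' * y - x * y') / (y * y))  -- quotient rule
  fromRational r  = D (fromRational r) 0

-- Representative methods only; the remaining ones follow the same chain-rule pattern.
instance Floating a => Floating (Dual a) where
  pi            = D pi 0
  exp  (D x x') = D (exp x)  (exp x * x')
  log  (D x x') = D (log x)  (x' / x)
  sin  (D x x') = D (sin x)  (cos x * x')
  cos  (D x x') = D (cos x)  (negate (sin x) * x')
  sinh (D x x') = D (sinh x) (cosh x * x')
  cosh (D x x') = D (cosh x) (sinh x * x')
  tanh (D x x') = D (tanh x) (x' / cosh x ^ 2)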

We can also differentiate higher-dimensional functions, such as $f: \mathbb{R}^m \rightarrow \mathbb{R}^n$, by doing manually what diff does under the hood:

λ> f x y z = 2 * x^2 + 3 * y + sin z        -- f: R^3 -> R
λ> f (D 3 1) (D 4 1) (D 5 1) :: Dual Float' -- call `f` with dual numbers, set derivative to 1
D 29.041077 15.283662
λ> f x y z = (2 * x^2, 3 * y + sin z)       -- f: R^3 -> R^2
λ> f (D 3 1) (D 4 1) (D 5 1) :: (Dual Float', Dual Float')
(D 18.0 12.0,D 11.041076 3.2836623)

Or get partial derivatives by passing dual numbers only for the inputs whose sensitivities we want:

λ> f x y z = 2 * x^2 + 3 * y + sin z -- f: R^3 -> R
λ> f (D 3 1) 4 5 :: Dual Float'
D 29.041077 12.0
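
Since each forward-mode pass yields a single directional derivative, the full gradient of an $f: \mathbb{R}^3 \rightarrow \mathbb{R}$ function takes one seeded evaluation per input. A sketch of such a helper, built on the hypothetical Dual type above (grad3 is an illustrative name, not part of the gist):

-- Hypothetical helper: gradient of a three-argument function via three forward passes.
grad3 :: Num a => (Dual a -> Dual a -> Dual a -> Dual a) -> a -> a -> a -> (a, a, a)
grad3 f x y z = (dx, dy, dz)
  where
    tangent (D _ d) = d
    dx = tangent (f (D x 1) (D y 0) (D z 0))   -- seed x, hold y and z constant
    dy = tangent (f (D x 0) (D y 1) (D z 0))   -- seed y
    dz = tangent (f (D x 0) (D y 0) (D z 1))   -- seed z

For the f above, grad3 f 3 4 5 should recover (12, 3, cos 5), matching the partial derivative shown in the session.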

If you want to learn more about how this works, read the paper by Conal M. Elliott[2], watch the talk titled "Provably correct, asymptotically efficient, higher-order reverse-mode automatic differentiation" given by Simon Peyton Jones himself[3], or read the accompanying paper[4] of the same name. There's also a package named ad, which implements this in a complete, production-ready way; this gist is merely meant to illustrate the most basic form of the idea. Additionally, there's Andrej Karpathy's micrograd, a similarly minimal autograd engine written in Python.

  1. Only the inverse hyperbolic functions aren't yet implemented in the Floating instance

  2. http://conal.net/papers/beautiful-differentiation/beautiful-differentiation-long.pdf

  3. https://www.youtube.com/watch?v=EPGqzkEZWyw

  4. https://richarde.dev/papers/2022/ad/higher-order-ad.pdf