John Addison (1930–2025) died last summer at the age of 95. He was my PhD advisor at UC Berkeley, and I count myself extraordinarily lucky to have worked under his guidance. My condolences go out to his family.
When I arrived in Berkeley in 1966, I had no clear idea what I wanted to study. Some fellow Canadian newcomers strongly recommended a logic course taught by a professor named Addison. I went, was immediately captivated, and to this day I regard him as the best instructor I ever had—and my professors at the University of British Columbia (UBC) had already set a very high bar.
I especially loved how precise he was, and I try to be equally precise myself. His notational conventions were elegant and economical, and I follow them religiously in my own work.
As an undergraduate at UBC I had taken no logic, so I needed special permission to enroll. I went to see Addison, and he agreed to let me in when I told him I had been the top science graduate at UBC. He didn’t have to: I had missed the first-year logic course and knew no proof theory. Nevertheless, his course, which focused mainly on model theory, provided an outstanding introduction to logic. I have always preferred model theory in logic and, correspondingly, semantics in computer science.
Addison’s exams were tough but brilliant, and they always contained a touch of humor. I vividly recall one true/false question: “This statement is false.” It was not an easy choice, because he deducted marks for wrong answers. The optimal strategy was to skip the question altogether, which is what I did, since answering either way risked losing a mark.
I loved the course and ended up tying for the top mark. The following year I eagerly enrolled in his seminar on definability, and it did not disappoint. Early in the seminar I was assigned an unsolved problem involving intricate proofs about the difficulty of defining certain subsets of the Baire space (the set of all infinite sequences of natural numbers). I was completely stuck until Addison suggested I read a paper by Hartley Rogers. That paper solved a parallel problem in number theory, and by using analogical reasoning I eventually found, at the last moment, a similarly simple solution to my original problem.
My analogical solution relied on Addison’s description of continuous functions as continuously operating Turing machines. Given two sets of sequences, I showed that one could determine their relative complexity by means of an infinite game. Looking back, this episode illustrates how much I had already absorbed from Addison: the Turing-machine characterization of continuity, the central role of analogies, and the use of infinite games—the list goes on. In particular, it shows his uncanny ability to identify the technology needed at each crucial stage of a project, even before it was clear exactly how that technology would be applied.
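The flavor of such a comparison game can be sketched in a few lines of Python. This is only an illustration under simplifying assumptions, not the construction from my dissertation: the sets A and B here are clopen (membership is decided by a finite prefix), the game is truncated to finitely many rounds, and all names are invented for the example. Player I builds a sequence x, player II builds a sequence y, and II wins a play if x belongs to A exactly when y belongs to B; a winning strategy for II, which may consult only I's moves so far, is precisely a continuous reduction of A to B.

```python
# Sketch of a finite-horizon comparison game between two clopen sets.
# All names are illustrative; real Wadge games run for infinitely many rounds.

def in_A(prefix):
    # A: sequences whose first term is 0 (decided by a length-1 prefix)
    return prefix[0] == 0

def in_B(prefix):
    # B: sequences whose second term is 0 (decided by a length-2 prefix)
    return prefix[1] == 0

def strategy_II(moves_I):
    # II copies I's moves with a one-step delay, opening with a dummy 0,
    # so I's first move x[0] becomes II's second move y[1].
    # Note y[n] depends only on x[0..n-1]: the strategy is continuous.
    return [0] + moves_I[:-1]

def II_wins(moves_I, rounds=5):
    x = moves_I[:rounds]
    y = strategy_II(x)
    return in_A(x) == in_B(y)  # II wins iff (x in A) <-> (y in B)

# The delayed-copy strategy wins against any play by I:
print(all(II_wins(list(p)) for p in [(0, 1, 2, 3, 4),
                                     (1, 0, 0, 0, 0),
                                     (5, 5, 5, 5, 5)]))
```

Because II's nth move depends only on finitely many of I's moves, a winning strategy for II is itself one of Addison's continuously operating machines, which is how the game ties complexity to continuity.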
At first I thought my infinite game was just a clever trick for cracking a difficult problem. Addison, however, encouraged me to study the game systematically, and that encouragement launched my PhD research. With his steady support and his crucial technical suggestions, I was eventually able to extract more and more structure until I arrived at a description of the order type of the Borel sets. By the time I finished, my dissertation ran to 350 typed pages.
Throughout this period Addison was remarkably generous with his time. Our meetings were not the standard one hour per week; I remember long discussions that went on for hours, covering not just my research but logic more broadly, including his recollections of conversations with Kurt Gödel. I was extraordinarily fortunate.
He was generous with his connections as well. He made sure I was introduced to the logic luminaries at Berkeley. I even took a seminar from the legendary Alfred Tarski. I will never forget an evening at Addison’s home where I met Kleene (of regular-expression fame) and Church (the inventor of the lambda calculus). Kleene had been Addison’s PhD advisor and Church’s student—and Addison himself was Church’s son-in-law.
In some ways Addison spoiled me: I became accustomed to being treated with respect and patience. He could tease, but he never disparaged me. I would later learn that this attitude is far from universal in academia.
When I left Berkeley, I moved into computer science and did not pursue my work on what are now called “Wadge degrees.” I suspect Addison was somewhat disappointed. Yet in my new career I drew constantly on what I had learned from him. For example, Edward Ashcroft and I designed the dataflow language Lucid. In dataflow programming, the central idea is a filter that transforms an input stream into an output stream. Streams are elements of the Baire space, and filters are precisely Addison’s continuously operating Turing machines.
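The filter idea can be sketched with Python generators (a loose analogy, not Lucid itself; the running-sum example is my own illustration): a filter consumes an input stream and produces an output stream, and each output term depends only on a finite prefix of the input, which is exactly the continuity condition.

```python
from itertools import count, islice

def running_sum(stream):
    """A dataflow-style filter: maps an input stream to the stream of its
    partial sums. Each output term depends only on a finite prefix of the
    input, so the filter is continuous in Addison's sense."""
    total = 0
    for x in stream:
        total += x
        yield total

# Feed it the stream 1, 2, 3, ... and inspect the first five outputs.
print(list(islice(running_sum(count(1)), 5)))  # [1, 3, 6, 10, 15]
```

Generators compose like dataflow pipelines: the output of one filter can be fed directly as the input stream of the next.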
To this day there is hardly a day when I am not reminded of something Addison taught me. Sometimes it is something small, like “never begin a sentence with a mathematical expression.” Sometimes it is something profound, such as “domains of infinite objects are simpler than domains of finite objects.”
I have supervised many PhD students of my own, and I have tried to emulate Addison’s methods. I have tried to be generous with my time and knowledge, to have high standards but never to disparage my students, and always to remind them of their distinguished academic ancestry and their remarkable “grand supervisor.”
John W. Addison Jr. is no longer with us, but his spirit lives on in his academic descendants, and certainly in me.