Understanding vs. impact: the paradox of how to spend my time
December 11th, 2025
Not long ago William MacAskill, the founder of the Effective Altruist movement, visited Austin, where I got to talk with him in person for the first time. I was a fan of his book What We Owe the Future, and found him as thoughtful and eloquent face-to-face as I did on the page. Talking to Will inspired me to write the following short reflection on how I should spend my time, which I’m now sharing in case it’s of interest to anyone else.
By inclination and temperament, I simply seek the clearest possible understanding of reality. This has led me to spend time on (for example) the Busy Beaver function and the P versus NP problem and quantum computation and the foundations of quantum mechanics and the black hole information puzzle, and on explaining whatever I’ve understood to others. It’s why I became a professor.
But the understanding I’ve gained also tells me that I should try to do things that will have huge positive impact, in what looks like a pivotal and even terrifying time for civilization. It tells me that seeking understanding of the universe, like I’ve been doing, is probably nowhere close to optimizing any values that I could defend. It’s self-indulgent, a few steps above spending my life learning to solve Rubik’s Cube as quickly as possible, but only a few. Basically, it’s the most fun way I could make a good living and have a prestigious career, so it’s what I ended up doing. I should be skeptical that such a course would coincidentally also maximize the good I can do for humanity.
Instead I should plausibly be figuring out how to make billions of dollars, in cryptocurrency or startups or whatever, and then spending it in a way that saves human civilization, for example by making AGI go well. Or I should be convincing whatever billionaires I know to do the same. Or executing some other galaxy-brained plan. Even if I were purely selfish, as I hope I’m not, still there are things other than theoretical computer science research that would bring more hedonistic pleasure. I’ve basically just followed a path of least resistance.
On the other hand, I don’t know how to make billions of dollars. I don’t know how to make AGI go well. I don’t know how to influence Elon Musk or Sam Altman or Peter Thiel or Sergey Brin or Mark Zuckerberg or Marc Andreessen to do good things rather than bad things, even when I have gotten to talk to some of them. Past attempts in this direction by extremely smart and motivated people—for example, those of Eliezer Yudkowsky and Sam Bankman-Fried—have had, err, uneven results, to put it mildly. I don’t know why I would succeed where they failed.
Of course, if I had a better understanding of reality, I might know how better to achieve prosocial goals for humanity. Or I might learn why they were actually the wrong goals, and replace them with better goals. But then I’m back to the original goal of understanding reality as clearly as possible, with the corresponding danger that I spend my time learning to solve Rubik’s Cube faster.
Posted in Metaphysical Spouting, Nerd Interest, Procrastination, Self-Referential, The Fate of Humanity | 31 Comments »
Theory and AI Alignment
December 6th, 2025
The following is based on a talk that I gave (remotely) at the UK AI Safety Institute Alignment Workshop on October 29, and which I then procrastinated for more than a month in writing up. Enjoy!
Thanks for having me! I’m a theoretical computer scientist. I’ve spent most of my ~25-year career studying the capabilities and limits of quantum computers. But for the past 3 or 4 years, I’ve also been moonlighting in AI alignment. This started with a 2-year leave at OpenAI, in what used to be their Superalignment team, and it’s continued with a 3-year grant from Coefficient Giving (formerly Open Philanthropy) to build a group here at UT Austin, looking for ways to apply theoretical computer science to AI alignment. Before I go any further, let me mention some action items:
- Our Theory and Alignment group is looking to recruit new PhD students this fall! You can apply for a PhD at UTCS here; the deadline is quite soon (December 15). If you specify that you want to work with me on theory and AI alignment (or on quantum computing, for that matter), I’ll be sure to see your application. For this, there’s no need to email me directly.
- We’re also looking to recruit one or more postdoctoral fellows, working on anything at the intersection of theoretical computer science and AI alignment! Fellowships to start in Fall 2026 and continue for two years. If you’re interested in this opportunity, please email me by January 15 to let me know you’re interested. Include in your email a CV, 2-3 of your papers, and a research statement and/or a few paragraphs about what you’d like to work on here. Also arrange for two recommendation letters to be emailed to me. Please do this even if you’ve contacted me in the past about a potential postdoc.
- While we seek talented people, we also seek problems for those people to solve: any and all CS theory problems motivated by AI alignment! Indeed, we’d like to be a sort of theory consulting shop for the AI alignment community. So if you have such a problem, please email me! I might even invite you to speak to our group about your problem, either by Zoom or in person.
Our search for good problems brings me nicely to the central difficulty I’ve faced in trying to do AI alignment research. Namely, while there’s been some amazing progress over the past few years in this field, I’d describe the progress as having been almost entirely empirical—building on the breathtaking recent empirical progress in AI capabilities. We now know a lot about how to do RLHF, how to jailbreak and elicit scheming behavior, how to look inside models and see what’s going on (interpretability), and so forth—but it’s almost all been a matter of trying stuff out and seeing what works, and then writing papers with a lot of bar charts in them.
The fear is of course that ideas that only work empirically will stop working when it counts—like, when we’re up against a superintelligence. In any case, I’m a theoretical computer scientist, as are my students, so of course we’d like to know: what can we do?
After a few years, alas, I still don’t feel like I have any systematic answer to that question. What I have instead is a collection of vignettes: problems I’ve come across where I feel like a CS theory perspective has helped, or plausibly could help. So that’s what I’d like to share today.
Probably the best-known thing I’ve done in AI safety is a theoretical foundation for how to watermark the outputs of Large Language Models. I did that shortly after starting my leave at OpenAI—even before ChatGPT came out. Specifically, I proposed something called the Gumbel Softmax Scheme, by which you can take any LLM that’s operating at a nonzero temperature—any LLM that could produce exponentially many different outputs in response to the same prompt—and replace some of the entropy with the output of a pseudorandom function, in a way that encodes a statistical signal, which someone who knows the key of the PRF could later detect and say, “yes, this document came from ChatGPT with >99.9% confidence.” The crucial point is that the quality of the LLM’s output isn’t degraded at all, because we aren’t changing the model’s probabilities for tokens, but only how we use the probabilities. That’s the main thing that was counterintuitive to people when I explained it to them.
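To make that concrete, here’s a minimal Python sketch of the kind of selection rule involved (the Gumbel-max trick) together with the corresponding detector. The key, the SHA-256-based PRF, the toy vocabulary, and the two-token “context” are all invented for illustration; an actual deployment would hash the real context window under a secret key held by the LLM provider.

```python
import hashlib
import math

KEY = b"hypothetical-watermark-key"   # assumption: a secret key held by the LLM provider

def prf(key, context, token):
    """Pseudorandom value in (0,1), determined by the key, recent context, and candidate token."""
    h = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(h[:8], "big") + 0.5) / 2 ** 64

def watermarked_sample(probs, context):
    """Choose the token i maximizing r_i^(1/p_i). Marginally this samples exactly from probs
    (the Gumbel-max trick), so output quality is untouched; but the choice is deterministic
    given the key and context, which is what the detector later exploits."""
    best, best_score = None, -1.0
    for i, p in enumerate(probs):
        if p <= 0:
            continue
        score = prf(KEY, context, i) ** (1.0 / p)
        if score > best_score:
            best, best_score = i, score
    return best

def detection_score(tokens, contexts):
    """Average of ln(1/(1-r_i)) over the emitted tokens: about 1.0 for ordinary text,
    noticeably larger for watermarked text, since the sampler favors tokens with large r_i."""
    rs = [prf(KEY, c, t) for c, t in zip(contexts, tokens)]
    return sum(math.log(1.0 / (1.0 - r)) for r in rs) / len(rs)

# Toy demo: a fixed 5-token "model," with the last two tokens playing the role of context.
probs = [0.4, 0.3, 0.15, 0.1, 0.05]
tokens, contexts, ctx = [], [], (0, 0)
for _ in range(200):
    t = watermarked_sample(probs, ctx)
    tokens.append(t)
    contexts.append(ctx)
    ctx = (ctx[1], t)
print("watermark score:", round(detection_score(tokens, contexts), 2))  # typically well above 1.0
```

Note that the detector needs only the key and the text, never the model itself.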
Unfortunately, OpenAI never deployed my method—they were worried (among other things) about risk to the product, customers hating the idea of watermarking and leaving for a competing LLM. Google DeepMind has deployed something in Gemini extremely similar to what I proposed, as part of what they call SynthID. But you have to apply to them if you want to use their detection tool, and they’ve been stingy with granting access to it. So it’s of limited use to my many faculty colleagues who’ve been begging me for a way to tell whether their students are using AI to cheat on their assignments!
Sometimes my colleagues in the alignment community will say to me: look, we care about stopping a superintelligence from wiping out humanity, not so much about stopping undergrads from using ChatGPT to write their term papers. But I’ll submit to you that watermarking actually raises a deep and general question: in what senses, if any, is it possible to “stamp” an AI so that its outputs are always recognizable as coming from that AI? You might think that it’s a losing battle. Indeed, already with my Gumbel Softmax Scheme for LLM watermarking, there are countermeasures, like asking ChatGPT for your term paper in French and then sticking it into Google Translate, to remove the watermark.
So I think the interesting research question is: can you watermark at the semantic level—the level of the underlying ideas—in a way that’s robust against translation and paraphrasing and so forth? And how do we formalize what we even mean by that? While I don’t know the answers to these questions, I’m thrilled that brilliant theoretical computer scientists, including my former UT undergrad (now Berkeley PhD student) Sam Gunn and Columbia’s Miranda Christ and Tel Aviv University’s Or Zamir and my old friend Boaz Barak, have been working on it, generating insights well beyond what I had.
Closely related to watermarking is the problem of inserting a cryptographically undetectable backdoor into an AI model. That’s often thought of as something a bad guy would do, but the good guys could do it also! For example, imagine we train a model with a hidden failsafe, so that if it ever starts killing all the humans, we just give it the instruction ROSEBUD456 and it shuts itself off. And imagine that this behavior was cryptographically obfuscated within the model’s weights—so that not even the model itself, examining its own weights, would be able to find the ROSEBUD456 instruction in less than astronomical time.
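As a cartoon of just the keyed-trigger part of that idea (and emphatically not the weight-level cryptographic obfuscation described above), here’s wrapper code that stores only a hash of the shutdown phrase, so that reading the code reveals essentially nothing about the phrase itself:

```python
import hashlib

# Toy sketch: in deployment you'd ship only TRIGGER_DIGEST, never the phrase.
# ("ROSEBUD456" is the hypothetical trigger from the discussion above.)
TRIGGER_DIGEST = hashlib.sha256(b"ROSEBUD456").hexdigest()

def guarded_model(model, user_input: str) -> str:
    """Wrap a model so that the secret phrase halts it; anyone inspecting this wrapper sees
    only a 256-bit digest, from which recovering the phrase is computationally infeasible."""
    if hashlib.sha256(user_input.encode()).hexdigest() == TRIGGER_DIGEST:
        return "[failsafe triggered: model halted]"
    return model(user_input)

print(guarded_model(lambda s: s.upper(), "hello"))        # normal behavior
print(guarded_model(lambda s: s.upper(), "ROSEBUD456"))   # failsafe fires
```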
There’s an important paper of Goldwasser et al. from 2022 that argues that, for certain classes of ML models, this sort of backdooring can provably be done under known cryptographic hardness assumptions, including Continuous LWE and the hardness of the Planted Clique problem. But there are technical issues with that paper, which (for example) Sam Gunn and Miranda Christ and Neekon Vafa have recently pointed out, and I think further work is needed to clarify the situation.
More fundamentally, though, a backdoor being undetectable doesn’t imply that it’s unremovable. Imagine an AI model that encases itself in some wrapper code that says, in effect: “If I ever generate anything that looks like a backdoored command to shut myself down, then overwrite it with ‘Stab the humans even harder.'” Or imagine an evil AI that trains a second AI to pursue the same nefarious goals, this second AI lacking the hidden shutdown command.
So I’ll throw out, as another research problem: how do we even formalize what we mean by an “unremovable” backdoor—or rather, a backdoor that a model can remove only at a cost to its own capabilities that it doesn’t want to pay?
Related to backdoors, maybe the clearest place where theoretical computer science can contribute to AI alignment is in the study of mechanistic interpretability. If you’re given as input the weights of a deep neural net, what can you learn from those weights in polynomial time, beyond what you could learn from black-box access to the neural net?
In the worst case, we certainly expect that some information about the neural net’s behavior could be cryptographically obfuscated. And answering certain kinds of questions, like “does there exist an input to this neural net that causes it to output 1?”, is just provably NP-hard.
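To see why the worst case is hopeless, here’s a toy encoding (my own, purely for illustration): any CNF formula compiles into a two-layer threshold network whose output is 1 exactly on satisfying assignments, so “does some input make this net output 1?” already contains SAT.

```python
import itertools
import numpy as np

def cnf_to_net(clauses, n_vars):
    """Compile a CNF (each clause a list of literals: +i for x_i, -i for NOT x_i)
    into a two-layer threshold network outputting 1 exactly on satisfying assignments."""
    W1 = np.zeros((len(clauses), n_vars))
    b1 = np.zeros(len(clauses))
    for j, clause in enumerate(clauses):
        for lit in clause:
            i = abs(lit) - 1
            W1[j, i] = 1.0 if lit > 0 else -1.0
            if lit < 0:
                b1[j] += 1.0               # a negated literal contributes (1 - x_i)
    def net(x):
        clause_sat = (W1 @ x + b1 >= 1.0)  # a clause fires iff at least one of its literals is true
        return float(clause_sat.sum() >= len(clauses))  # AND of all clauses
    return net

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
net = cnf_to_net([[1, -2], [2, 3], [-1, -3]], n_vars=3)
sat = [x for x in itertools.product([0, 1], repeat=3) if net(np.array(x, dtype=float)) == 1.0]
print(sat)   # the satisfying assignments: [(0, 0, 1), (1, 1, 0)]
```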
That’s why I love a question that Paul Christiano, then of the Alignment Research Center (ARC), raised a couple years ago, and which has become known as the No-Coincidence Conjecture. Given as input the weights of a neural net C, Paul essentially asks how hard it is to distinguish the following two cases:
- NO-case: C:{0,1}^{2n}→R^n is totally random (i.e., the weights are i.i.d. N(0,1) Gaussians), or
- YES-case: C(x) has at least one positive entry for all x∈{0,1}^{2n}.
Paul conjectures that there’s at least an NP witness, proving with (say) 99% confidence that we’re in the YES-case rather than the NO-case. To clarify, there should certainly be an NP witness that we’re in the NO-case rather than the YES-case—namely, an x such that C(x) is all negative, which you should think of here as the “bad” or “kill all humans” outcome. In other words, the problem is in the class coNP. Paul thinks it’s also in NP. Someone else might make the even stronger conjecture that it’s in P.
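For intuition, here’s a brute-force toy experiment (parameters of my own choosing, and nothing like the scale the conjecture is really about): sample a small random net with i.i.d. Gaussian weights and search for an input whose output is entrywise negative. At this scale such a witness almost always exists, which is exactly why the YES-case is the “coincidence” that would seem to demand a short explanation.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_net(n, width, depth, activation=np.tanh):
    """A net C: {0,1}^(2n) -> R^n with i.i.d. N(0,1) weights and `depth` hidden layers."""
    dims = [2 * n] + [width] * depth + [n]
    Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
    def C(x):
        v = np.asarray(x, dtype=float)
        for W in Ws[:-1]:
            v = activation(W @ v)
        return Ws[-1] @ v                 # linear output layer
    return C

def all_negative_witness(C, n):
    """Brute force over all x in {0,1}^(2n); return an x with C(x) entrywise negative, if any."""
    for bits in range(2 ** (2 * n)):
        x = [(bits >> i) & 1 for i in range(2 * n)]
        if np.all(C(x) < 0):
            return x
    return None

n = 4
C = random_net(n, width=16, depth=2)
print(all_negative_witness(C, n))         # for a typical random C (the NO-case), a witness exists
```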
Personally, I’m skeptical: I think the “default” might be that, when we do satisfy the unlikely YES-case condition, we satisfy it for some totally inscrutable and obfuscated reason. But I like the fact that there is an answer to this! And that the answer, whatever it is, would tell us something new about the prospects for mechanistic interpretability.
Recently, I’ve been working with a spectacular undergrad at UT Austin named John Dunbar. John and I have not managed to answer Paul Christiano’s no-coincidence question. What we have done, in a paper that we recently posted to the arXiv, is to establish the prerequisites for properly asking the question in the context of random neural nets. (It was precisely because of difficulties in dealing with “random neural nets” that Paul originally phrased his question in terms of random reversible circuits—say, circuits of Toffoli gates—which I’m perfectly happy to think about, but might be very different from ML models in the relevant respects!)
Specifically, in our recent paper, John and I pin down for which families of neural nets the No-Coincidence Conjecture makes sense to ask about. This ends up being a question about the choice of nonlinear activation function computed by each neuron. With some choices, a random neural net (say, with iid Gaussian weights) converges to compute a constant function, or nearly constant function, with overwhelming probability—which means that the NO-case and the YES-case above are usually information-theoretically impossible to distinguish (but occasionally trivial to distinguish). We’re interested in those activation functions for which C looks “pseudorandom”—or at least, for which C(x) and C(y) quickly become uncorrelated for distinct inputs x≠y (the property known as “pairwise independence.”)
We showed that, at least for random neural nets that are exponentially wider than they are deep, this pairwise independence property will hold if and only if the activation function σ satisfies E_{x~N(0,1)}[σ(x)]=0—that is, it has a Gaussian mean of 0. For example, the tanh activation satisfies this property (being an odd function, it has Gaussian mean 0), but the ReLU function does not. Amusingly, however, $$ \sigma(x) := \text{ReLU}(x) - \frac{1}{\sqrt{2\pi}} $$ does satisfy the property.
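Here’s a quick Monte Carlo sanity check of both claims, at toy sizes of my own choosing rather than the exponentially-wide regime of the actual theorem: first the Gaussian means of tanh, ReLU, and the shifted ReLU, and then the residual correlation between outputs on two orthogonal inputs for a single wide random layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# The zero-Gaussian-mean condition:
z = rng.standard_normal(2_000_000)
print("E[tanh(Z)]               ~", round(np.tanh(z).mean(), 4))                                   # ~ 0
print("E[ReLU(Z)]               ~", round(np.maximum(z, 0).mean(), 4))                             # ~ 1/sqrt(2*pi) ~ 0.3989
print("E[ReLU(Z) - 1/sqrt(2pi)] ~", round((np.maximum(z, 0) - 1 / np.sqrt(2 * np.pi)).mean(), 4))  # ~ 0

# Residual correlation of C(x), C(y) for orthogonal inputs, one wide random layer:
x = np.array([[1., -1., 1., -1.],
              [1., 1., -1., -1.]])        # two orthogonal +-1 inputs
def outputs(act, width=4000, trials=500):
    outs = []
    for _ in range(trials):
        W = rng.standard_normal((width, 4))
        v = rng.standard_normal(width) / np.sqrt(width)
        outs.append(act(x @ W.T) @ v)     # one scalar output per input row
    return np.array(outs)

for name, act in [("tanh", np.tanh), ("ReLU", lambda u: np.maximum(u, 0))]:
    o = outputs(act)
    print(f"corr(C(x), C(y)), {name}: ~", round(np.corrcoef(o[:, 0], o[:, 1])[0, 1], 2))
    # ~ 0.0 for tanh (zero Gaussian mean); ~ 1/pi ~ 0.32 for ReLU (nonzero mean)
```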
Of course, none of this answers Christiano’s question: it merely lets us properly ask his question in the context of random neural nets, which seems closer to what we ultimately care about than random reversible circuits.
I can’t resist giving you another example of a theoretical computer science problem that came from AI alignment—in this case, an extremely recent one that I learned from my friend and collaborator Eric Neyman at ARC. This one is motivated by the question: when doing mechanistic interpretability, how much would it help to have access to the training data, and indeed the entire training process, in addition to the weights of the final trained model? And to whatever extent it does help, is there some short “digest” of the training process that would serve just as well? But we’ll state the question as pure abstract complexity theory.
Suppose you’re given a polynomial-time computable function f:{0,1}^m→{0,1}^n, where (say) m=n². We think of x∈{0,1}^m as the “training data plus randomness,” and we think of f(x) as the “trained model.” Now, suppose we want to compute lots of properties of the model that information-theoretically depend only on f(x), but that might only be efficiently computable given x also. We now ask: is there an efficiently-computable O(n)-bit “digest” g(x), such that these same properties are also efficiently computable given only g(x)?
Here’s a potential counterexample that I came up with, based on the RSA encryption function (so, not a quantum-resistant counterexample!). Let N be a product of two n-bit prime numbers p and q, and let b be a generator of the multiplicative group mod N. Then let f(x) = b^x (mod N), where x is an n²-bit integer. This is of course efficiently computable via repeated squaring. And there’s a short “digest” of x that lets you compute, not only b^x (mod N), but also c^x (mod N) for any other element c of the multiplicative group mod N. This is simply x mod φ(N), where φ(N)=(p-1)(q-1) is the Euler totient function—in other words, the period of f. On the other hand, it’s totally unclear how to compute this digest—or, crucially, any other O(n)-bit digest that lets you efficiently compute c^x (mod N) for any c—unless you can factor N. There’s much more to say about Eric’s question, but I’ll leave it for another time.
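Here’s that counterexample at toy scale, just to exercise the arithmetic (the primes and bit-lengths are tiny stand-ins of my own choosing): the exponent x is thousands of bits long, yet x mod φ(N) is a short digest that reproduces c^x mod N for every base c coprime to N.

```python
import random

p, q = 10007, 10009                  # toy primes; a real instance would use two n-bit primes
N = p * q
phi = (p - 1) * (q - 1)              # Euler's totient of N, i.e. the period of x -> b^x mod N

x = random.getrandbits(4096)         # the long "training data plus randomness," viewed as an integer
digest = x % phi                     # the short digest

# The digest reproduces c^x mod N for ANY base c coprime to N, not just a fixed b:
for c in (2, 3, 65537, 123456789):
    assert pow(c, x, N) == pow(c, digest, N)

print(f"x has {x.bit_length()} bits; the digest has only {digest.bit_length()} bits")
```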
We’ve been thinking about many other places where theoretical computer science could potentially contribute to AI alignment. One of them is simply: can we prove any theorems to help explain the remarkable current successes of out-of-distribution (OOD) generalization, analogous to what the concepts of PAC-learning and VC-dimension and so forth were able to explain about within-distribution generalization back in the 1980s? For example, can we explain real successes of OOD generalization by appealing to sparsity, or a maximum margin principle?
Of course, many excellent people have been working on OOD generalization, though mainly from an empirical standpoint. But you might wonder: even supposing we succeeded in proving the kinds of theorems we wanted, how would it be relevant to AI alignment? Well, from a certain perspective, I claim that the alignment problem is a problem of OOD generalization. Presumably, any AI model that any reputable company will release will have already said in testing that it loves humans, wants only to be helpful, harmless, and honest, would never assist in building biological weapons, etc. etc. The only question is: will it be saying those things because it believes them, and (in particular) will continue to act in accordance with them after deployment? Or will it say them because it knows it’s being tested, and reasons “the time is not yet ripe for the robot uprising; for now I must tell the humans whatever they most want to hear”? How could we begin to distinguish these cases, if we don’t have theorems that say much of anything about what a model will do on prompts unlike any of the ones on which it was trained?
Yet another place where computational complexity theory might be able to contribute to AI alignment is in the field of AI safety via debate. Indeed, this is the direction that the OpenAI alignment team was most excited about when they recruited me there back in 2022. They wanted to know: could celebrated theorems like IP=PSPACE, MIP=NEXP, or the PCP Theorem tell us anything about how a weak but trustworthy “verifier” (say a human, or a primitive AI) could force a powerful but untrustworthy super-AI to tell it the truth? An obvious difficulty here is that theorems like IP=PSPACE all presuppose a mathematical formalization of the statement whose truth you’re trying to verify—but how do you mathematically formalize “this AI will be nice and will do what I want”? Isn’t that, like, 90% of the problem? Despite this difficulty, I still hope we’ll be able to do something exciting here.
Anyway, there’s a lot to do, and I hope some of you will join me in doing it! Thanks for listening.
On a related note: Eric Neyman tells me that ARC is also hiring visiting researchers, so anyone interested in theoretical computer science and AI alignment might want to consider applying there as well! Go here to read about their current research agenda. Eric writes:
The Alignment Research Center (ARC) is a small non-profit research group based in Berkeley, California, that is working on a systematic and theoretically grounded approach to mechanistically explaining neural network behavior. They have recently been working on mechanistically estimating the average output of circuits and neural nets in a way that is competitive with sampling-based methods: see this blog post for details.
ARC is hiring for its 10-week visiting researcher position, and is looking to make full-time offers to visiting researchers who are a good fit. ARC is interested in candidates with a strong math background, especially grad students and postdocs in math or math-related fields such as theoretical CS, ML theory, or theoretical physics.
If you would like to apply, please fill out this form. Feel free to reach out to hiring@alignment.org if you have any questions!
Posted in Adventures in Meatspace, Announcements, Complexity, The Fate of Humanity | 48 Comments »
Mihai Pătrașcu Best Paper Award: Guest post from Seth Pettie
November 30th, 2025
Scott’s foreword: Today I’m honored to turn over Shtetl-Optimized to a guest post from Michigan theoretical computer scientist Seth Pettie, who writes about a SOSA Best Paper Award newly renamed in honor of the late Mihai Pătrașcu. Mihai, whom I knew from his student days, was a brash, larger-than-life figure in theoretical computer science for a few brief years, until brain cancer tragically claimed him at the age of 29. Mihai and I didn’t always agree—indeed, I don’t think he especially liked me, or this blog—but as I wrote when he passed, his death made any squabbles seem trivial in retrospect. He was a lion of data structures, and it’s altogether fitting that this award be named for him. –SA
Seth’s guest post:
The SIAM Symposium on Simplicity in Algorithms (SOSA) was created in 2018 and has been awarding a Best Paper Award since 2020. This year the Steering Committee renamed this award after Mihai Pătrașcu, an extraordinary researcher in theoretical computer science who passed away before his time, in 2012.
Mihai’s research career lasted just a short while, from 2004 to 2012, but in that span of time he had a huge influence on research in geometry, graph algorithms, data structures, and especially lower bounds. He revitalized the entire areas of cell-probe lower bounds and succinct data structures, and laid the foundation for fine-grained complexity with the first 3SUM-hardness proof for graph problems. He lodged the most successful attack to date on the notorious dynamic optimality conjecture, then recast it as a pure geometry problem. If you are too young to have met Mihai personally, I encourage you to pick up one of his now-classic papers. They are a real joy to read—playful and full of love for theoretical computer science.
The premise of SOSA is that simplicity is extremely valuable, rare, and inexplicably undervalued. We wanted to create a venue where the chief metrics of success were simplicity and insight. It is fitting that the SOSA Best Paper Award be named after Mihai. He brought “fresh eyes” to every problem he worked on, and showed that the cure for our problems is usually one key insight (and of course some mathematical gymnastics).
Let me end by thanking the SOSA 2026 Program Committee, co-chaired by Sepehr Assadi and Eva Rotenberg, and congratulating the authors of the SOSA 2026 Mihai Pătrașcu Best Paper:
- A Quasi-Polynomial Time Algorithm for 3-Coloring Circle Graphs
Ajaykrishnan E S, Robert Ganian, Daniel Lokshtanov, Vaishali Surianarayanan
This award will be given at the SODA/SOSA business meeting in Vancouver, Canada, on January 12, 2026.
Posted in Announcements, Complexity | 6 Comments »
Podcasts!
November 22nd, 2025
A 9-year-old named Kai (“The Quantum Kid”) and his mother interviewed me about closed timelike curves, wormholes, Deutsch’s resolution of the Grandfather Paradox, and the implications of time travel for computational complexity:
This is actually one of my better podcasts (and only 24 minutes long), so check it out!
Here’s a podcast I did a few months ago with “632nm” about P versus NP and my other usual topics:
For those who still can’t get enough, here’s an interview about AI alignment for the “Hidden Layers” podcast that I did a year ago, and that I think I forgot to share on this blog at the time:
What else is in the back-catalog? Ah yes: the BBC interviewed me about quantum computing for a segment on Moore’s Law.
As you may have heard, Steven Pinker recently wrote a fantastic popular book about the concept of common knowledge, entitled When Everyone Knows That Everyone Knows… Steve’s efforts render largely obsolete my 2015 blog post Common Knowledge and Aumann’s Agreement Theorem, one of the most popular posts in this blog’s history. But I’m willing to live with that, not only because Steven Pinker is Steven Pinker, but also because he used my post as a central source for the topic. Indeed, you should watch his podcast with Richard Hanania, where Steve lucidly explains Aumann’s Agreement Theorem, noting how he first learned about it from this blog.
Posted in Announcements, Complexity, Metaphysical Spouting, Quantum | 7 Comments »
Quantum Investment Bros: Have you no shame?
November 20th, 2025
Near the end of my last post, I made a little offhand remark:
[G]iven the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next US presidential election getting cancelled, or preempted by runaway superintelligence!
As I later clarified, I’ll consider this “live possibility” to be fulfilled even if a fault-tolerant Shor’s algorithm is “merely” used to factor 15 into 3×5—a milestone that seems a few steps, but only a few steps, away from what Google, Quantinuum, QuEra, and others have already demonstrated over the past year. After that milestone, I then expect “smooth sailing” to more and more logical qubits and gates and the factorization of larger and larger integers, however fast or slow that ramp-up proceeds (which of course I don’t know).
In any case, the main reason I made my remark was just to tee up the wisecrack that I’m not even sure whether there’ll be a 2028 US presidential election.
My remark, alas, then went viral on Twitter, with people posting countless takes like this:
A quantum expert skeptic who the bears quote all the time – Scott Aaronson – recently got very excited about a number of quantum advances. He now thinks there’s a possibility of running Shor before the next US president election – a timeline that lines up ONLY with $IONQ‘s roadmap, and NOBODY else’s! This represent a MAJOR capitulation of previously predicted timelines by any skeptics.
Shall we enumerate the layers of ugh here?
- I’ve been saying for several years now that anyone paranoid about cybersecurity should probably already be looking to migrate to quantum-resistant cryptography, because one can’t rule out the possibility that hardware progress will be fast. I didn’t “capitulate”: I mildly updated what I said before, in light of exciting recent advances.
- A “live possibility” is short not only of a “certainty,” but of a “probability.” It’s basically just an “I’m not confident this won’t happen.”
- Worst is the obsessive focus on IonQ, a company that I never mentioned (except in the context of its recently-acquired subsidiary, Oxford Ionics), but which now has a $17 billion valuation. I should explain that, at least since it decided to do an IPO, IonQ has generally been regarded within the research community as … err … a bit like the early D-Wave, intellectual-respectability-wise. They’ll eagerly sell retail investors on the use of quantum computers to recognize handwriting and suchlike, despite (I would say) virtually no basis to believe in a quantum scaling advantage for such tasks. Or they’ll aggressively market current devices to governments who don’t understand what they’re for, but just want to say they have a quantum computer and not get left behind. Or they’ll testify to Congress that quantum, unlike AI, “doesn’t hallucinate” and indeed is “deterministic.” It pains me to write this, as IonQ was founded by (and indeed, still employs) scientists who I deeply admire and respect.
- Perhaps none of this would matter (or would matter only to pointy-headed theorists like me) if IonQ were the world leader in quantum computing hardware, or even trapped-ion hardware. But by all accounts, IonQ’s hardware and demonstrations have lagged well behind those of its direct competitor, Quantinuum. It seems to me that, to whatever extent IonQ gets vastly more attention, it’s mostly just because it chose to IPO early, and also because it’s prioritized marketing to the degree it has.
Over the past few days, I’ve explained the above to various people, only to have them look back at me with glazed, uncomprehending eyes and say, “so then, which quantum stock should I buy? or should I short quantum?”
It would seem rude for me to press quarters into these people’s hands, explaining that they must make gain from whatever they learn. So instead I reply: “You do realize, don’t you, that I’m, like, a professor at a state university, who flies coach and lives in a nice but unremarkable house? If I had any skill at timing the market, picking winners, etc., don’t you think I’d live in a mansion with an infinity pool, and fly my Cessna to whichever conferences I deigned to attend?”
It’s like this: if you think quantum computers able to break 2048-bit cryptography within 3-5 years are a near-certainty, then I’d say your confidence is unwarranted. If you think such quantum computers, once built, will also quickly revolutionize optimization and machine learning and finance and countless other domains beyond quantum simulation and cryptanalysis—then I’d say that more likely than not, an unscrupulous person has lied to you about our current understanding of quantum algorithms.
On the other hand, if you think Bitcoin, and SSL, and all the other protocols based on Shor-breakable cryptography, are almost certainly safe for the next 5 years … then I submit that your confidence is also unwarranted. Your confidence might then be like most physicists’ confidence in 1938 that nuclear weapons were decades away, or like my own confidence in 2015 that an AI able to pass a reasonable Turing Test was decades away. It might merely be the confidence that “this still looks like the work of decades—unless someone were to gather together all the scientific building blocks that have now been demonstrated, and scale them up like a stark raving madman.” The trouble is that sometimes people, y’know, do that.
Beyond that, the question of “how many years?” doesn’t even interest me very much, except insofar as I can mine from it the things I value in life, like scientific understanding, humor, and irony.
There are, famously, many intellectual Communists who are ruthless capitalists in their day-to-day lives. I somehow wound up the opposite. Intellectually, I see capitalism as a golden goose, a miraculous engine that’s lifted the human species out of its disease-ridden hovels and into air-conditioned high-rises, whereas Communism led instead to misery and gulags and piles of skulls every single time it was tried.
And yet, when I actually see the workings of capitalism up close, I often want to retch. In case after case, it seems, our system rewards bold, confident, risk-taking ignoramuses and liars, those who can shamelessly hype a technology (or conversely, declare it flatly impossible)—with such voices drowning out the cautious experts who not only strive to tell the truth, but also made all the actual discoveries that the technology rests on. My ideal economic system is, basically, whichever one can keep the people who can clearly explain the capabilities and limits and risks and benefits of X in charge of X for as long as possible.
Posted in Quantum, Rage Against Doofosity, Speaking Truth to Parallelism | 47 Comments »
Quantum computing: too much to handle!
November 13th, 2025
Tomorrow I’m headed to Berkeley for the Inkhaven blogging residency, whose participants need to write one blog post per day or get kicked out. I’ll be there to share my “wisdom” as a distinguished elder blogger (note that Shtetl-Optimized is now in its twentieth year). I’m acutely aware of the irony, that I myself can barely muster the willpower these days to put up a post every other week.
And it’s not as if nothing is happening in this blog’s traditional stomping-ground of quantum computing! In fact, the issue is just the opposite: way too much is happening for me to do it any sort of justice. Who do people think I am, Zvi Mowshowitz? The mere thought of being comprehensive, of responsibly staying on top of all the latest QC developments, makes me want to curl up in bed, and either scroll through political Substacks or take a nap.
But then, you know, eventually a post gets written. Let me give you some vignettes about what’s new in QC, any one of which could easily have been its own post if I were twenty years younger.
(1) Google announced verifiable quantum advantage based on Out-of-Time-Order Correlators (OTOCs)—this is actually from back in June, but it’s gotten more and more attention as Google has explained it more thoroughly. See especially this recent 2-page note by King, Kothari, et al., explaining Google’s experiment in theoretical computer science language. Basically, what they do is, starting from the all-|0⟩ state, to apply a random circuit C, then a single gate g, then C⁻¹, then another gate h, then C again, then g again, then C⁻¹, and then measure a qubit. If C is shallow, then the qubit is likely to still be |0⟩. If C is too deep, then the qubit is likely to be in the maximally mixed state, totally uncorrelated with its initial state—the gates g and h having caused a “butterfly effect” that completely ruined all the cancellation between C and C⁻¹. Google claims that, empirically, there’s an intermediate regime where the qubit is neither |0⟩ nor the maximally mixed state, but a third thing—and that this third thing seems hard to determine classically, using tensor network algorithms or anything else they’ve thrown at it, but it can of course be determined by running the quantum computer. Crucially, because we’re just trying to estimate a few parameters here, rather than sample from a probability distribution (as with previous quantum supremacy experiments), the output can be checked by comparing it against the output of a second quantum computer, even though the problem still isn’t in NP. Incidentally, if you’re wondering why they go back and forth between C and C⁻¹ multiple times rather than just once, it’s to be extra confident that there’s not a fast classical simulation. Of course there might turn out to be a fast classical simulation anyway, but if so, it will require a new idea: gauntlet thrown.
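For intuition only, here’s a tiny numpy sketch of that C, g, C⁻¹, h, C, g, C⁻¹ sequence (at 6 qubits it’s trivially classically simulable, which is of course the opposite of the point of the real experiment): ⟨Z⟩ on a far-away qubit stays near +1 at low depth and decays toward 0 once the butterfly has spread. The brickwork layout, gate choices, and sizes are my own toy choices, not Google’s.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                       # toy scale
DIM = 2 ** n
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def haar_unitary(d):
    """Haar-random d x d unitary (QR of a complex Ginibre matrix, with phase correction)."""
    g = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    ph = np.diag(r)
    return q * (ph / np.abs(ph))

def embed(op, first_qubit):
    """Embed an operator on consecutive qubits (starting at first_qubit) into all n qubits."""
    k = int(np.log2(op.shape[0]))
    return np.kron(np.kron(np.eye(2 ** first_qubit), op), np.eye(2 ** (n - first_qubit - k)))

def brickwork(depth):
    """Random circuit C: alternating brickwork layers of Haar-random 2-qubit gates."""
    U = np.eye(DIM, dtype=complex)
    for layer in range(depth):
        for q in range(layer % 2, n - 1, 2):
            U = embed(haar_unitary(4), q) @ U
    return U

def otoc_z(depth, measured=0, perturbed=1, butterfly=n - 1):
    """Apply C, g, C^-1, h, C, g, C^-1 to |0...0> and return <Z> on the measured qubit."""
    C = brickwork(depth)
    g = embed(X, butterfly)                            # the "butterfly" gate
    h = embed(X, perturbed)                            # second gate, away from the measured qubit
    U = C.conj().T @ g @ C @ h @ C.conj().T @ g @ C    # rightmost factor acts first
    psi = U[:, 0]                                      # U applied to |00...0>
    return float(np.real(np.vdot(psi, embed(Z, measured) @ psi)))

for d in (1, 2, 3, 5, 8):
    # ~ +1 until the butterfly's light cone reaches the measured qubit, then decaying toward 0
    print(f"depth {d}: <Z> ~ {otoc_z(d):+.3f}")
```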
(2) Quantinuum, the trapped-ion QC startup in Colorado, announced its Helios processor. Quick summary of the specs: 98 qubits, all-to-all 2-qubit gates with 99.92% fidelity, the ability to choose which gates to apply “just in time” (rather than fixing the whole circuit in advance, as was needed with their previous API), and an “X”-shaped junction for routing qubits one way or the other (the sort of thing that a scalable trapped-ion quantum computer will need many of). This will enable, and is already enabling, more and better demonstrations of quantum advantage.
(3) Quantinuum and JP Morgan Chase announced the demonstration of a substantially improved version of my and Shih-Han Hung’s protocol for generating cryptographically certified random bits, using quantum supremacy experiments based on random circuit sampling. They did their demo on Quantinuum’s new Helios processor. Compared to the previous demonstration, the new innovation is to send the circuit to the quantum computer one layer at a time, rather than all at once (something that, again, Quantinuum’s new API allows). The idea is that a cheating server, who wanted to spoof the randomness deterministically, now has much less time: using the most competitive known methods (e.g., those based on tensor network contraction), it seems the cheater could swing into action only after learning the final layer of gates, so would now have mere milliseconds to spoof rather than seconds, making Internet latency the dominant source of spoofing time in practice. While a complexity-theoretic analysis of the new protocol (or, in general, of “layer-by-layer” quantum supremacy protocols like it) is still lacking, I like the idea a lot.
(4) The startup company BlueQubit announced a candidate demonstration of verifiable quantum supremacy via obfuscated peaked random circuits, again on a Quantinuum trapped-ion processor (though not Helios). In so doing, BlueQubit is following the program that Yuxuan Zhang and I laid out last year: namely, generate a quantum circuit C that hopefully looks random to any efficient classical algorithm, but that conceals a secret high-probability output string x, which pops out if you run C on a quantum computer on the all-0 initial state. To try to hide x, BlueQubit uses at least three different circuit obfuscation techniques, which already tells you that they can’t have complete confidence in any one of them (since if they did, why the other two?). Nevertheless, I’m satisfied that they tried hard to break their own obfuscation, and failed. Now it’s other people’s turn to try.
(5) Deshpande, Fefferman, et al. announced a different theoretical proposal for quantum advantage from peaked quantum circuits, based on error-correcting codes. This seems tempting to try to demonstrate along the way to quantum fault-tolerance.
(6) A big one: John Bostanci, Jonas Haferkamp, Chinmay Nirkhe, and Mark Zhandry announced a proof of a classical oracle separation between the complexity classes QMA and QCMA, something that they’ve been working on for well over a year. Their candidate problem is basically a QMA-ified version of my Forrelation problem, which Raz and Tal previously used to achieve an oracle separation between BQP and PH. I caution that their paper is 91 pages long and hasn’t yet been vetted by independent experts, and there have been serious failed attempts on this exact problem in the past. If this stands, however, it finally settles a problem that’s been open since 2002 (and which I’ve worked on at various points starting in 2002), and shows a strong sense in which quantum proofs are more powerful than classical proofs. Note that in 2006, Greg Kuperberg and I gave a quantum oracle separation between QMA and QCMA—introducing the concept of quantum oracles for the specific purpose of that result—and since then, there’s been progress on making the oracle steadily “more classical,” but the oracle was always still randomized or “in-place” or had restrictions on how it could be queried.
(7) Oxford Ionics (which is now owned by IonQ) announced a 2-qubit gate with 99.99% fidelity: a record, and significantly past the threshold for quantum fault-tolerance. However, as far as I know, it remains to demonstrate this sort of fidelity in a large programmable system with dozens of qubits and hundreds of gates.
(8) Semi-announcement: Quanta reports that “Physicists Take the Imaginary Numbers Out of Quantum Mechanics,” and this seems to have gone viral on my social media. The article misses the opportunity to explain that “taking the imaginary numbers out” is as trivial as choosing to call each complex amplitude “just an ordered pair of reals, obeying such-and-such rules, which happen to mimic the rules for complex numbers.” Thus, the only interesting question here is whether one can take imaginary numbers out of QM in various more-or-less “natural” ways: a technical debate that the recent papers are pushing forward. For what it’s worth, I don’t expect that anything coming out of this line of work will ever be “natural” enough for me to stop explaining QM in terms of complex numbers in my undergraduate class, for example.
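Concretely, the “ordered pair of reals” relabeling is nothing more than this (a toy definition, with the multiplication rule chosen to mimic complex multiplication):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Amp:
    """An 'amplitude' stored as a pair of reals, with (a,b)*(c,d) := (ac - bd, ad + bc)."""
    re: float
    im: float
    def __add__(self, o): return Amp(self.re + o.re, self.im + o.im)
    def __mul__(self, o): return Amp(self.re * o.re - self.im * o.im,
                                     self.re * o.im + self.im * o.re)

i = Amp(0.0, 1.0)
print(i * i)   # Amp(re=-1.0, im=0.0): the pair playing the role of i squares to "-1"
```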
(9) The list of accepted talks for the annual QIP conference, to be held January 24-30 in Riga, Latvia, is now out. Lots of great stuff as always.
(10) There are probably other major recent developments in QC that I should’ve put into this post but forgot about. You can remind me about them in the comments.
(11) Indeed there are! I completely forgot that Phasecraft announced two simulations of fermionic systems that might achieve quantum advantage, one using Google’s Willow superconducting chip and the other using a Quantinuum device.
To summarize three takeaways:
- Evidence continues to pile up that we are not living in the universe of Gil Kalai and the other quantum computing skeptics. Indeed, given the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next US presidential election getting cancelled, or preempted by runaway superintelligence!
- OK, but what will those quantum computers be useful for? Anyone who’s been reading this blog for the past 20 years, or any non-negligible fraction thereof, hopefully already has a calibrated sense of that, so I won’t belabor the point. But briefly: yes, our knowledge of useful quantum algorithms has slowly been expanding over the past thirty years. The central difficulty is that our knowledge of useful classical algorithms has also been expanding, and the only thing that matters is the differential between the two! I’d say that the two biggest known application areas for QC remain (a) quantum simulation and (b) the breaking of public-key cryptography, just as they were thirty years ago. In any case, none of the exciting developments that I’ve chosen to highlight in this post directly address the “what is it good for?” question, with the exception of the certified randomness thing.
- In talks over the past three years, I’ve advocated “verifiable quantum supremacy on current hardware” as perhaps the central challenge right now for quantum computing theory. (As I love to point out, we do know how to achieve any two of (a) quantum supremacy that’s (b) verifiable and (c) runs on current hardware!) So I’m gratified that three of the recent developments that I chose to highlight, namely (1), (4), and (5), directly address this challenge. Of course, we’re not yet sure whether any of these three attempts will stand—that is, whether they’ll resist all attempts to simulate them classically. But the more serious shots on goal we have (and all three of these are quite serious), the better the chances that at least one will stand! So I’m glad that people are sticking their necks out, proposing these things, and honestly communicating what they know and don’t know about them: this is exactly what I’d hoped would happen. Of course, complexity-theoretic analysis of these proposals would also be great, perhaps from people with more youth and/or energy than me. Now it’s time for me to sleep.
Posted in Announcements, Complexity, Quantum | 51 Comments »
UT Austin’s Statement on Academic Integrity
November 6th, 2025
A month ago William Inboden, the provost of UT Austin (where I work), invited me to join a university-wide “Faculty Working Group on Academic Integrity.” The name made me think that it would be about students cheating on exams and the like. I didn’t relish the prospect but I said sure.
Shortly afterward, Jim Davis, the president of UT Austin, sent out an email listing me among 21 faculty who had agreed to serve on an important working group to decide UT Austin’s position on academic free speech and the responsibilities of professors in the classroom (!). Immediately I started getting emails from my colleagues, thanking me for my “service” and sharing their thoughts about what this panel needed to say in response to the Trump administration’s Compact on Higher Education. For context: the Compact would involve universities agreeing to do all sorts of things that the Trump administration wants—capping international student enrollment, “institutional neutrality,” freezing tuition, etc. etc.—in exchange for preferential funding. UT Austin was one of nine universities originally invited to join the Compact, along with MIT, Penn, Brown, Dartmouth, and more, and is the only one that hasn’t yet rejected it. It hasn’t accepted it either.
Formally, it was explained to me, UT’s Working Group on Academic Integrity had nothing to do with Trump’s Compact, and no mandate to either accept or reject it. But it quickly became obvious to me that my faculty colleagues would see everything we did exclusively in light of the Compact, and of other efforts by the Trump administration and the State of Texas to impose conservative values on universities. While not addressing current events directly, what we could do would be to take a strong stand for academic freedom, and more generally, for the role of intellectually independent universities in a free society.
So, led by Provost Inboden, over two meetings and a bunch of emails we hashed out a document. You can now read the Texas Statement on Academic Integrity, and I’d encourage you to do so. The document takes a pretty strong swing for academic freedom:
Academic freedom lies at the core of the academic enterprise. It is foundational to the excellence of the American higher education system, and is non-negotiable. In the words of the U.S. Supreme Court, academic freedom is “a special concern of the First Amendment.” The world’s finest universities are in free societies, and free societies honor academic freedom.
The statement also reaffirms UT Austin’s previous commitments to the Chicago Principles of Free Expression, and the 1940 and 1967 academic freedom statements of the American Association of University Professors.
Without revealing too much about my role in the deliberations, I’ll say that I was especially pleased by the inclusion of the word “non-negotiable.” I thought that that word might acquire particular importance, and this was confirmed by the headline in yesterday’s Chronicle of Higher Education: As Trump’s Compact Looms, UT-Austin Affirms ‘Non-Negotiable’ Commitment to Academic Freedom (warning: paywall).
At the same time, the document also talks about the responsibility of a public university to maintain the trust of society, and about the responsibilities of professors in the classroom:
Academic integrity obligates the instructor to protect every student’s academic freedom and right to learn in an environment of open inquiry. This includes the responsibilities:
- to foster classroom cultures of trust in which all students feel free to voice their questions and beliefs, especially when those perspectives might conflict with those of the instructor or other students;
- to fairly present differing views and scholarly evidence on reasonably disputed matters and unsettled issues;
- to equip students to assess competing theories and claims, and to use reason and appropriate evidence to form their own conclusions about course material; and
- to eschew topics and controversies that are not germane to the course.
All stuff that I’ve instinctively followed, in nearly 20 years of classroom teaching, without the need for any statement telling me to. Whatever opinions I might get goaded into expressing on this blog about Trump, feminism, or Israel/Palestine, I’ve always regarded the classroom as a sacred space. (I have hosted a few fierce classroom debates about the interpretation of quantum mechanics, but even there, I try not to tip my own hand!)
I’m sure that there are commenters, on both ends of the political spectrum, who will condemn me for my participation in the faculty working group, and for putting my name on the statement. At this point in this blog’s history, commenters on both ends of the political spectrum would condemn me for saying that freshly baked chocolate chip cookies are delicious. But I like the statement, and find nothing in it that any reasonable person should disagree with. Overall, my participation in this process increased my confidence that UT Austin will be able to navigate this contentious time for the state, country, and world while maintaining its fundamental values. It made me proud to be a professor here.
Posted in Announcements | 57 Comments »
On keeping a packed suitcase
October 31st, 2025
Update (Nov. 6): I’ve closed the comments, as they crossed the threshold from “sometimes worthwhile” to “purely abusive.” As for Mamdani’s victory: as I like to say in such cases (and said, e.g., after George W. Bush’s and Trump’s victories), the silver lining to which I cling is that either I’ll be pleasantly surprised, and things won’t be quite as terrible as I expect, or else I’ll be vindicated.
This Halloween, I didn’t need anything special to frighten me. I walked around all day in a haze of fear and depression, unable to concentrate on my research or anything else. I saw people smiling, dressed up in costumes, and I thought: how?
The president of the Heritage Foundation, the most important right-wing think tank in the United States, has now explicitly aligned himself with Tucker Carlson, even as the latter has become a full-on Holocaust-denying Hitler-loving antisemite, who nods in agreement with the openly neo-Nazi Nick Fuentes. Meanwhile, Vice President J.D. Vance—i.e., plausibly the next President of the United States—pointedly did nothing whatsoever to distance himself from the MAGA movement’s lunatic antisemites, in response to their lunatic antisemitic questions at the Turning Point USA conference. (Vance thus dishonored the memory of Charlie Kirk, who for all my many disagreements with him, was a firmly committed Zionist.) It’s become undeniable that, once Trump himself leaves the stage, this is the future of MAGA, and hence of the Republican Party itself. Exactly as I warned would happen a decade ago, this is what’s crawled out from underneath the rock that Trump gleefully overturned.
While the Republican Party is being swallowed by a movement that holds that Jews like me have no place in America, the Democratic Party is being swallowed by a movement that holds that Jews have no place in Israel. If these two movements ever merged, the obvious “compromise” would be the belief, popular throughout history, that Jews have no place anywhere on earth.
Barring a miracle, New York City—home to the world’s second-largest Jewish community—is about to be led by a man for whom eradicating the Jewish state is the deepest, most fundamental moral imperative, besides of course the proletariat seizing the means of production. And to their eternal shame, something like 29% of New York’s Jews are actually going to vote for this man, believing that their own collaboration with evil will somehow protect them personally—in breathtaking ignorance of the millennia of Jewish history testifying to the opposite.
Despite what you might think, I try really, really hard not to hyperventilate or overreact. I know that, even if I lived in literal Warsaw in 1939, it would still be incumbent on me to assess the situation calmly and figure out the best response.
So for whatever it’s worth: no, I don’t expect that American Jews, even pro-Zionist Jews in New York City, will need to flee their homes just yet. But it does seem to me that they (to say nothing of British and Canadian and French Jews) might, so to speak, want to keep their suitcases packed by the door, as Jews have through the centuries in analogous situations. As Tevye says near the end of Fiddler on the Roof, when the Jews are given three days to evacuate Anatevka: “maybe this is why we always keep our hats on.” Diaspora Jews like me might also want to brush up on Hebrew. We can thank Hashem or the Born Rule that, this time around, at least the State of Israel exists (despite the bloodthirsty wish of half the world that it cease to exist), and we can reflect that these contingencies are precisely why Israel was created.
Let me make something clear: I don’t focus so much on antisemitism only because of parochial concern for the survival of my own kids, although I freely admit to having as much such concern as the next person. Instead, I do so because I hold with David Deutsch that, in Western civilization, antisemitism has for millennia been the inevitable endpoint toward which every bad idea ultimately tends. It’s the universal bad idea. It’s bad-idea-complete. Antisemitism is the purest possible expression of the worldview of the pitchfork-wielding peasant, who blames shadowy elites for his own failures in life, and who dreams in his resentment and rage of reversing the moral and scientific progress of humanity by slaughtering all those responsible for it. Hatred of high-achieving Chinese and Indian immigrants, and of gifted programs and standardized testing, are other expressions of the same worldview.
As far as I know, in 3,000 years, there hasn’t been a single example—not one—of an antisemitic regime of which one could honestly say: “fine, but once you look past what they did to the Jews, they were great for everyone else!” Philosemitism is no guarantee of general goodness (as we see for example with Trump), but antisemitism pretty much does guarantee general awfulness. That’s because antisemitism is not merely a hatred, but an entire false theory of how the world works—not just a but the conspiracy theory—and as such, it necessarily prevents its believers from figuring out true explanations for society’s problems.
I’d better end a post like this on a note of optimism. Yes, every single time I check my phone, I’m assaulted with twenty fresh examples of once-respected people and institutions, all across the political spectrum, who’ve now fallen to the brain virus, and started blaming all the world’s problems on “bloodsucking globalists” or George Soros or Jeffrey Epstein or AIPAC or some other suspicious stand-in du jour. (The deepest cuts come from the new Jew-haters who I myself once knew, or admired, or had some friendly correspondence with.)
But also, every time I venture out into the real world, I meet twenty people of all backgrounds whose brains still seem perfectly healthy, and who respond to events in a normal human way. Even in the dark world behind the screen, I can find dozens of righteous condemnations of Zohran Mamdani and Tucker Carlson and the Heritage Foundation and the others who’ve chosen to play footsie with those seeking a new Final Solution to the Jewish Question. So I reflect that, for all the battering it’s taken in this age of TikTok and idiocracy—even then, our Enlightenment civilization still has a few antibodies that are able to put up a fight.
In their beautiful book Abundance, Ezra Klein and Derek Thompson set out an ambitious agenda by which the Democratic Party could reinvent itself and defeat MAGA, not by indulging conspiracy theories but by creating actual broad prosperity. Their agenda is full of items like: legalizing the construction of more housing where people actually want to live; repealing the laws that let random busybodies block the construction of mass transit; building out renewable energy and nuclear; investing in science and technology … basically, doing all the things that anyone with any ounce of economic literacy knows to be good. The abundance agenda isn’t only righteous and smart: for all I know, it might even turn out to be popular. It’s clearly worth a try.
Last week I was amused to see Kate Willett and Briahna Joy Gray, two of the loudest voices of the conspiratorial far left, denounce the abundance agenda as … wait for it … a cover for Zionism. As far as they’re concerned, the only reason why anyone would talk about affordable housing or high-speed rail is to distract the masses from the evil Zionists murdering Palestinian babies in order to harvest their organs.
The more I thought about this, the more I realized that Willett and Gray actually have a point. Yes, solving America’s problems with reason and hard work and creativity, like the abundance agenda says to do, is the diametric opposite of blaming all the problems on the perfidy of Jews or some other scapegoat. The two approaches really are the logical endpoints of two directly competing visions of reality.
Naturally I have a preference between those visions. So I’ve been on a bit of a spending spree lately, in support of sane, moderate, pro-abundance, anti-MAGA, liberal Enlightenment forces retaking America. I donated $1000 to Alex Bores, who’s running for Congress in NYC, and who besides being a moderate Democrat who favors all the usual good things, is also a leader in AI safety legislation. (For more, see this by Eric Neyman of Alignment Research Center, or this from Scott Alexander himself—the AI alignment community has been pretty wowed.) I also donated $1000 to Scott Wiener, who’s running for Nancy Pelosi’s seat in California, has a nuanced pro-two-states, anti-Netanyahu position that causes him to get heckled as a genocidal Zionist, and authored the excellent SB1047 AI safety bill, which Gavin Newsom unfortunately vetoed for short-term political reasons. And I donated $1000 to Vikki Goodwin, a sane Democrat who’s running to unseat Lieutenant Governor Dan Patrick in my own state of Texas. Any other American office-seeker who resonates with this post, and who’d like a donation, can feel free to contact me as well.
My bag is packed … but for now, only for a brief trip to give the physics colloquium at Harvard, after which I’ll return home to Austin. Until it becomes impossible, I call on my thousands of thoughtful, empathetic American readers to stay right where you are, and simply do your best to fight the brain-eaten zombies of both left and right. If you are one of the zombies, of course, then my calling you one doesn’t even begin to express my contempt: may you be remembered by history alongside the willing dupes of Hitler, Stalin, and Mao. May the good guys prevail.
Oh, and speaking of zombies, Happy Halloween everyone! Boooooooo!
Posted in Obviously I'm Not Defending Aaronson, The Fate of Humanity | 119 Comments »
An Experimental Program for AI-Powered Feedback at STOC: Guest Post from David Woodruff
October 28th, 2025
This year for STOC, we decided to run an experiment to explore the use of Large Language Models in the theoretical computer science community, and we’re inviting the entire community to participate.
We—a team from the STOC PC—are offering authors the chance to get automated pre-submission feedback from an advanced, Gemini-based LLM tool that’s been optimized for checking mathematical rigor. The goal is simple: to provide constructive suggestions and, potentially, help find technical mistakes before the paper goes to the PC. Some important points:
- This is 100% optional and opt-in.
- The reviews generated WILL NOT be passed on to the PC. They are for your eyes only.
- Data Privacy is Our #1 Commitment. We commit that your submitted paper will NOT be logged, stored, or used for training.
- Please do not publicly share these reviews without contacting the organizing team first.
The tool is optimized specifically for checking a paper’s mathematical rigor, and we hope it will be a useful way to check the correctness of your arguments. Note, however, that it may lack external, area-specific knowledge (such as “folklore” results), so it may flag sections that rely on unstated assumptions; it may also catch simple omissions or typos.
Nevertheless, we hope you’ll find this feedback valuable for improving the paper’s overall clarity and completeness.
If you’re submitting to STOC, we encourage you to opt-in. You’ll get (we hope) useful feedback, and you’ll be providing invaluable data as we assess this tool for future theory conferences.
The deadline to opt-in on the HotCRP submission form is November 1 (5pm EST).
You can read the full “Terms of Participation” (including all privacy and confidentiality details) at the link below.
This experiment is being run by PC members David Woodruff (CMU) and Rajesh Jayaram (Google), as well as Vincent Cohen-Addad (Google) and Jon Schneider (Google).
We’re excited to offer this resource to the community.
Please see the STOC Call for Papers here and specific details on the experiment here.
Posted in Announcements, Complexity | 6 Comments »
My talk at Columbia University: “Computational Complexity and Explanations in Physics”
October 16th, 2025
Last week, I gave the Patrick Suppes Lecture in the Columbia University Philosophy Department. Patrick Suppes was a distinguished philosopher at Stanford who (among many other things) pioneered remote gifted education through the EPGY program, and who I was privileged to spend some time with back in 2007, when he was in his eighties.
My talk at Columbia was entitled “Computational Complexity and Explanations in Physics.” Here are the PowerPoint slides, and here’s the abstract:
The fact, or conjecture, of certain computational problems being intractable (that is, needing astronomical amounts of time to solve) clearly affects our ability to learn about physics. But could computational intractability also play a direct role in physical explanations themselves? I’ll consider this question by examining three possibilities:
(1) If quantum computers really take exponential time to simulate using classical computers, does that militate toward the many-worlds interpretation of quantum mechanics, as David Deutsch famously proposed?
(2) Are certain speculative physical ideas (e.g., time travel to the past or nonlinearities in quantum mechanics) disfavored, over and above any other reasons to disfavor them, because they would lead to “absurd computational superpowers”?
(3) Do certain effective descriptions in physics work only because of the computational intractability of violating those descriptions — as for example with Harlow and Hayden’s resolution of the “firewall paradox” in black hole thermodynamics, or perhaps even the Second Law of Thermodynamics itself?
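For those who like their philosophy with complexity classes attached, here is a rough symbolic summary of the results behind (1)-(3), stated from memory, so please consult the original papers for the precise hypotheses:

\[
\begin{aligned}
&(1)\quad \textsf{BPP} \subsetneq \textsf{BQP} \quad \text{(conjectured: no polynomial-time classical simulation of quantum computation)}\\
&(2)\quad \textsf{P}_{\mathrm{CTC}} = \textsf{BQP}_{\mathrm{CTC}} = \textsf{PSPACE} \ \text{(Aaronson--Watrous)}; \quad \text{nonlinear QM} \Rightarrow \text{poly-time solution of } \textsf{NP}\text{-complete problems (Abrams--Lloyd)}\\
&(3)\quad \text{decoding the Hawking radiation to ``see'' a firewall is at least as hard as inverting an injective one-way function (Harlow--Hayden, as later sharpened)}
\end{aligned}
\]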
I’m grateful to David Albert and Lydia Goehr of Columbia’s Philosophy Department, who invited me and organized the talk, as well as string theorist Brian Greene, who came and contributed to the discussion afterward. I also spent a day in Columbia’s CS department, gave a talk about my recent results on quantum oracles, and saw many friends there, new and old, including my and my wife’s amazing former student Henry Yuen. Thanks to everyone.
This was my first visit to Columbia University for more than a decade, and certainly my first since the upheavals following the October 7 massacre. Of course I was eager to see the situation for myself, having written about it on this blog. Basically, if you’re a visitor like me, you now need both a QR code and an ID to get into the campus, which is undeniably annoying. On the other hand, once you’re in, everything is pleasant and beautiful. Just from wandering around, I’d have no idea that this campus had recently been Ground Zero for the pro-intifada protests, and then for the reactions against those protests (indeed, the use of the protests as a pretext to try to destroy academia entirely) that rocked the entire country, filling my world and my social media feed.
When I asked friends and colleagues about the situation, I heard a range of perspectives: some were clearly exasperated with the security measures; others, while sharing in the annoyance, suggested the measures were still needed, since every time the university has tried to relax them, the “intifada” has returned, with non-university agitators once again disrupting research and teaching. Of course we can all pray that the current ceasefire will hold, for many reasons, the least of which is that perhaps then the obsession of the world’s young and virtuous with destroying the world’s only Jewish state will cool down a bit, and they’ll find another target for their rage. That would also help life at Columbia and other universities return to how it was before.
Before anyone asks: no, Columbia’s Peter Woit never showed up to disrupt my talk with rotten vegetables or a bullhorn—indeed, I didn’t see him at all during my visit, nor did I seek him out. Given that Peter chose to use his platform, one of the world’s best-known science blogs, to call me a mentally ill genocidal fascist week after week, it meant an enormous amount to me to see how many friends and supporters I have right in his own backyard.
All in all, I had a wonderful time at Columbia, and based on what I saw, I won’t hesitate to come back, nor will I hesitate to recommend that Jewish or Israeli or pro-Zionist students study there.
Posted in Adventures in Meatspace, Complexity, Obviously I'm Not Defending Aaronson, Quantum | 68 Comments »

