Image source: Bibliothèque nationale de France
Dear students,
As you know, in February the University announced that all students, staff and faculty will now be offered access to the Google chatbot, “Gemini.” In a series of emails to the Georgetown community, the Interim President and Interim Provost explained that this is part of a larger plan to put “generative AI” (a term which they do not define, but by which they appear to mean simply chatbots and other chatbot-like data products) at the center of a whole range of university activities, from scholarship and curriculum development to the various functions of the administrative bureaucracy.
I was not surprised to learn that Georgetown, like most American colleges and universities, has succumbed to the pressure to appear part of the “AI” in-crowd (and to the temptation of the resources being made available to those in that crowd). But even though everything I have done in my professional life has been in some way based on the expectation that institutions will tend towards corruption, corrosion and capture, when I think about what this particular instance of that phenomenon signifies for you, the students of Georgetown, I feel very sad and angry. And I decided that the best thing to do with that sadness and anger would be to write to you all directly about why this decision by your university, which may seem on the surface to be an example of garden variety corporate thoughtlessness, should disturb you deeply, and provoke you to fight back.
The first thing you should know is that most of the information being funneled to you about the technologies marketed as “AI” is purposely distorted and does not reflect what has actually been established about the tech, the industry that produces it, or its social, political, economic, and environmental impacts. There is a robust and fast-growing body of interdisciplinary research that comprehensively debunks the claims that tech companies — and those who shill for them — make about these products, and exposes the global corporate power grab that these claims are being used to enable.
I hope you will spend some time exploring that literature. But right now I don’t want to talk about the way that the AI industry abuses workers, hoards resources, manipulates markets and elections, controls policymakers, steals from artists, violates civil and human rights laws, colludes with war criminals, exploits poor and vulnerable people, implements mass surveillance, or lies to customers, Congress, its own shareholders and members of the public. If I talk about those things, some of you might still say to yourselves “yeah but that’s capitalism right? That’s nothing new. That’s business. That’s the price of ‘innovation.’” Some very prominent professors at Georgetown have made exactly that argument to me when I have tried to talk to them about all of the above. So, instead of showing you how what’s being done to all of us now in the name of “AI” is actually a grotesque caricature of those sorts of neoliberal aphorisms, I want to talk about something much more basic. I want to talk about the way that the survival of the AI industry depends upon you, the generation coming of age right now, agreeing with them that you have essentially no say in your own future.
One piece of evidence that they view your disempowerment as necessary for their survival is that their entire messaging strategy is to try to persuade you that there is nothing you can do but submit to their domination, and to threaten you with dire consequences if you do not. You cannot spend an hour online without being bombarded by propagandistic articles, videos and ads about the “inevitability” and “unstoppability” of the “AI revolution,” urging you not to be “left behind” or “forced out,” and offering you various products or services which promise to save you from being crushed by the inexorable advancement of “AI systems” which are analogized in one breath to natural disasters and in the next to supernatural beings. “AI is evolving” and “nobody can keep up,” not other industries, not governments, and definitely not regular people like you and everyone you care about.
These claims are ridiculous on their face, entirely unsupported by any factual or scientific evidence, and we would all be laughing at the people making them were it not for the fact that those people have enough money and power to punish, individually and collectively, anyone who dares to do so. But there is a reason that they are using this particular message, rather than any other message, to sell their product (and yes, the correct way to read most New York Times and Washington Post articles about “AI” is as advertising). The reason is that in order to preserve their wealth and power, they need us to be so in the habit of denying our own capacity to resist that we actually do lose the capacity to resist. And that’s actually a sign that, in spite of all the weapons at their disposal, they know their hold over us is weak.
In truth, there are many things that, if enough of us decided to do them, would end the AI industry entirely. Some of these things might involve passing new laws — like laws limiting who can create or have or share or sell what kinds of data about whom, or how much data any one corporation can have and for how long and what kinds of things they can make with it, or how much energy can be spent by anyone to make anything with it, or what kinds of human activities can even be turned into data in the first place. But there are other things we could do just through collective action. We could decide to stop using smart phones and wearables and other forms of technology that are used to track us and sell us things and sell information about us to those who would like to use us for something. We could stop communicating through corporate social media. We could stop buying things online. We could abandon the current internet and build a new one. I am not saying we should do these things. I am saying that we could and maybe even that it is probable we eventually will. I am saying that, if I were a tech billionaire, I would worry about it. I would worry about it not because, as a tech billionaire, I would believe that the arc of history bends towards justice, but because I would believe that the things we build tend to fall apart. None of this stuff has been around very long in the grand scheme of things. Human beings might well decide that the “digital era” was in fact never on the trajectory of human progress, but was more like a toxic tributary we fell into for a while, and that finding a way forward will require first finding a way back. And even if we don’t decide that, it would be a mistake to think of digital infrastructure as durable across centuries, or of the dominion of the tech bros as durable across generations. This too will end, and we will have to deal with it.
I’m saying “we,” but I really mean you. You are the ones who will be there for what comes next, and that’s why, for the “AI” overlords, you are the biggest liability. If you start thinking about the world you want to create to serve your own good, the good of your community, and the good of your children and grandchildren, instead of thinking about how you can secure safety, comfort, and status in the world the tech companies have built to serve their greed, they will lose all their power. That’s why they are doing everything they can to convince you that you actually do not have the ability to think those thoughts, and that none of the ideas you might have about your own future are ideas that can actually be realized. It’s a big win for them, in their quest to persuade you of your powerlessness, that they have gotten your university to adopt their marketing language for its official statements, to shape its academic programming around the presumption of their indefinite economic primacy, and to pay for you to have free access to technologies that will make it harder — the more you use them — to know yourself to be a free intellectual, creative and moral agent.
But the great thing is, you don’t have to go along with this, and I urge you not to. You can refuse to use the chatbot. You can tell your professors that you don’t want them to use it or to require you to use it. At a minimum, you can demand that they assess the work of those who actually want to do the work themselves differently from the work of those who want the chatbot to do it for them. You can organize against “AI” requirements in degree programs and against data products, surveillance systems, and automation in all aspects of your university experience. You can create student groups dedicated to the rejection of all these things, and to the imagination of what you would like your education to be like instead.
You can advocate for modes of teaching and learning that are not transactional, but transformational. You can push for curricula that do the opposite of what the chatbot does, by putting you into direct unmediated contact with whatever you are studying. Ask your English professor to show you how to read a great work of literature slowly, carefully, with attention to the details of your own confusion and struggle with the text. Ask for Euclid and al-Khwarizmi as your math textbooks. Ask for intensive seminars from the world-class Georgetown librarians on archival research methodologies, and lobby for course credit for time spent in one museum examining the only existing artifacts from one historical moment about which you want to know. Ask for the restoration of institutional investments in art, music and humanities programs.
I am not suggesting you should resist a chatbot education because I expect you to have a smooth path at Georgetown if you do. It’s definitely risky. And I’m not saying you should resist in spite of the risk. I’m saying you should resist because of the risk. The risk is what helps you remember, at a moment when everyone in authority is (or worse, is pretending to be) suffering from the insane delusion that the future of humanity depends on a computer program that generates probabilistic text strings, that there is something that learning is for, that thinking is for, that work is for, and that you are for, the discovery of which belongs to you. If you start now taking risks that help you remember this, you may become the kind of person who cannot be pushed around by bullies and autocrats, or manipulated by propaganda, or satisfied by a life lived solely for the sake of self-protection and self-enrichment.
I’m taking a risk too by writing you this letter. I don’t have tenure at Georgetown. Right now I don’t even have an academic appointment. I could be fired at any moment for any reason. The existence of the center I run depends entirely on my ability to raise money from philanthropic foundations, many of which have publicly endorsed some version of “AI” investment. I’m not sure how what I have written here will impact the future of the Privacy Center and the urgently important work we do. I’m not confident that many of my colleagues would support me were I to face negative consequences.
But I keep thinking about all of the incredible students I have been so lucky to teach, and to get to know, in my time at Georgetown, and I keep looking around in the hopes that someone who is more powerful than I am, whose job is safer, will — for your sake — speak up about what a shameful and embarrassing capitulation this is, and say something about how it connects to the political collapse we are all living through in this country right now. Maybe that will still happen. In the meantime, I want to remind you that the future is yours, and it is not known — not by you, not by Georgetown, not by Sam Altman or Elon Musk, not by the chatbot, not by me. And that is a beautiful thing.
Here to help if need be,
Emily
Emily Tucker
Executive Director
Center on Privacy & Technology
Georgetown Law