What Universities Should Do About AI

A lot of university leaders have quietly converged on a comforting story: AI is mostly a writing problem.

That story is psychologically convenient. Writing is visible. Writing is common. Writing is where faculty first felt the ground move under their feet. But it is also the wrong abstraction, and it pushes institutions toward the wrong fixes (policy memos, detector arms races, writing-course “guidelines”) while the real transformation rolls on.

Treating AI as “writing-tech” is like treating electricity as “better candles.”

Yes, generative AI breaks traditional take-home assessments. Students can produce plausible essays on command. That forces changes in pedagogy.

But the deeper issue is not prose quality or plagiarism detection. The deeper issue is that AI has become a general-purpose interface to knowledge work: coding, data analysis, tutoring, research synthesis, design, simulation, persuasion, workflow automation, and (increasingly) agent-like delegation.

If university leadership frames the AI moment as “how do we integrate AI into writing courses,” the institution ends up optimizing one surface symptom while missing the whole disease.

If you want to understand what is about to hit universities, look one step earlier in the pipeline.

The practical implication is simple: bans do not stop AI use. They mostly push it underground.

My core worry is the one administrators often miss: students can deprive themselves of knowledge while still producing “acceptable work.”

AI lets a student behave like a CEO delegating tasks. But real executives succeed because they understand what to delegate, how to judge outputs, and when a failure is subtle but catastrophic. In an AI-heavy workplace, the value shifts upward from “can you produce text” to “can you specify, supervise, verify, and integrate work”: workers have to prepare the assignments an AI will work on.

A student who uses AI to skip the learning loses the very background knowledge they will need to command AI effectively.

That’s why the skills students need start looking less like “write an essay” and more like “manage an AI collaborator.” They need:

  • domain fundamentals (so they can spot nonsense),

  • metacognition (so they know what they don’t know),

  • verification habits (so they don’t launder errors into decisions),

  • and ethical and privacy judgment (so they don’t leak data or automate harm).

This is not limited to computer science.

So the “expected graduate profile” is changing. Students are increasingly expected to know how to use AI tools productively and responsibly, not just avoid them. Expecting students not to use AI tools means courses never teach them how to use those tools correctly.

Another leadership blind spot: many students (and plenty of adults) anthropomorphize these systems. Teens and young adults spend enormous amounts of time interacting with AI companions, sometimes treating them as conscious agents.

The downstream risks are not speculative. There is now mainstream reporting on severe harms tied to chatbot relationships, including youth mental health crises in the orbit of character-style AI systems. There is also serious coverage of “AI psychosis” as a reported phenomenon and what it might mean for vulnerable users. People are even falling in love with AI systems. While frontier AI systems can easily pass the Turing test and have a kind of general intelligence, they are not self-aware and don’t even think except when prompted.

If a university wants to claim it is preparing students for the real world, then “AI literacy” has to include: what these systems are, what they are not, why they feel human, and how that can mislead people. I have previously written about why existing AI systems (large language models) lack the prerequisite capabilities to be self-aware or to have human-like intelligence. That does not mean that they aren’t incredibly useful and powerful tools.

A lot of institutions are still cycling through:

  • panic about essays,

  • blanket bans that are unenforceable,

  • AI detectors (which are brittle in practice),

  • and faculty-level improvisation with no coherent institutional plan.

This is not strategy. It’s institutional anxiety expressed as PDFs.

A more sophisticated form of reaction is to create shiny new “AI + X” degrees alongside existing programs, without deeply reframing what each degree means in an AI-rich world.

SUNY Buffalo is an important case study because it represents a real investment and a real attempt to respond. Their Department of AI and Society describes an “AI+X model” with a Society Core, a Technology Component, integrative courses, and a cross-major capstone. That’s thoughtful in several ways, and it will help some students. But I still think it’s strategically incomplete if the institution treats “AI+X degrees” as the main answer.

Why?

  • It risks becoming parallel curriculum, rather than transformation of the core.

  • It can imply, unintentionally, that “AI readiness” is for students who opt in, instead of a baseline expectation for everyone.

  • It may encourage rebranding over introspection, when what is needed is brutal clarity about which parts of each degree are now AI-trivial and which parts need to become deeper, more conceptual, more synthetic, more human.

Universities need to rethink every major, not just attach “AI” to some of them.

A comprehensive AI education should exist across nearly every degree at some level. That’s AI literacy.

But universities should also offer AI degrees aimed at creating and maintaining AI systems, not merely using tools. Those programs need to start early and go deeper than a standard CS or data science track can easily accommodate. They cannot just be cobbled together from existing courses or created as “cash grabs,” which would cause reputational harm and would not serve students effectively.

You can see demand for exactly that kind of degree emerging. Reporting on new AI majors and AI colleges describes large enrollment surges, and highlights institutions creating standalone AI programs (not just concentrations). In a subsequent post I’ll describe what I think a good AI degree program would look like.

The right model is probably a two-layer system:

  • AI literacy for all students, integrated into every discipline.

  • Dedicated AI degrees for students who will design, deploy, evaluate, secure, and maintain AI systems in the real world.

Here’s the distinguishing feature of the better approaches: they are top-down, structural, and explicit. They do not outsource “AI strategy” to individual departments, schools, or committees.

Brown appointed Michael Littman as its inaugural Associate Provost for Artificial Intelligence, with a mandate spanning AI development, use, governance, research coordination, educational expansion across disciplines, and operational adoption. Littman is a distinguished AI researcher with genuine expertise. This is exactly the kind of move universities need if they want coherence instead of fragmentation. The contrasting pattern I’ve observed is organizations appointing Chief AI Officers who lack AI expertise and skills. If a university’s AI faculty do not respect the AI expertise of the person put in charge of AI for the institution, bad decisions will almost surely follow.

Ohio State’s AI Fluency initiative explicitly targets a world where every graduate must be “bilingual,” fluent in their discipline and in applying AI within it. It includes foundational exposure in required first-year experiences, workshops, a broadly available course (“Unlocking Generative AI”), and published learning outcomes that emphasize concepts, limitations, evaluation, discipline-specific use, and responsible implementation.

This is not “AI in writing.” It is “AI as a baseline competency.”

Purdue has positioned AI competency as a graduation expectation beginning in 2026, tied to partnerships and an explicit “working competency” framing. They also wrap this in an ecosystem story: course catalog, guidelines, toolkits, and industry partnership.

The point is not that every student becomes an AI engineer. The point is that every graduate has a functional baseline.

ASU’s approach emphasizes broad access to institutional-grade tools (ChatGPT Edu and others), projects at scale (hundreds), and an enterprise agreement that emphasizes privacy and separation from training data. They explicitly frame use across teaching, learning, research, and operational efficiency.

That matters because “everyone is using random consumer tools” is a privacy and governance disaster. Central access can enable responsible norms.

If leadership wants a clean target, it’s this:

AI fluency is not the ability to generate text.
AI fluency is the ability to direct AI systems, evaluate outputs, and apply them responsibly in your field.

That implies structural changes:

  • Rework assessment so it measures understanding in an AI-rich environment.

  • Teach verification habits and epistemic humility.

  • Build explicit norms for attribution, privacy, and appropriate use.

  • Create top-down leadership so strategy is coherent, not improvised department by department.

  • Deliver AI literacy across the entire curriculum.

  • Offer deep AI degrees for the students who will build the systems everyone else will use.

Universities that keep treating this as a “writing issue” are going to discover, too late, that they were solving the wrong problem with impressive administrative efficiency. You can read my earlier thoughts about why ignoring AI poses an existential threat to universities and what universities should do to avoid extinction.
