I don’t like A.I., and I am raising my children not to like it. I’ve been telling them for years now that chatbots are manipulative and dangerous, that A.I. image generators are loosening our collective grip on reality, that large language models are built atop industrial-scale intellectual-property theft. At times, I find myself speaking with my kids about A.I. in the same terms that we might discuss a creepy neighbor who lives down the block: avoid eye contact, cross the street when you walk past his house, and, when in doubt, call on a trusted adult. Yes, I, too, have suspected that the creepy neighbor walks on cloven hooves inside his Yeezy Boosts, but he probably isn’t going anywhere—in fact, he keeps buying up properties around town—so just try your best not to engage.
Somehow, I was not prepared for the creepy neighbor to start hanging around my kids’ schools; somehow, I thought we had until high school. In February, my son, who is in third grade at a public K-5 in Massachusetts, came home with a piece of paper in his backpack that read “Certificate of Completion,” for “demonstrating an understanding of the basic concepts of Artificial Intelligence.” He and his classmates had earned this honor, I learned, by playing Mix & Move with AI, a computer game produced by the nonprofit Code.org in partnership with Amazon Future Engineer, in which the student “designs” a cartoon dancer and “remixes” a popular song—available, needless to say, on Amazon Music. The game is an inane drag-and-drop affair that has little to do with A.I.; the certificate, it turned out, was merely a memento of a pointless and deceptive branding exercise.
Then, in March, students at my eleven-year-old daughter’s public middle school began receiving new Google Chromebooks, and that is when I heard the tap-tap of the cloven hooves approaching our doorstep. The Chromebooks, which the students use in every class and for homework, came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: “Help me write.” If she is starting work on a slide-show presentation, the prompt is “Help me visualize.” She shoos away these interruptions, but they persist: “Help me edit.” “Beautify this slide.” The image generator is there, if she’d ever wish to pull the plug on her imagination. The Gemini chatbot is there, if she ever wants to talk to no one.
So many times, so many times, I warned her about the creepy neighbor. Now he reads her poems and knows her passwords. He’s always watching through the screen.
No single company has a monopoly on A.I. in K-8 education. In Boston’s public schools, sixth graders used chatbots powered by OpenAI’s ChatGPT and Anthropic’s Claude to prepare for this year’s statewide standardized tests. In New York’s and Los Angeles’s school districts, among others, kindergartners talk to a gamified reading bot called Amira, which records children’s voices in order to provide A.I.-driven feedback. A public-school parent in Brooklyn told me about a second-grade art class in which the students can cook up A.I. slop using Adobe Express for Education. When a group of fourth graders in Los Angeles used the same Adobe program to design a Pippi Longstocking book cover, it spat out highly sexualized images.
Google has an institutional advantage over its A.I. competitors in the form of the Chromebook and its built-in “learning management system,” Google Classroom. During the COVID-19 pandemic, as school districts scrambled to set up remote learning, many of them found a cheap and easy option in the Chromebook, which strikes me as little more than a slow browser connected to a janky trackpad. A report by the U.S. Public Interest Research Group noted that, by the last quarter of 2020, year-on-year sales of the device were up by two hundred and eighty-seven per cent. In a national survey conducted by the Times last November, about eighty per cent of K-12 teachers said that their districts use Chromebooks, which has created a vast captive market for Gemini and helped make A.I. in schools a near-universal prospect.
Support for generative A.I. in elementary and middle schools clusters around the belief that early exposure to the technology will foster digital-media literacy, give students a foundation in engineering concepts, and prepare them for a future in which most professions are steeped in A.I. Proponents say that teachers can use A.I. to save time on grading papers and tedious administrative tasks; they also tout the adaptive-learning aspects of A.I. tools, which adjust in real time to a child’s progress and, by producing troves of data, help teachers give individualized attention to each student. “One of the core things that we think about when we bring A.I. to education institutions is: How do you put the educator at the center of that experience?” Shantanu Sinha, a V.P. of Google for Education, told me. Gemini’s aim, Sinha went on, is to “empower the educators” in “creating richer experiences. We are not the pedagogical experts.”
Other advocates suggest that A.I. might eliminate the need for pedagogical expertise altogether. Alpha, a fast-growing private-school chain that employs “guides” instead of teachers and serves children as young as four, claims that it “harnesses the power of AI technology to provide each student with personalized 1:1 learning,” allowing kids to “crush academics in just 2 hours” per day, according to its website. At a recent White House summit on children and tech, Melania Trump appeared alongside Figure 03, a humanoid contraption by the robotics company Figure AI, which looks, sounds, and moves as if Eve from “WALL-E” had mated with an arthritic Imperial Stormtrooper. The First Lady asked her audience to imagine such an A.I.-powered robot as a teacher, one who is “always patient and always available” to its student. This lucky pupil will learn more quickly and have more time for friends and sports, Trump said; he or she will become “a more complete person.” Figure 03’s face is literally a black screen: a robotic balaclava.
The message from the White House—and, often, from tech companies and public schools—is that Figure 03 and its A.I. militia are irreversibly here, and belong everywhere, and we should feel terrified but also “empowered,” and that the more time and resources we hand over to them the less they will hurt us, hopefully, maybe. Last month, New York City’s Department of Education began soliciting public feedback on its preliminary guidelines for using A.I. in K-12 classrooms, which include this admonishment: “The question is not whether AI belongs in schools. The question is whether we will collectively build a system that governs AI to serve every student and every stakeholder.”
It’s quite the rhetorical suplex—opening a debate by declaring its central premise off limits. But, as we know from hallucinating chatbots, saying something doesn’t make it so. Countless studies have sown doubt about the place of A.I. in pedagogical settings. “The integration of LLMs into learning environments,” a 2025 study out of M.I.T. cautioned, “may inadvertently contribute to cognitive atrophy.” (The authors appended an F.A.Q. to the paper with instructions on how to discuss its findings: “Please do not use the words like ‘stupid’, ‘dumb’, ‘brain rot’, ‘harm’, ‘damage’, ‘brain damage’, ‘passivity’, ‘trimming’ and so on.”)
More recently, Education Week published findings from an analysis of data from some thirteen hundred U.S. school districts, which found that about one in five student interactions with generative A.I. “involved cheating, self-harm, bullying, and other problematic behaviors.” This month, a study by researchers from M.I.T., Carnegie Mellon, U.C.L.A., and the University of Oxford showed that people who used L.L.M.s to solve math problems involving fractions and then lost access to A.I. assistance “perform significantly worse without AI and are more likely to give up. . . . These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning.” (This research has not yet been peer-reviewed or published in a scientific journal.) And, at the start of the year, the Brookings Institution released a “premortem on AI and children’s education,” which paired analysis of about four hundred research studies with hundreds of interviews with students, parents, educators, and technologists, and concluded that A.I. tools “undermine children’s foundational development.”
The main arguments against the use of generative A.I. in children’s education are threefold. The first is that L.L.M.s encourage cognitive offloading before kids have done much cognitive onloading—that is, if these tools cause atrophy of thought in adults, then we can scarcely overestimate the potential effects on a brain that has not developed those cognitive muscles in the first place.
The second is that chatbots, which mimic emotional intimacy and tend toward sycophancy, warp how children forge their selfhood and relationships. Around age ten or eleven, kids are “suddenly developing more sophisticated relationships and social hierarchies,” Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, told me. “A lot of that can be traced back to surging oxytocin and dopamine receptors. Oxytocin makes us want to bond with peers, and dopamine makes it feel good when we get positive feedback.” When a fawning L.L.M. enters the chat, “it’s hijacking the biological tendency to want peer feedback,” Prinstein said. Tweens do a lot of mutual emotional disclosure in the normal course of growing up, he went on, “but if they’re going to a chatbot, they miss out on practicing skills that we use for the rest of our lives.”
The third complaint against the use of A.I. in schools is that it confuses ends and means, privileging the most efficient route to the correct answer, the crispest thesis statement, or the neatest drawing over the messier and less quantifiable process of building a thinking, feeling person. “We are potentially undermining complex thinking, changing the development of sociality, and mistaking the learning goal,” Mary Helen Immordino-Yang, who is a professor of education, psychology, and neuroscience at the University of Southern California, told me. “We are cutting off learning at the knees.”
Even some pro-A.I. education advocates concede that A.I. poses significant cognitive and social-emotional risks to young people. Amanda Bickerstaff is the co-founder and C.E.O. of the organization AI for Education, which provides training for educators and students on generative A.I. literacy. “Children should not be using chatbots under age ten,” Bickerstaff told me. “These tools require expertise and evaluation skills that even many adults don’t have.” Google’s decision to make Gemini available to all ages, she said, marked one of the few times in her career that she has lost sleep over a work-related matter; she recalled thinking, “They so clearly know that this is going to be bad for kids, and yet they’re still going to do it.” Bickerstaff went on, “I don’t think they’re asking really basic questions like, ‘If a kid can immediately make a picture instead of draw one, what will happen to that kid’s ability to think on their own and draw?’ ”
Drew Bent, who leads education research at Anthropic, told me, “It’s not our place as a company to say, ‘O.K., use A.I. at this age—don’t use it at that age.’ ” Like Sinha, at Google, Bent emphasized that his team has focussed more intently on how teachers engage with A.I., through tools such as Amira and MagicSchool, both of which are partly powered by Claude. “You have to already be at a certain level of critical thinking which you develop over childhood,” Bent said. “Before teachers even put an A.I. tool in the classroom, they have to get to those skills of ‘When can you trust the source?’ A.I. models can come off as very authoritative, very confident.” Case in point: Bent was one of two Anthropic employees to tell me that the Claude chatbot is intended for users who are at least eighteen years old. But, when I mentioned this in passing to Claude, it replied with a “small correction,” saying that the cutoff is actually thirteen.
Some of my daughter’s old schoolwork is stored on her new Chromebook, including a slide show she made last year, in fifth grade, about the history of the printing press. I recall gently encouraging her, before the project was due, to rearrange some of the pictures and to reconsider her choice of black-on-dark-blue type design; she just as gently rebuffed me. The other day, for the purposes of this article, we ran the slide show through Gemini’s beautifying and editing process in Google Slides. Gemini scrubbed and buffed the captions; inside thirty seconds, it had symmetrically shuffled the pictures, added a bunch of its own, and revamped the typography, which was now bigger and easier to read, evocative of fifteenth-century movable type, and set against a contrasting backdrop of aged vellum.
Comparing the two slide shows felt, for me, like the mother-daughter pool race in “Mommie Dearest,” with Gemini playing the role of Joan Crawford: I’m bigger and I’m faster; I will always beat you. My daughter was unmoved. “I like mine better, because it’s original and I worked really hard on it,” she said. “I like mine better because it didn’t take thirty seconds.”
Immordino-Yang told me that the ultimate goal of any school assignment is not the finished project itself but the experience of having done it—an experience that A.I. tools are intended to abbreviate or obviate. With their prettifying intrusions and impatient, lurking presence, they block and reroute a young person’s natural, gradual progression toward cognitive maturity, “especially one who is still developing the neuropsychological substrate for creating narratives and thinking through arguments over time,” Immordino-Yang said. “It’s a fragile process, and it’s being interrupted.” Put another way, she said, “We don’t say to the parents of an eight-month-old, ‘Don’t encourage your child to crawl—that’s a useless skill.’ ” (A fixation on what is “useful” and not has also led to the decline of handwriting in school, despite its proven role in the development of motor skills, language-processing, and working memory.)
Amy Finn, an associate professor of psychology at the University of Toronto, told me that “part of the magic of how kids learn is that they have less knowledge of what they’re going to experience and fewer expectations about what’s going to be relevant. They don’t have that adult filter of strategically extracting things from their experience, and so they retain a lot of unexpected details that adults would find irrelevant. That allows them to be creative in ways that adults are not creative.” The child brain’s tendency toward delightful non sequitur and unpredictable meanderings is not aligned with an L.L.M.’s orientation toward speed and sleekness and summary, toward frictionless, rational outcomes. (An obsession with outcomes-over-process is also a hallmark of the universally loathed style of instruction known as “teaching to the test,” which began taking hold in American classrooms in the early two-thousands, after the No Child Left Behind Act tied federal funding to standardized assessments.)
The question of what a child finds relevant or irrelevant also arose in my conversation with Sinha, of Google for Education. I asked him for a few A.I. best-use cases that an elementary-school teacher might consider. “You could use Gemini to create a children’s story that isn’t just an arbitrary children’s story,” he said, “but you could bring in context of your classroom, or even pictures, and work with Gemini then to say, ‘Hey, here’s a storybook that we can all read together that makes it a little bit more relevant, a little bit more personalized.’ ” He offered another example. “Maybe a child had a drawing that they were proud of, and the teacher can select one, and put that into Google Vids”—the company’s A.I. video-generation and editing app—“and animate it into a really interesting video of that drawing, which immediately engages and hooks students in a very different way.” He added that, by using A.I. tools, students are “able to create much more impressive projects that you could have never done before.”
But why and in what ways does a child’s story or drawing need to be “impressive”? Impressive to whom? And should it leave the impression that it was made with A.I.? “This is where I could go back to an educator,” Sinha said. “Like, what do you want from this?”
In the nineteen-twenties, the American psychologist Sidney Pressey invented a “teaching machine,” about the size of a typewriter, that could administer a multiple-choice test and grade it in real time. As Audrey Watters writes in her 2021 book, “Teaching Machines,” the ed-tech innovators of yore—including Pressey’s better-known rival B. F. Skinner—spoke about their devices “in ways almost identical to those who push for personalized learning today, all so that, as Pressey put it, a teacher could focus on her ‘real function’ in the classroom: ‘inspirational and thought-stimulating activities,’ including giving each student individualized attention.” (Skinner once proclaimed that the act of grading papers was “beneath the dignity of any intelligent person.”)
Over a century of technological change, the ideology of ed-tech has remained constant: that the latest innovation—whether teaching machines, Khan Academy video tutorials, or chatbots—is perpetually on the verge of launching a new era in personalized learning, one that will prove liberatory both to overworked teachers and underengaged students. This durable belief was evident in my conversation with Bent, of Anthropic’s education-research team, who spoke of A.I. tools “giving teachers more one-on-one time with students.” He went on, “When a teacher has thirty students, it becomes very hard to track where all the students are at, to create custom activities for all of those students.” But, with Claude, “we see a teacher who has thirty or thirty-five students in their class doing what a teacher would do if they had five students in their class, but just doing it better.”
The feasibility of such a scenario is yet to be established. But a new educator training program called the National Academy for AI Instruction may offer teachers a chance to stress-test some of the many promises that the A.I. industry has made to their profession. The academy, which is headquartered at the United Federation of Teachers’ office in Manhattan, is a joint project of the U.F.T. and the American Federation of Teachers, and is funded via a twenty-three-million-dollar partnership with Microsoft, OpenAI, and Anthropic. The in-person and online classes offered by the academy are intended to help educators “not accept the inevitable but navigate it,” Randi Weingarten, the president of the A.F.T., told me.
On first appraisal, the National Academy for AI Instruction might sound like manufactured consent in the form of a webinar, bought and paid for by a cabal of tech giants. Yet it’s difficult to come away from conversations with Weingarten thinking she’s an A.I. booster, or, for that matter, a supporter of ubiquitous Chromebooks in classrooms. “The more people rely on A.I., the more people are not thinking,” she said. “We need more paper and pencil, more hands-on learning, and fewer screens.”
And if union members disagree with their district’s pro-A.I. policy, or if they don’t want Gemini barging into their students’ workspaces? “We will defend them,” Weingarten said. “This is all going so fast, and part of my goal is to give teachers permission to object.” The teachers’ unions declined to partner with Google, Weingarten said, because the company “would not make the representations about protecting students and staff safety and privacy that we were looking for.” (Sinha disputed this, saying that Gemini complies with federal regulations, that student data is not leveraged for profit, and that chats with students are never seen by humans or used to train A.I. models. Additionally, a Google representative said, in an e-mail, “Based on conversations with our teams internally, we have no knowledge of AFT raising privacy concerns with us before launching” the A.I. academy.)
Other teacher- and parent-led organizations are likewise trying to build permission structures for limiting A.I. use in schools. Craig Garrett, whose child attends a public school in Brooklyn, told me that he started a WhatsApp group of concerned parents in June, now called District 14 Families for Human Learning, after he discovered that his then kindergartner had been reading to the Amira bot in class all year. (Activists have questioned whether classroom use of Amira, by recording students’ voices, violates a New York State education law forbidding the “unauthorized release of personally identifiable information.”) Garrett is also part of the Coalition for an A.I. Moratorium, a citywide group of educators, parents, and students that is petitioning the New York City mayor, Zohran Mamdani, and Kamar Samuels, the schools chancellor, for a two-year pause on A.I. in K-12 classrooms.
Also part of the coalition is Naveed Hasan, a public-school parent in Manhattan who serves on a citywide advisory committee on education, and who, as a computer scientist, has worked in A.I. for more than twenty years. “I have a philosophical problem with private companies trying to make intelligence into a utility,” Hasan told me. “They tell us not to worry about intelligence—we will let you subscribe to it, and you will be free to do other things.” He went on, “We need to influence the mayor, and to influence everyone who works for the mayor, to get him to order a stop to all this.”
Members of the Coalition for an A.I. Moratorium maintain that few teachers or parents appear to have been consulted on New York’s preliminary A.I. guidelines, which do little to address privacy concerns or the potential negative effects of A.I. use on students’ brain development and mental health. The city D.O.E. official overseeing the guidelines, Miatheresa Pate, is a current recipient of a fellowship jointly offered by Google and GSV Ventures, an ed-tech venture-capital firm whose portfolio includes Amira and MagicSchool. (Other names on the current Google-GSV fellowship roster include top school officials in Berkeley, Dallas, Los Angeles, and Newark, and statewide officials in Colorado and Maryland.) “If you ask tobacco companies to help write your school’s policy on cigarettes,” Garrett quipped, “you’re going to end up with guidance on how to smoke responsibly in school.” (In an e-mail, a D.O.E. spokesperson said that more than a thousand “stakeholders,” including families and educators, were “engaged” in drafting New York’s preliminary guidance, and added that, while Amira and MagicSchool are used in some schools, the city “has no centralized contract for either product and use is determined at the school level—not by Dr. Pate.”)
A kindred group, Schools Beyond Screens, was formed last year among parents in the Los Angeles Unified School District, where the superintendent, Alberto Carvalho, is currently on administrative leave following F.B.I. raids of his home and office, in February, allegedly over his ties to a bankrupt ed-tech company that was developing an A.I. chatbot for kids. (Carvalho, who has denied any wrongdoing, is also on the board of Code.org, purveyors of Mix & Move with AI.) Among the goals of Schools Beyond Screens is to enforce closer scrutiny of the lucrative contracts that urban districts enter into with tech companies. “The money spent on tech platforms and replacement Chromebooks is money that could be going to teachers,” Kate Brody, the mother of a first grader in an L.A.U.S.D. school, told me. The group also wants districts to establish clearer consent guidelines around the use of digital platforms and to adopt a Student Tech Bill of Rights, which includes the right to “read whole books,” to “regularly read and write on paper,” and to “a low-stimulation learning environment.”
“It still feels like there’s no place to say, ‘As a family, we don’t believe in this. We don’t think it’s right,’ ” Brody said. “My primary concern with my kids using A.I. is cognitive, but for other parents it’s moral, it’s ethical, it’s environmental. These things were rolled out so quickly, with no consent, and now we are trying to dismantle them.”
What Brody and others are trying to dismantle is already part of a daunting corporate and technological superstructure. Yet there is nothing eternal or canonical or irreversible about this system. Gemini is new, but the spectacle of children hunched all day over a median-nerve-shredding computer manqué is itself a relatively recent and, it would seem, plausibly impermanent phenomenon. Chromebooks in classrooms are not inevitable; we could choose to see them as a stubborn but eminently killable weed of the pandemic, like QR-code menus in restaurants. (The Times recently published an excellent story on how “Chromebook remorse” is taking hold in many U.S. school districts.) Nowhere is it written that a multinational conglomerate with a market cap of roughly four trillion dollars is fated to command our public schools, or to grant fellowships to the leaders of those schools, or to monetize the inefficient children who attend them. Another item in the Student Tech Bill of Rights, in fact, is the “right to a learning environment that is free from undue corporate influence.”
Brody told me that anti-A.I. advocacy in education is tricky because screens have become virtually synonymous with school, and A.I. is increasingly synonymous with screens. “You have to be more surgical about it than with a lot of other problems,” she said, “unless you’re going to, like, take the computers and chuck them into the sea.” But why not? I thought back to what Sinha had asked me: “What do you want from this?” What if the answer is nothing? ♦