theodp writes: In a year-end podcast, GeekWire noted that Microsoft President Brad Smith offered investors his own evidence that AI-is-real at Microsoft's Annual Shareholder Meeting in December. Smith explained that earlier that day he had relied on the memory of Copilot's Researcher Agent (YouTube, audio) to recall and explain an issue Microsoft faced seven or eight years ago, to help company leaders deal with a similar problem they now faced. The agent generated a 25-page report with 100 citations that so wowed his colleagues that they clamored for him to share the prompt he used to produce it, so they too could learn to use AI so effectively. While Smith shared neither the report nor the prompt with investors in the webcast, the anecdote alone left his fellow Microsoft execs nodding and smiling in amazement (GeekWire couldn't resist wondering aloud how many of the recipients used their own AI agents to summarize the 25-page report rather than actually reading it).
Reminiscing about Def Leppard in her weekly Ed-Tech and AI newsletter Second Breakfast, watchdog Audrey Watters on Friday painted a much bleaker picture of the what-me-worry-about-thinking AI utopia presented to Microsoft investors, cautioning: "Our understanding of the world — knowledge, memories, skills — are never, as are the versions of these things fixed in print or in the machine, inert. And importantly, the more we know, the more we practice knowing — thinking, reading, writing, imagining, talking to one another — the more we strengthen our ability to know. And the inverse is true too: the less we practice, the weaker our cognitive powers. The more superficial and scattered our mental activities — skimming, clicking — the more shallow our thinking. The more we 'outsource our thinking' to 'AI' (hell, to the computer or the Web), the more we might find ourselves unable to think deeply at all. [...] There's a product, but there is no process for you, the user. No discernment, no contemplation. No recollection or consolidation of earlier thoughts and ideas and memories. No cognitive effort through which you will think or learn or know or grow or ever remember any of this."
Sharing Watters' concerns, The New Yorker's Jessica Winter asks, "What Will It Take to Get A.I. Out of Schools?" "The tech world assumes that A.I.-aided education is necessary and inevitable. A growing number of parents, educators, and cognitive scientists say the opposite," Winter begins. She closes with a reminder that "Nowhere is it written that a multinational conglomerate with a market cap of roughly four trillion dollars is fated to command our public schools, or to grant fellowships to the leaders of those schools, or to monetize the inefficient children who attend them. Another item in the Student Tech Bill of Rights, in fact, is the 'right to a learning environment that is free from undue corporate influence.'"