A.I. Was Supposed to “Revolutionize” Work. In Many Offices, It’s Only Creating Chaos.
Few people are as knee-deep in our work-related anxieties and sticky office politics as Alison Green, who has been fielding workplace questions for a decade now on her website Ask a Manager. In Direct Report, she spotlights themes from her inbox that help explain the modern workplace and how we could be navigating it better.
Although we’ve been told that A.I. is poised to “revolutionize” work, at the moment it seems to be doing something else entirely: spreading chaos. Throughout American offices, A.I. tools like ChatGPT are delivering answers that sound right even when they aren’t, meeting transcriptions that read like works of fiction, and documents that look polished on the surface but are riddled with factual errors and missing nuance.
If you’ve read anything about A.I., you know that it sometimes “hallucinates” facts that simply aren’t true, yet asserts them with so much confidence that its lies don’t get caught. (Witness the Chicago Sun-Times’ summer reading list that included nonexistent books by famous authors or the multiple lawyers who have been sanctioned for filing documents based on A.I.-fabricated legal citations.) Clearly, there’s more work to do on this emerging technology, but in the meantime, it’s ravaging some workplaces.
Here’s how it’s been going:
My company hired an account manager who insisted he was a phenomenal writer and asked if he could contribute to our blog. The first two pieces were just AI slop, so I politely thanked him and said we had plenty of posts already.
So he posted a third “article” on his own LinkedIn account in which the AI described how our company collaborated with the CDC on researching a certain disease and published a groundbreaking study. Then, according to the article, we apparently went into underserved communities and funded a bunch of clinics and immunizations.
None of this happened. It was hours before I saw it and forced him to take it down, and by then it had drawn many surprised comments and shares. Months later, we were nominated for an award for our commitment to caring for vulnerable populations.
Our execs usually send out a hype email right before the annual employee morale survey, emphasizing wins from the past year, basically trying to put people in a positive frame of mind.
Last year’s included the announcement of a major new program we knew employees really wanted. But it was a bit surprising, because it fell in an area my team was responsible for, and we were out of the loop, despite advocating strenuously for this over the years. So I went to the exec to A) convey enthusiasm for his newfound dedication to launching this program and B) ask what support he needed from my team/get us involved again. It turned out the program wasn’t launching at all—he had just asked AI to edit the email to make it sound more exciting and appealing, and it had done so by … launching my initiative.
I would like to shout-out the AI transcription tool at my old job that took notes at a meeting evaluating applicants for a job … and then automatically emailed said notes to the entire company AND to the candidates under discussion.
At my former workplace, the HR director did not know that her AI notes tool was recording her confidential grievance meetings with the union representatives and sending a full recap after each one to everyone on the calendar invite, even those who hadn’t attended. Shortly afterward, we got an email saying no one was allowed to use AI note-takers any longer.
As these tools become more mainstream, people’s misunderstanding of how they actually work—and, frankly, of how to assess a piece of work’s quality at all—leads them to use A.I. in ways that backfire. Job candidates visibly consult it during interviews, irritating their interviewers; networkers send emails that sound strangely hollow, falling flat with the contacts they’d hoped to impress; and employees who think A.I. will give their work a boost submit assignments that make them look worse than if they’d never used it at all.
Last week I got a LinkedIn message from an undergrad at my alma mater, asking to connect and get advice on how to get started in my field. I’m always happy to help people who are getting started, so we’ve exchanged some messages, and it’s become very clear that they’re using ChatGPT to write theirs. My field is in fact machine learning, specifically natural language processing—I know an LLM when I see one! I put genuine thought and effort into my advice, and I’m getting back businessy rephrasings of what I said and generic requests for more information. Is this as rude as it feels? Should I say something to them about it? I get that networking is hard for students, but I really don’t like this!
Our team is remote, so this was a Teams interview and we expected everyone to be on camera.
During the first few minutes, the candidate claimed to have technical difficulties and couldn’t get her camera working. After a few minutes of trying, we decided to move forward with the interview anyway, and it very quickly became apparent that the candidate was using AI to answer our questions. Her answers restated the question; they were filled with buzzwords but had no substance whatsoever, and her speaking cadence was exactly like someone reading from a script. We tried asking her questions such as, “How did you feel about that?” and “Do you have any questions for us?” but even her answers to those were AI-generated.
We went through the motions, sped through the interview in about 15 minutes, and let the recruiting company know afterwards.
One person who wrote to me even realized that her boss’s nonsensical and unhelpful replies to her questions came straight from ChatGPT; he was simply copying and pasting its answers:
I started a new executive-level position with a young-ish start-up a few months ago. My boss has always seemed distracted in meetings or like he wasn’t fully listening. He has an aversion to synchronous meetings and exhibited some bizarre behavior early on when asked a clarifying question about a concept he had shared with the company (he left the meeting abruptly and didn’t return).
This behavior baffled me until I typed a few prompts into ChatGPT and realized that much of what he communicates asynchronously is almost identical to the output. Then I noticed it everywhere—huge project plans that pop up out of nowhere, strange shifts in business strategy that are communicated oddly and not elaborated on further when asked … He’s using ChatGPT for everything. Others are also noticing and mocking him behind his back.
Even more aggravating, the widespread frustration with A.I. misuse is leading people to suspect it even where it isn’t. This freelancer was accused by a client of using ChatGPT to write her assignment, when in fact she hadn’t:
Despite personally writing every word of my most recent assignment, the final work was run through an AI detector and was determined to have been generated by ChatGPT. This stung—it was an accusation of dishonesty, discounted my years of skill, and feels like the first of what may become many such instances in the future. … I worry that I’ll put in hours and hours of work, only for clients to lose trust in the integrity of my work and/or skip out on invoices, having been convinced by a faulty program that they’re getting ripped off.
For all the hype, A.I. hasn’t just exposed technical limits—it’s exposed human ones: mistakes in judgment, overconfidence, and a shallow understanding of how these tools work. And while people will undoubtedly learn to wield A.I. more wisely in time, for now the revolution feels less like a breakthrough and more like a series of self-inflicted wounds.