Recently, Hacker News updated the comments section of their guidelines:
Don’t post generated comments or AI-edited comments. HN is for conversation between humans.
And increasingly this is becoming a key point in human interaction online, one that also affects workplace dynamics. I’m not referring only to remote work: in every company I have worked for in the last 20 years there has been a digital component of some sort, like internal instant messaging, email, code reviews, etc.
Before LLMs (because we had AIs before!) there were Stack Overflow and web search, and people would generally say “this is what I found”, or process the information to adapt it to the subject at hand and produce some output. For example, generally, nobody shared a significant piece of code from SO “as-is”, as if they had written it themselves.
OK, fair point: some people did, and we could argue that it was the bottom of the barrel. Ineffective sorts exist for a reason:
StackSort connects to StackOverflow, searches for ‘sort a list’, and downloads and runs code snippets until the list is sorted.
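For fun, that joke algorithm can be sketched in a few lines. This is a hypothetical, self-contained parody: instead of actually scraping Stack Overflow, it takes a list of candidate snippets and runs them until one happens to return the list sorted (which is exactly why you should never do this with real downloaded code).

```python
def stacksort(items, snippets):
    """Satirical StackSort: run candidate code snippets until the
    list comes back sorted. `snippets` stands in for downloaded
    Stack Overflow answers; each is expected to compute `result`
    from `data`."""
    for snippet in snippets:
        try:
            scope = {"data": list(items)}
            exec(snippet, scope)  # running untrusted code is the joke, not advice
            result = scope["result"]
            if result == sorted(items):
                return result
        except Exception:
            continue  # broken snippet, on to the next answer
    raise RuntimeError("no snippet sorted the list")


# Two stand-in "answers": a wrong one and a working one.
answers = [
    "result = data[::-1]",    # just reverses the list
    "result = sorted(data)",  # the accepted answer, eventually
]
print(stacksort([3, 1, 2], answers))  # → [1, 2, 3]
```

It terminates only if some answer actually sorts the list, which is about the level of guarantee you get when pasting code you haven’t read.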
But here is where I am going with this: when you are having a conversation with someone on Slack and you realise they are feeding you LLM output, it is very demoralising. Maybe it will become an argument for back-to-the-office policies: you don’t get slop when you talk to your teammates in person. For now, at least.
Besides, it isn’t that easy any more to tell when some text was LLM-generated. It turns out that people who spend a lot of time dealing with LLMs are picking up the style, like em dashes or the key-point summaries. Which comes full circle, because the LLMs picked up those stylistic choices from human writing when they were trained.
It can also happen in code, when you are reviewing pull requests. Someone is asking for a review of their changes, and without any certainty you find yourself considering:
- They used an LLM to write the code, that’s why it came out like this. sigh
- OR maybe not? Maybe they don’t know better.
- BUT if they used an LLM, why did they submit it for review without refining it?
None of the options is satisfactory. People will use the tools they have available; we can’t do anything about that. But we can always ask for human interaction. Or they can just share the prompt and take themselves out of the picture!
Please, be human.
Would you like to discuss the post? You can send me an email!