Let me first clarify that I’m not opposed to AI. I’ve personally architected, coded, and trained many models. I use ChatGPT, Stable Diffusion, and other generative technologies far more than the average person. These are powerful tools that absolutely have their place: AI helped me edit this post, for example. I’m cautiously optimistic that with the right guardrails and careful product design, AI could augment human creativity, improve education, and even support mental health. Without those guardrails, though, today’s AI is a real dystopian shit show.
Meta recently debuted AI profiles. Fortunately, there was intense backlash (score one for humanity), and the feature was shut down. What troubles me is not the use of AI itself, but the intermingling of AI and social relationships: the explicit intention that bots should replace our friends. Indeed, when one friend of mine showed these profiles to her dad, he didn’t understand that these were not real people. It should be obvious to any product designer that a tiny “AI managed by Meta” disclaimer is not enough to clearly differentiate real humans from fake ones. Make no mistake: these were intentionally designed to be mistaken for humans, and that should infuriate us. I previously lamented the decline of society and technology’s role in it. We urgently need our technology to support communities and (real) human connections, not surreptitiously replace them with lookalikes!
I’m more familiar than most with how these kinds of things get built. At different points in my career, I’ve been the software engineer, the AI expert, the product manager, the UI/UX designer, QA, and management. My wife previously worked on LLMs at Meta. So, I gotta wonder: who were the folks who built this and thought, “Yup, seems like a great product. No concerns here, ship it!”? Where are our ethics in tech?
The culture of Silicon Valley celebrates disruption and change, with little regard for the consequences. Facebook famously coined “move fast and break things”. That’s cute and pithy, and charitably, I hope they originally meant “it’s cool to mess with the dev branch,” not “it’s cool to mess with the fabric of society”. Palantir surveils people. Tesla releases “beta self-driving” that kills people. Such is life in Silicon Valley.
It’s a good thing that Meta foresaw some of these ethical challenges and formed a Responsible Innovation team made up of experts in “anthropology, civil rights, ethics, and human rights.” It’s just too bad that the team was disbanded in 2022, and those experts are no longer with Meta. After all, a team of ethicists wasn’t in the best interests of “shareholder value”.
So, I come to you, my colleagues in tech: find your principles! Think deeply about the things you are asked to build, and their consequences. Take the Hippocratic Oath and make a commitment to “do no harm”. Read history. Don’t repeat it. Recognize what the banality of evil looks like in 2025, and don’t reduce yourself to merely a cog in this machine.
You might think that you have no influence. Remember that back in 2018, Google was working on Project Maven, a collaboration with the US Department of Defense to use AI for surveillance drones. Over 3,100 employees signed a letter to Sundar Pichai, and Google abandoned the project. I would have liked to see the same thing here. Discuss concerns with your coworkers. Ask the hard questions. Rock the boat.
I hope that someone at Meta is reading this. Regardless of where you work, I challenge you to think long and hard about what you and your company are building, what the long-term impacts might be, and whether that’s the future you want for yourself and your children. We each have a part to play in defining our future. Please, please, stop making it suck.