The Safety Valve


“No sympathy for the devil; keep that in mind. Buy the ticket, take the ride…” — Hunter S. Thompson

There’s a small company in Austin that researches seismic activity. Every day, they work to develop software that records and analyzes potential earthquake indicators. In the depths of the earth, things can shift with unimaginable force. The engineers at this company study these movements so that when the ripples from down there reach us up here, we can be ready.

As I set up my first interview with one of their junior developers last week, it struck me that many of us have a similar attitude towards our careers. Comparing the impact of AI with an earthquake feels trite at this point, but I know that I approach every new development with a similar sense of fearful anticipation. I am always listening, constantly on the lookout for the ground moving from under me.

Marko is a 24-year-old software engineer. He graduated from college two years ago, and this seismic research company is his first job.

He is quick and witty, the kind of person who explains his workflow in detail with a big, bright smile. He is also the kind of person who uses many metaphors. Within the first ten minutes of our conversation, AI had been compared to an enslaved grad student, a little monkey, and a very complex set of tubes.

“I use Cursor every day, in almost everything I do, but I act as its overseer, a sort of safety valve,” he said.

The phrase safety valve stuck with me. It’s a role Marko carved out for himself. He’s not afraid of AI, and he’s not in awe of it either, though he plans to start a master’s program in machine learning next year.

“I’m a researcher at heart,” he said, “so I’m definitely interested in how AI develops.”

I tried to look for cracks in his confidence, but he kept smiling and answered every question with ease. I asked him what would happen if Cursor went down for a day. How much less productive would he be?

Marko shrugged. “Productivity will definitely go down. But I’ll be fine. I’ve been actively learning on my own. I feel like class of ‘24 was the last one where we actually needed to write our own code in school.”

He told me about the day ChatGPT’s servers went down last year. The internet panicked, and developers who had completely outsourced their thinking ground to a halt.

“Honestly, if you just use AI and don’t learn anything yourself, you’re a dumbass, and you will get fired eventually.” He laughed when he said it, but I could tell he meant it. To him, I realized, it all boils down to personal choice. Buy the ticket, take the ride. The same goes for the companies developing AI. The technology itself is almost beside the point.

“If this technology ends up hurting us, it will only be our fault, our failure as a society. We have the power to decide right now,” he continued.

What surprised me most about our conversation was how much we ended up talking about people. He told me about a coworker who ran into trouble working on code written by a previous developer. There was a lot of friction at first, but then the coworker and the senior developer went for a walk, talked it out, and came back understanding each other better.

“That’s the human side of working together,” Marko said. “An AI can fix any coding problem you ask it to, but it will never be able to solve these interpersonal issues.”

I asked him whether he felt junior developers were being left behind.

“It depends,” he said, and he continued with a story. He told me about a guy he helped get an internship at their company the previous summer. He had autism and required substantial support in communication, but Marko kept vouching for him.

“I told my boss, ‘Look, he has special needs, but I know him, and he’s very smart. Please just give him a chance.’”

For the first time in our conversation, he paused and thought for a bit before continuing.

“My boss could have said, ‘Listen, he’ll slow us down, it is what it is.’ But he decided to interview him, and he ended up working with us for the summer, and he learned. He ended up transferring from ACC to Texas A&M and studying geophysics, which is what we focus on.”

I asked him if he thought many other companies would take the same risk.

“I guess it comes down to what you’re governed by,” he finished. “Time to solution, or something else?”

There’s no sympathy for the devil in Marko’s worldview, but there is also no villain behind AI, no one to blame in particular. He has decided what kind of passenger he wants to be on this ride, and he is holding that position with both hands.

I’ll be honest: I went in expecting a fellow catastrophizer. I know I have caught myself spiraling when thinking about the future of my career, and, like most of my peers, my thoughts often turn apocalyptic. But if Marko felt uneasy about the subject, he did a great job of hiding it. And, now that I think about it, I’m glad that this first interview started us off on a confident note.

As it turned out, I didn’t have to look far for the existential dread I was after. My next interviewee, Alex, more than made up for it.
