13 Comments
Josh Glover:

I don’t really disagree with any of the individual points you made, Eric, but I think yielding to inevitability is dangerous. I definitely agree that it’s inevitable that the forces of capitalism will push this stuff, for the reasons that you mention but also because it is a powerful tool for finally putting software engineers in our rightful place as they see it: as replaceable semi-skilled labourers. What’s going on here is an attempt to Uberise all knowledge work, to make us precarious (or believe we’re precarious) enough to accept worse and worse working conditions, ideally not as employees but as contractors.

We can’t accept this as inevitable. We must resist.

Eric Normand:

Hey Josh!

I'm digging the discussion about this essay. I really hit a nerve!

I'm very curious: Besides not using the current crop of AI tools, what else are you doing/do you recommend we do to resist?

Josh Glover:

One of the most important things we can do is have good faith discussions like this with our peers to highlight the harms of AI and establish the conditions under which this technology could be responsibly used, as a precursor to working to establish said conditions.

Another thing we can do is demand that rigorous decision making be applied when choosing to deploy this technology at work. I’ve seen many cases where the question is “how can we use AI in our product?” rather than “here’s a customer need, how do I meet it, with AI being one of the tools in my toolbox?”

Sean Corfield:

When I've heard folks pushing back on AI and/or refusing to even try it, I've used basically this same argument that capitalism makes widespread AI usage inevitable -- and you really do need to learn to use this new tool to stay competitive, or at least to understand how it will affect your job and/or your life.

When I run across software developers who refuse to try a new type of tool or service... I am always a bit shocked: our industry is built on learning new stuff all the time, after all.

I was very much a skeptic when ChatGPT first appeared but once Microsoft announced they would integrate AI into their search engine (Bing) and later into the O/S as a whole (Copilot), I started to learn how to use it since it was inevitable for my life. And, yes, now I find Copilot adds to my productivity in several areas.

Josh Glover:

As one of the software developers refusing to try this stuff, I can share my perspective, Sean. To me, this isn’t just any “new stuff”, which could be useful or not; this is The New Bad Thing. This is heroin. It might be useful to some people for some things, and it sure seems fun, but it leads to all sorts of dark places. If things continue to develop as they are, I expect some of the enthusiastic users of LLM coding tools to end up in precarious, tedious jobs, reviewing AI-generated Python or JavaScript whilst watching their more junior colleagues being laid off and programmers new to the job market struggling to find work in the field at all.

So no, I won’t be trying this. For me, this is a red line. Just as you won’t work on weapons (a position I respect a great deal), I won’t work with AI coding tools or chatbots or image generators. I study them to understand how they work, what they can and can’t do, and the forces pushing them, but I do this in order to find ways to resist.

Sean Corfield:

Bear in mind that even heroin has good medical uses (at least in the UK and parts of Europe).

Josh Glover:

Exactly, and that’s why I think this metaphor has some legs to it (thanks, Ray!). With reasonable regulation, I think it’s possible to deploy LLMs responsibly, especially since DeepSeek has proven that LLMs don’t inherently need planet-wrecking scale. But we need to deal with the many harms of the technology, and we can’t just allow (or encourage) it to be dealt on the street, as it were.

Peter:

While I found myself nodding along to this framing, I think there's a materially relevant point that's missing: that these systems are unpredictably unreliable in subtle ways that only a subject matter expert is likely to notice. That means that the value proposition for "AI" that's being pitched to CxOs (i.e. "you can rely on fewer, lower skilled and therefore cheaper employees") is fundamentally flawed.

We're already seeing lawyers having their butts handed to them by judges for AI-generated filings that cite legal cases that literally don't exist, and my sense is that it'll only take a few similar events in other domains (health care, anyone?) before those same CxOs say "this garbage is going to bankrupt us with lawsuits -- get rid of it!".

Ryan Chitwood:

> The computer giants donated computers to school.

This video is worth a watch: https://www.afterbabel.com/p/big-tech-american-school

Matthias:

Very well observed. Thanks!

[Comment deleted]
Eric Normand:

I hope you don't think I disagree with anything you said. And I do see how the "hitch yourself" phrasing is ambiguous. I know people who are in tech solely for the money, and they're riding trends, hoping to exit before the trend changes, regardless of the costs. It wasn't meant as a recommendation.

I plan to address the alienation in future issues. I also plan to talk about the hopeful side of AI.

As to the lack of agency: I'm all for human connection and community, but I can't say I'm hopeful that it will have any effect on the rise of the AI giants.

But thanks for your comments! Keep them coming!

Sean Corfield:

Kinda disappointed they deleted the comment before I got to read it...

Josh Glover:

This is great analysis and I agree with every word. Please hit me up (@jmglov@mas.to, @jmglov@bsky.social, or @jmglov on Clojurians Slack) if you want to discuss this stuff further. 🙂
