humancode.us

Why does AI feel like theft?

March 29, 2026

One thing that continues to grate on my conscience about AI is how artists and writers consistently feel that the technology has stolen from them. We all know that web scraping is (and should be) a perfectly legal and acceptable practice, because preventing it also prevents all sorts of beneficial behaviors—the Internet Archive wouldn’t be able to exist, for one thing.

And yet, the very nature of AI takes scraped content and regurgitates it as a pink-slime extrusion that it feeds back into the web. And to creators, that just feels wrong; it feels like stolen valor, it feels like exploitation.

And it’s something I can’t (and shouldn’t) shake from my mind each time I see something made by AI. Just because something is legal doesn’t mean it isn’t abusive and unethical. Scolding people who complain about AI by telling them that web scraping is good, actually, doesn’t address the main complaint: that somehow, these AI assholes have exploited a common good and we can’t quite figure out how to stop it.

Many people I know have taken to large-scale AI-assisted coding with no qualms, because the tech can be useful in many cases, especially when the users are already software experts. But a technology being useful doesn’t make it ethical to use, and it’s impossible to see AI output without also seeing the masses of creators whose works have been scraped, many of whom feel like they’ve been used and exploited.

If ever there was a time to resist this technology, it’s now. We are at an inflection point, and we can either jump in headlong and profit from AI as it stands today, or we can help put the brakes on, slow things down, and take the time to work out how (or if it’s possible) to use this technology ethically.

Listen to the creators. They almost universally feel exploited by AI. Try to figure out why that is, and why our norms don’t account for that.

One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are still at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What strategies do employees have to make ethical problems more salient in the discussion?

In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
