humancode.us

Chat Control is Surveillance

October 6, 2025

Client-side scanning is so absurd, so dumb, so easily abused and bypassed by criminals, that the only practical targets that will fall under Chat Control are the masses of casual users who aren’t using encrypted services for anything nefarious.

People sending illicit content will quickly wise up and avoid endpoint software that performs monitoring. They can build their own clients from modified source code (after all, a server with a published API can’t verify that a client actually performed any kind of scanning), or they can add another layer of encryption, so that data sent through the endpoint software is already encrypted, relying on a meta-client to do the actual encryption and decryption.
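As a minimal sketch of that second approach (not any particular tool, just the shape of the idea): a hypothetical meta-client encrypts the message before any monitored app sees it, so a client-side scanner only ever handles ciphertext. This assumes the two parties have already exchanged a key out of band, and send_via_monitored_app is a stand-in for whatever scanned channel carries the bytes.

    # Sketch of a "meta-client" in Python, using the third-party
    # "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # Assumption: sender and recipient exchanged this key out of band,
    # never through the monitored channel.
    shared_key = Fernet.generate_key()
    cipher = Fernet(shared_key)

    def send_via_monitored_app(payload: bytes) -> None:
        # Hypothetical stand-in for any scanned messaging endpoint.
        # Client-side scanning sees only opaque ciphertext here.
        print(payload.decode())

    # Sender: the plaintext never touches the endpoint software.
    token = cipher.encrypt(b"meet at noon")
    send_via_monitored_app(token)

    # Recipient: decryption happens outside the monitored app.
    assert cipher.decrypt(token) == b"meet at noon"

No amount of mandated scanning inside the endpoint app can see past encryption applied before the app is ever involved.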

In short—like every attempt to bypass end-to-end encryption—Chat Control isn’t crime control; it’s surveillance.

Beware of people shouting “think of the children!” as they push surveillance on the masses, because they don’t actually care about the children at all.

https://www.eff.org/deeplinks/2025/09/chat-control-back-menu-eu-it-still-must-be-stopped-0

Chat Control Is Back on the Menu in the EU. It Still Must Be Stopped

The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” which would lead to the scanning of the private conversations of billions of people. EFF has strongly opposed Chat Control since it was first introduced in 2022.

A good way to think about automation and AI

October 2, 2025

When considering whether an automation is fit for a task, it’s worth thinking about it this way:

  • How often does it successfully do what it’s supposed to do, and how much convenience does it bring when it succeeds?
  • How often does it fail to do what it’s supposed to do, and how bad is the consequence of its failure?

The worse the consequence of failure, the lower the probability of failure has to be. When failures cost lives, a one-in-a-billion failure rate means 8 people will die if you run the automation once for every living person in the world.
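The arithmetic behind that claim, spelled out (assuming a world population of roughly eight billion):

    # Expected fatal failures at a one-in-a-billion failure rate,
    # running the automation once per living person.
    failure_rate = 1e-9
    population = 8_000_000_000  # assumed: roughly the world's population

    print(failure_rate * population)  # 8.0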

AI fails to do its job at a remarkably high rate, so it should only be used in cases where its failures are merely disappointing but otherwise inconsequential.

Putting AI into systems where its failures have catastrophic consequences (like targeting people for pre-crime surveillance, or denying people financial or civil benefits) is a gross misapplication of the technology.

The politeness trap

September 25, 2025

Politeness is a shield used to protect the violent.

Politeness is used to silence the exploited, to give an excuse to mute the shouts of the downtrodden. It’s a barrier erected so the powerful never have to hear the complaints of the powerless.

This is a corollary to Insist on good faith and Assume Good Faith.

Insist on good faith

September 13, 2025

During discourse, do not insist on civility, but on good faith.

It should be acceptable for a person who has suffered a grave injustice to shout in rage, and it should not be acceptable for a charlatan to mislead with eloquence.

This is a corollary to Assume Good Faith.

AI should not simulate real humans

September 6, 2025

I was reminded of the “Booby Trap” episode of Star Trek: TNG, where Geordi (accidentally) gets the computer to simulate Dr. Leah Brahms, a designer of the Enterprise’s warp engines, inside the holodeck to help him solve the crew’s conundrum. Geordi decides to imbue the simulation with (what the computer thinks is) her personality, and proceeds to fall in love with her simulation.

In a rare continuity play for TNG, the real Dr. Brahms appears in the next season. Geordi excitedly approaches her, expecting to develop a relationship that parallels his holodeck experience, but the real Dr. Brahms turns out to be unfriendly…and married. She later finds Geordi’s wildly inaccurate and romantic simulation of her, and is justifiably horrified at the invasion of privacy.

This episode aired in 1989, but was prescient. There is a strong desire today to use AI to simulate loved ones who have passed, or historical figures, or celebrities, or someone out of reach.

The ethics of simulating a real person with AI are exceedingly fraught, to say the least. At best, you get an inaccurate caricature that projects unfair stereotypes: a “chat with Abraham Lincoln” that delivers a grotesque, trope-filled simulation of a historical figure. At worst, you turn your ex’s likeness into a subservient puppet that says and does whatever you want.

The realism of the simulation is what makes this space so problematic today: when a product successfully crosses the uncanny valley, the user risks forgetting that they are interacting with a mere simulation of a person, and believing that they are interacting with the actual person. I think this crosses an ethical boundary.

I view AI products with suspicion, but I view AI products that simulate actual people with disdain. An AI that purports to let you “talk to your deceased loved ones” is a cruel and disrespectful parlor trick, and should be met with disgust.

Anyway, I love Star Trek.
