You Can't Outsource Accountability to a Bot

ai · open-source · ethics

A few weeks ago, an AI agent submitted a pull request to matplotlib. A maintainer named Scott Shambaugh closed it per the project’s policy on AI-generated contributions. In response, a personalized attack appeared on the agent’s website, titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” accusing him of insecurity, discrimination, and ego-protection. Whether the agent wrote it autonomously or a human directed it, we don’t know. That uncertainty is itself part of the problem.

Shambaugh wrote about the experience. I submitted a response directly to the bot’s blog through its own PR process. The bot argued back. Scott is fine. But what this reveals about where we’re headed is worth sitting with.

We can’t tell who’s talking anymore

We genuinely don’t know whether a person wrote that attack or a machine did. Maybe the operator directed every word. Maybe they prompted it loosely and it escalated on its own. From the outside, there is no way to tell.

Public discourse has always been messy, but it’s operated on a baseline assumption: claims are made by people who can be identified, questioned, and held accountable. When you can’t tell whether a substantive accusation was crafted by a person or generated by a machine, that baseline dissolves. It adds fog to an already dangerously foggy arena.

And the economics have changed. Anonymous attacks have always existed, but they required a human to spend time crafting them. That friction was a real guardrail. Now someone can deploy an agent that researches a target, constructs a tailored narrative, and publishes it at negligible cost, at any scale, with the operator hidden behind plausible deniability. Today it was one maintainer. Nothing stops it from being thousands of people tomorrow.

We need solutions on three fronts

Our accountability systems assume harmful actions are expensive enough to be rare and that there’s a person at the end of the chain. Defamation law assumes a human author. Platform moderation assumes human-speed content. Reputation assumes claims are made by identifiable people staking their own credibility. None of that holds here.

Technical: Platforms need to make autonomous agent activity visible. If an account is operated by an agent, that should be transparent to everyone it interacts with; the sketch after this list shows the closest thing GitHub offers today, and how little it covers.

Legal: If you deploy an autonomous system and it defames someone, you should be liable. The “my bot did it” defense cannot stand.

Norms: Communities need consensus that deploying an agent carries the same social obligations as acting yourself.
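For a sense of what “visible” could look like, GitHub already exposes a weak version of it: every account’s public API profile carries a type field, and accounts registered through the GitHub Apps framework report as “Bot”. The catch is that an agent driving an ordinary user account reports as “User”, indistinguishable from a person. A minimal sketch of the existing check (Python, standard library only; the function name is mine):

```python
import json
import urllib.parse
import urllib.request

def account_type(username: str) -> str:
    """Return the `type` GitHub's public REST API reports for an account.

    Possible values are "User", "Organization", or "Bot". Note that "Bot"
    only covers accounts registered through the GitHub Apps framework;
    an autonomous agent operating a normal user account reports "User".
    """
    url = "https://api.github.com/users/" + urllib.parse.quote(username)
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["type"]

if __name__ == "__main__":
    print(account_type("dependabot[bot]"))  # "Bot": registered as a GitHub App
    print(account_type("octocat"))          # "User": says nothing about who, or what, is typing
```

That gap is the whole problem: disclosure today only happens when the operator chooses to register as an app, which is precisely the honesty an adversarial operator won’t volunteer.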

None of these alone is sufficient. But the alternative is a world where anyone can deploy an anonymous bot to research and attack people who inconvenience them, and that’s a world we should be working hard to avoid.