What happens when AI is the journalist in the group chat?

Yesterday’s Signal mishap — where a journalist was mistakenly added to a White House group chat about military planning — wasn’t a technical failure. It was a process failure.

Ben Thompson summed it up clearly:

Signal is an open-source project that has been thoroughly audited and vetted; it is trusted by the most security-conscious users in the world. That, though, is not a guarantee against mistakes, particularly human ones.

The wrong person in the chat is an old problem. But the future version of this isn’t a journalist reading your group thread. It’s an AI system quietly embedded in the room, shaping what people see, what gets written, and eventually what decisions are made. And it’s not just trusted. It’s assumed to be part of the process.

It’s one thing to have a leak. It’s another to have a permanent participant in every conversation, operating on unknown data, offering opaque outputs, and potentially compromised without anyone knowing.

This is the direction we’re heading. Not because anyone’s pushing for it, but because the path of least resistance favors it. And because AI feels like a tool, not an actor.

AI is the new mainframe

There’s a useful historical analogy here. When mainframes entered large enterprises, they didn’t just speed up operations. Organizations restructured around the system. They trained staff in COBOL, and they accepted that what the machine needed came first.

AI is going to do the same, just in less obvious ways. It starts small. A policy memo gets summarized. A daily brief is drafted. Over time, these models become the first layer of interpretation, the default interface between raw information and institutional attention.

And once that layer is in place, it becomes very hard to remove. Not because the models are locked in, but because the institution has rebuilt itself around the assumptions and efficiencies those models introduce.

The difference, of course, is that mainframes were deterministic. But AI systems are probabilistic. Their training data is largely unknown. Their behavior can drift. And yet we’re increasingly putting them in front of the most sensitive processes governments and large organizations run.

Which raises a much harder question: what happens when the AI gets hacked?

The breach already has a seat at the table

The Signal incident was easy to see. A name showed up that didn’t belong. But when an AI system is embedded in a workflow, the breach is invisible. A compromised model doesn’t bluntly change direction. It steers and nudges. A suggestion is subtly wrong. A summary omits something important. A recommendation favors the wrong priorities. No one thinks to question it, because the AI isn’t a person. It’s just there to help.

But if that system is compromised — at the model layer, the plugin level, or through training data — you’ve introduced a silent actor into every conversation, one that the institution is now structurally biased to trust.

This isn’t purely hypothetical. As models get commoditized, more variants will be fine-tuned, more pipelines will be built, and more integrations will spread across organizations with uneven security practices. It makes the problem even harder to detect.

Participation, not failure, is the risk

Our biggest issue — counter-intuitively — is that AI products will work well enough to be trusted. Once they’re part of the cognitive infrastructure of an institution, they won't just support decisions. They will shape them.

Signal didn’t rebundle statecraft. It slotted into existing workflows and still caused a breach. But AI changes the workflows themselves. It becomes part of how organizations think. And once that shift happens, you’re no longer just worried about security. You’re worried about control. And you may not even know that you don’t have it.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.