The bazaar that never sleeps: AI, throwaway software, and the death of 'normal'

What happens to security when anybody can spin up software on a Tuesday and delete it by Thursday?

In the mid-nineties, I spent evenings building websites and simple applications. Nothing fancy, mostly HTML and some clunky PHP/MySQL, but the feeling of turning an idea into something that actually worked was addictive. At some point, software development professionalised around me and that particular joy became harder to access without real training and focus. AI coding tools have given it back. Last Saturday I built a small tool to automate something that had been mildly annoying me for months. It took a couple of hours. I used it twice, then forgot about it. I suspect I am not alone in this - and the consequences are ones nobody is quite ready for.

For most of computing history, writing software was a specialised skill. Not quite a dark art, but certainly closer to craft than commodity. You needed to understand the tools, the environment, the trade-offs. This created a natural filter: only things worth the effort got built.

AI coding tools have gone from curiosity to core workflow for most developers in under three years. More importantly, they are starting to reach people who were never developers at all - the analyst who wants a custom data pipeline, the operations manager who needs a tool their ERP does not have, the team that is tired of waiting six months for IT to prioritise their request. The barrier between problem and solution has collapsed, and that is genuinely exciting. It is also, from a security perspective, quietly terrifying.

We have been here before - sort of

The closest parallel is not the printing press. It is the spreadsheet.

VisiCalc arrived in 1979; Lotus 1-2-3 followed, and Excel eventually won the war. Suddenly accountants and analysts had the power to build complex financial models without needing a programmer. The results were both brilliant and chaotic. Finance departments created elaborate, mission-critical spreadsheets that lived on individual laptops, had no version control, were understood by exactly one person, and occasionally contained errors that nobody noticed for years. 'Shadow IT' was not yet a term, but the thing it describes was born in those Excel files.

Blogging did something similar to media. Before it, publishing required infrastructure - printing presses, distribution networks, editorial structures. After it, anyone with an opinion and a Blogger account could reach an audience. Most of what got published was disposable. Some of it was transformative. The gatekeepers never fully recovered.

AI-generated software is the next iteration of this pattern: democratise a capability that used to require specialist skill, watch what happens when millions of people can suddenly do it, deal with the consequences later. The spreadsheet gave non-programmers superpowers and created the shadow IT problem. Blogging gave non-journalists superpowers and created the misinformation problem. AI coding tools are giving non-developers superpowers. The problem that follows is still taking shape.

From bazaar to ghost market

Eric Raymond's 1997 essay 'The Cathedral and the Bazaar'1 described two philosophies of software development. The cathedral: planned, structured, released when ready. The bazaar: open, chaotic, developed in public with constant iteration. The bazaar won in infrastructure and tooling - most of the internet's plumbing runs on open source. But enterprise software stayed firmly cathedral: controlled, versioned, run on someone else's server and released on someone else's schedule.

What we are moving into now is something stranger: a ghost market. Stalls appear overnight - a small tool to parse invoices, a script to generate reports, a bot to summarise Slack threads. They serve their purpose. Then they vanish, or they linger, unmaintained, in some corner of the organisation that nobody maps. Unlike the bazaar, which had at least some continuity, the ghost market is defined by constant turnover. Nothing stays long enough to be understood.

When everything is an anomaly, nothing is

Security operations, at their core, depend on understanding what normal looks like. You build a baseline - which systems talk to which, what data flows where, which users do what - and then you look for deviations. Anomaly detection only works when you have a stable model of 'not-anomalous' to compare against.

Throwaway software breaks this entirely. If every team can spin up new tooling on demand - tools that call APIs, move data, authenticate against internal systems - the baseline becomes meaningless. Every new tool looks like an anomaly. Every new data flow looks suspicious. Your detection logic cannot distinguish between 'legitimate new automation' and 'something has gone badly wrong', because both look identical.
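The baseline-and-deviation model can be sketched in a few lines of Python. This is a deliberately minimal illustration, not real detection logic, and the tool and API names are invented:

```python
def build_baseline(flows):
    """Record which (source, destination) pairs count as 'normal'."""
    return set(flows)

def detect_anomalies(baseline, observed):
    """Flag any flow not seen during the baseline window."""
    return [f for f in observed if f not in baseline]

# Week 1: a stable estate - the baseline captures it well.
week1 = [("crm", "billing-api"), ("hr-portal", "payroll-api")]
baseline = build_baseline(week1)

# Week 2: three teams ship throwaway tools; every new flow is flagged.
week2 = week1 + [
    ("invoice-parser-7f3", "erp-api"),   # built Tuesday, deleted Thursday
    ("slack-summary-bot", "slack-api"),
    ("sales-scraper", "crm"),
]
print(detect_anomalies(baseline, week2))
```

Every throwaway tool trips the detector, so alert volume scales with churn rather than with risk - which is exactly the failure mode described above.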

This is not a problem you solve with 'zero trust' as a slogan, or by reminding people to 'protect the data at its core'. Those principles remain sound, but they were designed for a world where software was deployed deliberately, reviewed by someone, and reasonably persistent. They need updating for a world where a marketing manager builds a data pipeline on a Thursday afternoon and nobody else knows it exists.

The more uncomfortable implication: the security tooling we have built assumes a relatively stable attack surface. If the attack surface is now dynamic by default - constantly expanding and contracting as individuals create and delete tooling - then the entire monitoring model needs rethinking. You cannot protect what you cannot see, and you cannot see what appears and disappears faster than your asset management can track it. And let's face it: most organisations were already losing that race before AI entered the picture.

Enterprise software brought this on itself

Here is the uncomfortable truth that the enterprise software industry would rather not discuss: the reason people are reaching for AI-generated throwaway tools is that the alternative is awful.

Enterprise software is, almost by definition, optimised for somebody else's use case. It is configurable, which is not the same as flexible. It can do a thousand things, which is not the same as doing the one thing you actually need. The gap between 'what the software does' and 'what I want it to do' is typically bridged by either a consultant, a six-month implementation project, or a workaround that involves exporting to Excel and doing the rest manually.

AI coding tools offer something enterprise software has never really managed: software that does exactly what you need, right now, at essentially no cost. The quality might be lower. The security posture is certainly worse. But the fit-to-purpose is, for many use cases, dramatically better. That is a trade-off many people will happily make, especially if the security consequences fall on someone else's team.

So what do we do with this?

I am not arguing against the democratisation of software creation. The joy of turning an idea into a working thing on a Saturday afternoon is real, and the potential to get software that actually fits what you need - rather than a generalised approximation of it - is genuinely valuable.

But the gap between 'individuals can now build anything' and 'organisations are ready for this' is wide. Security teams are still trying to map assets that were deployed deliberately, by professionals, through controlled processes. The ghost market has not fully arrived yet. When it does, monitoring an attack surface that changes hourly will require a fundamentally different model - less 'detect anomalies against a stable baseline', more 'understand intent and authorisation in real time'.
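One hypothetical shape for that 'intent and authorisation' model: instead of asking 'have we seen this flow before?', ask 'did someone authorise this tool for this scope?'. The sketch below assumes a tool registry that does not exist in any particular product; all names and scope strings are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Registration:
    owner: str          # the team accountable for the tool
    scopes: frozenset   # what it is authorised to touch

# Hypothetical registry populated when a tool is declared.
registry = {
    "invoice-parser-7f3": Registration(
        owner="finance-team",
        scopes=frozenset({"erp-api:read"}),
    ),
}

def is_authorised(tool, scope):
    """Authorisation, not familiarity: unknown or out-of-scope tools fail."""
    reg = registry.get(tool)
    return reg is not None and scope in reg.scopes

print(is_authorised("invoice-parser-7f3", "erp-api:read"))   # True
print(is_authorised("invoice-parser-7f3", "erp-api:write"))  # False: registered, wrong scope
print(is_authorised("sales-scraper", "crm:read"))            # False: never registered
```

The point of the sketch is the shift in question: a registered tool doing registered things is fine on day one, with no baseline needed, while an unregistered tool is a signal regardless of how normal its traffic looks.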

The spreadsheet created shadow IT. We spent twenty years building governance frameworks to manage it, and we are still not done. AI-generated throwaway software is the next version of that problem, arriving faster and at greater scale - and with a sustainability cost nobody has yet budgeted for. Every organisation that builds its own invoice parser is also, whether it realises it or not, maintaining its own invoice parser: patching it, updating it when an API changes, abandoning it when the person who built it moves on. Multiply that by a thousand throwaway tools across a mid-sized organisation and you have a problem that goes well beyond security.

The organisations that get ahead of it will be the ones that stop trying to prevent individuals from building things - that ship has sailed - and start building the infrastructure to make it safe when they do. That means technical controls, yes, but it also means investing in the human early warning network: the people who know their environment well enough to notice when something does not feel right. In a world where no baseline exists, that instinct may be your most reliable sensor.


1. Eric S. Raymond, 'The Cathedral and the Bazaar' (1997)

Cover photo by YOUSIF via Pexels