Endesa's Access Disaster: When Non-Technical People Vibecode Production
In 2000, a Spanish utility giant billed its customers through a Microsoft Access database built by a non-technical employee and her husband. A year later, entire shopping centers hadn't been charged. The pattern repeats today—faster, and with tools that hide the damage better.
In 2000, the Spanish electricity market was about to be liberalized. Endesa—one of Europe’s largest utilities—was not ready. Their billing system could not handle the new regulatory framework. They had weeks, not months, to ship something.
So Endesa did what every cornered company does: they vibecoded it.
In 2000, vibecoding was done in Microsoft Access. And the person who built Endesa’s new billing system for the freshly liberalized market was not a software engineer. She was a non-technical employee on the team, working evenings with help from her husband.
Yes. Endesa. On Access. By a non-coder. Built at home.
It worked. For a while.
With few clients and a single user, the system ran fine. Invoices went out. Payments came in. Everyone congratulated the team on pulling off the impossible in a few weeks.
Then the company started adding real clients. The database corrupted. Locks froze the system. Queries took minutes. Instead of pausing to bring in an engineer, management doubled down: they migrated the whole thing to GuptaDB—an obscure database almost nobody in Spain knew how to administer. Rearranging deckchairs. The hull was still taking on water.
That is when they called me.
The problem was not scalability. That was only the visible layer. The real damage was in the data model. Entire shopping centers—clients paying six figures a month to Endesa—had not been billed for an entire year. Twelve months of unbilled electricity, consumed and delivered, sitting in a corrupt Access table that nobody fully understood.
Let that number sit with you for a moment.
A full year of unbilled commercial electricity.
This is the story I think about every time someone in 2026 tells me they are going to “just use Cursor” to automate an internal tool.
The Pattern Never Changed
Twenty-six years after Endesa’s Access disaster, the pattern repeats in companies across every industry. The tools got sharper. The mistakes got faster. The blast radius got bigger.
In 2026, when a non-technical person needs to automate something, they no longer reach for Microsoft Access. They reach for:
- Cursor with a connection to the production database
- Claude Code with AWS credentials on their laptop
- A custom agent they built over a weekend that reads Stripe, Salesforce, and Slack
- A Zapier workflow that has grown into a 200-step Frankenstein nobody maintains
- A Notion page of prompts that everyone copy-pastes into ChatGPT
And the same four things happen that happened at Endesa.
One. It works with small data. Ten clients, one user, a toy dataset. Everyone is impressed.
Two. It starts to crack under real load. Queries get slow. Edge cases appear. Integrations fail silently. The person who built it scrambles to patch.
Three. Management doubles down instead of bringing in an expert. “We have already invested so much. The new tool is almost working. Let’s migrate to GuptaDB / add vector search / switch to a different LLM.” The real problem is ignored.
Four. Something catastrophic is discovered months later. Unpaid invoices. Duplicated records. GDPR violations. Customer data leaked into training prompts. A compliance auditor asks a question nobody can answer.
Endesa’s year of unbilled shopping centers was discovered in 2001. Most modern companies will not discover their equivalent disaster until their first external audit, their first security incident, or their first angry enterprise customer.
Why Non-Technical People’s Vibecoding Fails In Production
There is a reason this pattern repeats across decades and tools. It is not that non-technical people are bad at their jobs. They are often brilliant at their jobs. The problem is that software engineering is an invisible discipline with invisible pitfalls.
Data modeling. The woman who built Endesa’s system wrote a perfectly reasonable one-customer, one-billing-address relationship. What she did not model—because she had no reason to know she needed to—was that a single physical shopping center could be one legal entity, split across multiple business units, renting to dozens of tenants, some of whom had their own sub-meters. Her Access schema could not express that. So when a shopping center came in with that kind of structure, the record looked “saved” but had no valid reference to a billable customer. The invoice never triggered. The invoice never failed either, because there was nothing to fail. Silent, year-long loss.
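To make the mechanism concrete, here is a minimal sketch (all names hypothetical) of how a billing run over a flat schema drops revenue without ever raising an error:

```typescript
// Hypothetical flat schema: one customer, one billing address, one meter owner.
type Customer = { id: string; name: string; billingAddress: string };
type Meter = { id: string; customerId: string | null; kwhDelivered: number };

// The billing run joins meters to customers. Anything the schema could not
// represent is skipped with no exception, no log line, and no failed job.
function runBillingCycle(customers: Customer[], meters: Meter[]): string[] {
  const byId = new Map(customers.map((c): [string, Customer] => [c.id, c]));
  const invoices: string[] = [];
  for (const meter of meters) {
    const owner = meter.customerId ? byId.get(meter.customerId) : undefined;
    if (!owner) continue; // the revenue simply vanishes here
    invoices.push(`${owner.name}: ${meter.kwhDelivered} kWh`);
  }
  return invoices;
}
```

A shopping center imported as one legal entity with dozens of sub-metered tenants ends up as meters with no billable owner. The cycle still completes without complaint, month after month.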
A Product Engineer would have modeled it correctly on day one. Not because they are smarter. Because they have seen the shapes customer relationships actually take in production, and they know that “one customer, one address” is the exception, not the rule.
Concurrency and state. Vibecoded systems almost always assume one user at a time. They work beautifully in single-user mode. They break the moment two people edit the same record. They break worse when a background process reads data mid-update. Microsoft Access in 2000 exposed this. Modern AI-coded systems hide it, which is worse: the bug still exists, but now it surfaces intermittently in production, six months after you deployed, in a way that looks like “the system is flaky.”
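The classic lost-update race, sketched with hypothetical names and the timing exaggerated so it fires on every run:

```typescript
// Two concurrent "add a charge" operations. Both read the same starting
// balance, both write back their own total, and one charge is silently lost.
let balance = 100;

async function addCharge(amount: number): Promise<void> {
  const snapshot = balance;                    // read
  await new Promise((r) => setTimeout(r, 10)); // simulated I/O between read and write
  balance = snapshot + amount;                 // write clobbers the concurrent update
}

async function main(): Promise<void> {
  await Promise.all([addCharge(50), addCharge(30)]);
  console.log(balance); // 130 or 150 depending on timing, never the correct 180
}

main();
```

In single-user mode the race never fires; that is exactly why the system “works” in the demo and looks “flaky” in production.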
Security. A non-technical person building with AI has zero hope of knowing where their SQL injection risk is, how their OAuth tokens are stored, whether their API keys leaked into a public GitHub repo, whether their agent is vulnerable to prompt injection, or whether their customer data is flowing to third-party LLM providers without a data processing agreement. These are not hypothetical risks. They are the default state of an AI-coded system with nobody technical watching.
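One concrete default: the string-interpolated query that looks clean and passes every demo. A hypothetical sketch, with `db.query` standing in for whatever client the generated code uses:

```typescript
// Vulnerable: user input is interpolated straight into the SQL string, so
// name = "x'; DROP TABLE customers; --" runs as a command, not a value.
async function findCustomer(
  db: { query: (sql: string, params?: unknown[]) => Promise<unknown[]> },
  name: string
): Promise<unknown[]> {
  return db.query(`SELECT * FROM customers WHERE name = '${name}'`);
}

// The fix is one parameterized query, if someone knows to look for it:
// return db.query("SELECT * FROM customers WHERE name = $1", [name]);
```

The vulnerable version returns the right rows in every test the builder will ever think to run.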
The “it works” delusion. This is the deepest trap. When a system returns the right answer on a Tuesday afternoon demo, everyone assumes it is working. Systems do not work or not work. They work under specific conditions. Change the conditions—more users, different data, network failure, backup/restore—and “working” becomes “broken” with no warning.
Endesa’s billing “worked” for the first six months. It never once returned an error. It was silently dropping a massive fraction of its revenue.
Why AI Makes This Worse, Not Better
Here is the counterintuitive part. AI does not fix the non-technical vibecoding problem. It amplifies it.
Microsoft Access in 2000 had one mercy: the code it generated looked obviously amateur. Ugly SQL, weird macros, funky VBA. Anyone walking by could see something was off.
Cursor and Claude Code in 2026 produce code that looks professional. Clean TypeScript. Properly structured. Good variable names. Decent tests. A senior engineer could review the same file and not immediately notice anything wrong, because the surface is polished. The rot is underneath: a data model that falls apart at scale, auth that would fail a five-minute security review, an integration that costs €12,000 in hidden API fees per month because nobody implemented caching.
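That last failure is usually this mundane: a paid enrichment API called once per row, per run, with nothing in front of it. A hypothetical sketch of the missing layer:

```typescript
// Without this cache, a daily sync over 10,000 rows re-buys the same
// lookups every single day. Endpoint and names are hypothetical.
const cache = new Map<string, Promise<string>>();

async function enrichCompany(domain: string): Promise<string> {
  let pending = cache.get(domain);
  if (!pending) {
    pending = fetch(`https://api.example.com/enrich?domain=${domain}`)
      .then((res) => res.text());
    cache.set(domain, pending);
  }
  return pending;
}
```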
Polished rot is harder to detect than visible rot. Which means it survives longer. Which means when it finally explodes, more time has passed and more damage has accumulated.
AI gives non-technical people the aesthetic of competence without the substance. This is far more dangerous than the old status quo, where the same person’s output looked like amateur work and triggered appropriate skepticism.
The Escalation Trap
The Endesa story has a second failure mode that matters more than the first: the migration from Access to GuptaDB.
When the original system started breaking, the correct move was obvious in hindsight: bring in an engineer. Audit the data. Understand what was actually happening. Rebuild the foundation properly.
Management did not do that. They did what every company does when they have already sunk costs into a bad system: they tried to save the bad system by moving it to a better tool. Access failed? Let’s migrate to GuptaDB. That will fix it.
It did not fix it. Because the problem was never the tool. The problem was the absence of engineering judgment at the moment the system was designed.
In 2026, the modern version of this mistake is everywhere:
- “The Zapier workflow is getting slow. Let’s rewrite it in n8n.”
- “The Airtable is too limited. Let’s migrate to a real database.”
- “The agent is hallucinating. Let’s switch from GPT-4 to Claude.”
- “The codebase is spaghetti. Let’s rebuild it with Cursor and Claude Code from scratch.”
Each migration looks productive. Each migration delays the actual conversation, which is: we never had an engineer on this in the first place, and every further investment compounds the original mistake.
What Responsible Enablement Looks Like
None of this means non-technical people should be forbidden from using AI tools. The opposite is true. In 2026, denying your marketing team access to Cursor is the equivalent of denying them spreadsheets in 1995. You will lose productivity and talent.
The answer is not restriction. The answer is scaffolding.
The AI Enablement Engineer role exists for exactly this situation. When a marketing ops person wants to automate lead qualification, they should not be given raw database credentials and a prompt. They should be given a controlled MCP server that wraps the database with audit logging, row limits, and dry-run modes. They should be given skills that encode how lead qualification actually works at this company. They should be given a spec-driven workflow that forces them to describe what they are building before they build it.
The Three Layers apply more urgently to non-technical vibecoders than to engineers. Engineers at least know to be afraid of `DELETE FROM users`. The non-technical person has no idea. Layer 1 controls have to catch both.
What a responsible enablement stack looks like when your marketing ops person wants to “just automate this”:
- Controlled Execution. The marketing automation runs through an internal MCP that logs every call, enforces rate limits, and rejects any operation that would touch customer records without dry-run approval (a minimal sketch follows this list).
- Controlled Workflows. A skill called `/draft-campaign-automation` walks them through the process with the right guardrails. They cannot forget to check for GDPR implications. They cannot forget to test on staging first.
- Controlled Specification. Before any new automation ships, a one-pager describes what it does, what data it touches, and what the rollback is. A senior person signs off. If it is small and safe, the sign-off takes ten minutes.
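A minimal sketch of what the Controlled Execution wrapper can look like. Every name here is hypothetical, and a real version would live inside your MCP server rather than stand alone:

```typescript
type ToolCall = { tool: string; args: Record<string, unknown>; dryRun: boolean };

const MAX_ROWS = 500;
const MUTATING_TOOLS = new Set(["update_lead", "delete_record"]);

function guardedExecute(
  call: ToolCall,
  execute: (c: ToolCall) => unknown[]
): unknown[] {
  // Audit log: every call, every argument, timestamped.
  console.log(`[audit] ${new Date().toISOString()} ${call.tool}`, call.args);

  // Anything that mutates customer records must run as a dry run first.
  if (MUTATING_TOOLS.has(call.tool) && !call.dryRun) {
    throw new Error(`${call.tool} requires an approved dry run before execution`);
  }

  // Row limit: a runaway query fails loudly instead of exporting everything.
  const rows = execute(call);
  if (rows.length > MAX_ROWS) {
    throw new Error(`Result exceeds ${MAX_ROWS} rows; narrow the query`);
  }
  return rows;
}
```

Twenty lines of guardrail are the difference between handing your marketing team raw credentials and handing them a supervised tool.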
This is not bureaucracy. This is the minimum viable structure that prevents your company’s 2026 equivalent of a year of unbilled shopping centers.
The Question Your Company Should Be Asking
Endesa’s Access disaster took a year to surface. When it did, it required an external engineer, a forensic investigation of corrupt data, a schema rebuild, and a reconciliation against a year of delivered electricity. The cost ran into millions of euros. The company survived because they were Endesa, they were regulated, and they had deep pockets.
Most modern companies do not have any of those buffers.
Somewhere in your organization right now, one of the following is likely true:
- A marketing ops person has built a Zapier flow that writes to your production database and nobody technical has reviewed it
- A support team is running a ChatGPT-based automation that has access to full customer records and the logs live in someone’s personal account
- A finance team has a Cursor-generated Python script that processes invoices, stored on a shared Google Drive, with hardcoded credentials in comments
- A product manager has shipped a V0-prototyped tool to beta customers without anyone looking at the auth implementation
None of these feel like Endesa’s Access disaster. That is the point. The Access database did not feel like one in 2000 either. It felt like competent people getting things done with the tools they had.
The question is not whether your company has this problem. The question is how many months of silent unbilled shopping centers you are accumulating right now, and whether you are going to find them before your external auditor does.
The Closing Beat
In 2001, a year after the system shipped, Endesa called me. The damage was contained. The company learned the lesson—or at least, that one team did.
In 2026, the same pattern is running across thousands of companies simultaneously, at ten times the speed, with tools that hide their flaws behind a veneer of professional-looking output. The year of unbilled shopping centers is no longer the worst-case scenario. It is the best-case scenario, because at least Endesa’s damage was recoverable.
The AI-era version of this story has not been written yet. It is being written right now, in production systems across your company and mine, by people who do not know they are writing it.
Build the three layers. Hire the enablement engineer. Give your non-technical team the power of AI with the scaffolding to use it safely.
Because if Endesa happened with Microsoft Access, twenty-six years ago, without the multiplier of modern AI—imagine what is happening right now, on your stack, under your nose, that you will find out about in 2027.
The full catalog of vibecoding disasters — 12 patterns to audit against →
The three layers that make it safe →
Why AI accelerates every mistake →
The employee who vibecoded Endesa’s billing system was not the villain. She was solving an impossible problem with the tools she had. The villains were the managers who did not replace her with an engineer after the first signs of trouble. That mistake is cheaper than ever to avoid today. It is also easier than ever to make.
John Macias
Author of The Broken Telephone