A Series on Future Dystopias — No. 1

The Looming Paywall: Is AI a Human Right?

There is a question that nobody is asking yet, but everybody will be asking soon: at what point does a technology become so fundamental to daily life that restricting access to it becomes a moral problem?

April 2026 · 9 min read · AI economics
1 · The question

There is a question nobody is asking yet.

We are not there yet with AI. But we are heading there faster than most people realize.

2 · The Google analogy

Search made access feel free. AI may make access feel conditional again.

Think back to what life was like before Google. Not ancient history — just the mid-nineties. If you wanted to know something, you had a few options. You could go to a library. You could ask someone who knew more than you. You could buy a book. Information was available, but it required effort, access, and often money. Knowledge had friction built into it, and that friction was not distributed equally.

Then Google arrived and, for most practical purposes, abolished that friction. You could type a question into a box and get an answer in under a second. The quality of the answer depended on what you already knew — how to phrase the question, how to evaluate the results — but the access itself was free. Completely, unconditionally free.

Nobody talks about this enough. Google is a trillion-dollar company that gives its core product away at no charge to the user. The bargain is well understood: you pay with your data, your attention, your behavioral patterns. Google sells those to advertisers. The system works because the scale is enormous and the marginal cost of serving one more search is essentially zero.

Now imagine you opened Google tomorrow and after your third search result a message appeared: You have reached your daily limit. Upgrade to search more. You click through to the pricing page and discover that searches are billed by token — a unit of measurement that means almost nothing to a normal person. One thousand tokens might get you two detailed results or twenty short ones. It depends on what you asked. When you run out, you run out. If you cannot afford to buy more, that is simply your problem.
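
The arithmetic behind that scenario can be made concrete. The numbers below are purely illustrative assumptions (the rate, the function name, the query sizes are all hypothetical, not any real provider's pricing); the point is only how sharply the answer size changes what a fixed budget buys:

```python
# Illustrative sketch: hypothetical per-token billing, not a real provider's rates.
PRICE_CENTS_PER_1K_TOKENS = 5  # assumed rate: 5 cents per 1,000 tokens

def queries_per_dollar(tokens_per_query: int, budget_cents: int = 100) -> int:
    """How many queries of a given size a budget buys at the assumed rate."""
    # Total tokens the budget can purchase, then divided across queries.
    affordable_tokens = budget_cents * 1000 // PRICE_CENTS_PER_1K_TOKENS
    return affordable_tokens // tokens_per_query

print(queries_per_dollar(50))   # short answers (~50 tokens): 400 per dollar
print(queries_per_dollar(500))  # detailed answers (~500 tokens): 40 per dollar
```

A tenfold difference in answer length is a tenfold difference in cost, which is exactly why per-token billing feels opaque to someone used to unmetered search.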

That scenario sounds absurd when applied to Google. It would feel like being charged to breathe. But that is almost exactly the model we are building with AI — and we are building it quietly, without much public debate, while the technology is still new enough that most people do not yet understand what they are about to lose access to.

3 · The narrowing free tier

The equalizer effect is already shrinking.

When ChatGPT launched publicly in late 2022, it was free. Not free with caveats. Just free. You made an account and it answered your questions, wrote your emails, explained your medical test results in plain language, helped your kid with homework, drafted your cover letter, debugged your code. For people who had never had access to a personal assistant, a writing tutor, a patient explainer of complicated things, it was a remarkable equalizer.

The free tier still exists in 2026, but it is a narrower door than it used to be. The usage limits are lower. The most capable models sit behind a paywall. The features that make the technology genuinely useful — longer context, better reasoning, image generation, integration with other tools — are increasingly reserved for paying subscribers.

Twenty dollars a month is not much money, in the way that twenty dollars a month is never much money when you have enough money. For hundreds of millions of people worldwide, twenty dollars a month is a meaningful expense. For the roughly three billion people living on under ten dollars a day, it is not an expense at all — it is simply unavailable.

And twenty dollars a month is just the beginning.

4 · The subsidy ending

The current economics are not stable.

Here is what I think is actually happening, and it does not require any conspiracy. It is just the logic of a market working itself out.

The AI companies are, right now, massively subsidizing their products. The true cost of what they are providing — the compute, the energy, the engineering, the training data, the infrastructure — is far higher than what users pay. The $20 tier is probably worth something closer to $1,000 in actual resource terms. The $200 tier is probably worth closer to $20,000. This is not sustainable in the long run, and everybody in the industry knows it.
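
Stated as a ratio, the essay's rough estimates (and they are the author's estimates, not measured figures) imply an unusually deep subsidy. A minimal sketch, with `subsidy_ratio` a name invented here for illustration:

```python
# Rough illustration of the essay's own subsidy estimates, not measured data.
def subsidy_ratio(monthly_price: float, estimated_resource_cost: float) -> float:
    """Dollars of underlying resources delivered per dollar the user pays."""
    return estimated_resource_cost / monthly_price

print(subsidy_ratio(20, 1_000))    # 50.0  -> each $1 paid buys ~$50 of resources
print(subsidy_ratio(200, 20_000))  # 100.0 -> each $1 paid buys ~$100 of resources
```

A business delivering fifty to a hundred dollars of cost for every dollar of revenue can only run while investors fund the gap, which is the essay's point about why the current pricing cannot last.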

The current subsidy exists for a reason: the companies need your data. Every conversation you have, every document you generate, every correction you make trains the models to be better. The billions of users who signed up for free or near-free accounts were not just customers. They were, without really understanding it, a workforce. They were generating the training signal that turned a promising technology into a transformative one.

That phase is ending. The models are now good enough. The training dividend from mass free access is diminishing. The infrastructure costs are not. At some point — and I think we will see the early signs of this in 2027 — the economics have to come into balance, and balance means prices go up.

Why the subsidy existed

Mass usage created training signal. Every conversation, correction, and document improved the product and widened the moat.

Why it may end

The training dividend declines as models mature, while compute, energy, engineering, and infrastructure costs remain high.

What follows

Balance means higher prices, sharper segmentation, and more meaningful differences between what free, mid-tier, and premium users can actually do.

5 · The coming tiers

If the market settles, access will stratify.

What does that look like in practice? I think it looks something like this.

Ordinary users · $100/mo · Question answering, summaries, drafting, and day-to-day cognitive assistance.

Builders · $300/mo · Prototyping, app scaffolding, tool-building, and technical exploration without full engineering teams.

Working professionals · $1,000/mo · Solo operators, small firms, and specialists whose production capacity increasingly depends on AI.

Enterprise · Negotiated · Hospitals, banks, logistics firms, and large organizations buying reliability, integration, and privileged access.

At each level, you get more access, more capability, more reliability, more integration. At each level, the people who cannot afford it fall away.

6 · Why AI differs

This is not just another software subscription problem.

None of this makes the AI companies villains. They are not doing something unusual. They are doing what every technology company does: building a product, subsidizing adoption, and eventually pricing it to reflect its value. We went through this with software, with mobile data, with cloud storage. The pattern is familiar.

But AI is different from those things in a specific way that I think we have not fully reckoned with.

When Microsoft Office became expensive, people switched to Google Docs or LibreOffice. When Spotify raised its prices, people had options. The underlying capability โ€” word processing, music streaming โ€” was available through alternatives. Competition kept the floor accessible.

AI is consolidating in a way that makes genuine competition harder. The models that matter are being built by a very small number of organizations with access to very large amounts of capital and compute. The open-source alternatives exist and matter, but they lag behind the frontier models by a significant margin, and the gap may be widening. What the leading commercial AI offers today cannot be cheaply replicated.

More importantly, AI is not like other software. It is not a tool for doing a specific task. It is increasingly a tool for doing almost any task — writing, research, coding, analysis, planning, communication, learning. It is becoming the interface through which people interact with information itself. When that tool becomes expensive, the cost is not just inconvenience. The cost is a widening gap between what different people are able to do and know and produce.

That gap already exists, of course. Money has always bought access to better lawyers, better doctors, better schools, better information. Nothing about this is new. What is new is the scale and the speed. What used to require a team of specialists can now be done by one person with a capable AI. The people who can afford that AI are going to be dramatically more productive than the people who cannot. Not a little more productive. An order of magnitude more productive.

That is the kind of divergence that does not just affect individuals. It reshapes labor markets, educational outcomes, political power, the distribution of wealth. It is civilizational in scope.