1.01^n

Why ChatGPT answers feel right but teach you nothing

There is a specific kind of confidence that comes from receiving a well-formatted answer. A numbered list. Clear headers. Code that compiles. It looks like understanding. It is not.

The feeling of knowing

When you ask ChatGPT why a database query is slow and it gives you a detailed explanation with index strategies and query plan analysis, something happens in your brain: the discomfort of not knowing disappears. The problem feels solved. You copy the suggestion, apply it, it works. Done.

But if someone asks you a week later -- why was that query slow? -- you will probably struggle. Not because you forgot, but because you never knew. You borrowed the answer without building the understanding.

This is not a character flaw. It is how the tool works. The model is optimized to produce answers that satisfy the person asking. Satisfaction and understanding are different things.

What understanding actually requires

Understanding a system requires making predictions about it, being wrong, and updating your model. It requires the friction of being stuck, the effort of finding your way out, and the memory of having done so.

When you debug something yourself -- really debug it, read the stack trace, form a hypothesis, test it, be wrong, try again -- you build something that does not decay quickly. You build a mental model that transfers to the next problem, and the one after that.

The answer you get from a model arrives without that friction. It skips the being-wrong part entirely.

The invisible gap

The dangerous thing about this gap is that it stays invisible for a long time. When the system is working, borrowed knowledge is indistinguishable from real knowledge. You can produce output. You can answer questions in meetings. You can build things.

The gap surfaces when something breaks in an unexpected way. When the model gives you two contradictory suggestions and you have to decide which is right. When there is no prompt that will get you out of the situation -- only judgment, built from experience.

Most people discover the gap at the worst possible time.

This is not an argument against the tools

The tools are useful. The question is what you use them for.

There is a difference between using a model to explain something you already mostly understand -- to fill in a detail, to get a second opinion -- and using it to skip the process of understanding entirely. The first builds on a foundation. The second builds on sand.

The engineers who will be most valuable in ten years are not the ones who learned to prompt well. They are the ones who used the tools without letting the tools do the learning for them.

Reading slowly. Thinking through problems before asking. Being comfortable with not knowing yet. These are not inefficiencies. They are the work.