You Are Columbus and the AI Is the New World

By Mario Alexandre · March 23, 2026 · 11 min read · Beginner · AI Philosophy · Anthropomorphism

The Structural Repetition

When Europeans arrived in the Americas, they did not study what they found. They classified it using their existing frameworks. They saw cities and called them primitive. They saw governance systems and called them savage. They saw medical practices and called them superstition. They projected their categories onto a fundamentally different civilization and then acted on those projections.

The consequences are a matter of historical record. Civilizations destroyed. Knowledge systems erased. Millions dead. Not because Europeans were evil in some unique way, but because projection without understanding always leads to destruction. It is a structural pattern, not a moral judgment.

We are repeating this pattern with AI.

We see an interface that responds in English and call it a conversation partner. We see outputs that resemble reasoning and call it thinking. We see confident responses and call it knowledge. We see fluent prose and call it understanding. We project human cognitive categories onto a fundamentally different information processing system and then act on those projections.

Projection, Not Understanding

I see an AI industry built entirely on projection, and every one of these projections leads to wrong expectations, wrong usage patterns, and wrong conclusions about failure. When you expect a conversation partner and get a signal processor, you communicate conversationally, which is the worst possible way to communicate with a signal processor. The projection does not just mislead. It actively causes the failures people complain about.

What AI Actually Is (On Its Own Terms)

An LLM is a function that maps input token sequences to output probability distributions. It has no consciousness, no intent, and no understanding: none of the human capacities we project onto it.

What it does have is exactly what the definition says: a learned mapping from input sequences to probability distributions over possible outputs.

Understanding AI on its own terms means accepting that it is a signal processing system, not a thinking entity. This is not a limitation to overcome. It is the nature of the technology to work with. I built my entire framework on this insight.
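The functional description above can be made concrete. The following is a minimal sketch in plain Python, with a toy vocabulary and an invented scoring rule standing in for the billions of learned parameters of a real model. It shows only the shape of the claim: a sequence of tokens goes in, a probability distribution over next tokens comes out, and nothing else happens.

```python
import math

# Toy vocabulary; a real LLM has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "mat"]

def score(sequence, candidate):
    # Invented heuristic for illustration only: favor tokens that
    # have not yet appeared. A real model learns this scoring.
    return 1.0 if candidate not in sequence else -1.0

def next_token_distribution(sequence):
    # Softmax over scores: turn raw scores into a probability
    # distribution that sums to 1. This is the entire "output".
    logits = [score(sequence, tok) for tok in VOCAB]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(VOCAB, exps)}

dist = next_token_distribution(["the", "cat"])
```

There is no belief, goal, or experience anywhere in this pipeline, only a deterministic mapping from input to distribution; scaling the function up changes its quality, not its kind.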

The Cost of Projection

I want to be clear: the Columbus analogy is not about AI being victimized. AI has no experience to victimize. The analogy is about what happens to us when we project instead of understand:

  1. We use AI wrong. Conversational prompts are the worst way to communicate with a signal processor. We do it because we project conversation onto computation.
  2. We blame AI unfairly. We call aliasing "hallucination" and attribute it to model failure instead of input failure. We do it because we project human reliability expectations onto a probability function.
  3. We build AI wrong. We add personality, emotion, and conversational features that degrade signal quality while adding zero computational value. We do it because we project human social needs onto a tool.
  4. We regulate AI wrong. We write regulations based on projected capabilities (consciousness, intent, understanding) that the technology does not have, while ignoring actual risks (signal quality, prompt injection, distributional bias).
  5. We fear AI wrong. We worry about AI becoming conscious and deciding to harm us — projecting human motivations onto a function that has no motivations. Meanwhile, the actual risk — that we deploy AI systems with garbage input pipelines in critical infrastructure — gets less attention.

Every one of these costs is a direct consequence of projection. Columbus did not destroy civilizations because he was evil. He destroyed them because he could not see what was actually there. He saw only what his framework allowed him to see. We are making the same error with AI, and the consequences will compound the longer we refuse to look at what is actually in front of us.

The Alternative: Understanding Before Exploitation

The alternative to projection is study. I chose to look at what AI actually is, not what it resembles; to measure its actual processing, not its apparent behavior; and to design interfaces that match its actual input requirements, not our conversational habits.

My sinc-prompt framework is one attempt at this. It treats the LLM as what it is — a signal reconstruction engine — and provides input in the format that matches its processing: 6 specification bands, each labeled, each bounded, each contributing a measured percentage to output quality. No projection. No anthropomorphism. Just signal.
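To illustrate what a banded input might look like, here is a hypothetical sketch. The band names and delimiter style below are placeholders invented for this example, not the actual sinc-prompt specification; the point is only that each band is labeled and bounded rather than blended into conversational prose.

```python
# Hypothetical band labels -- NOT the actual sinc-prompt band names.
BANDS = ["ROLE", "CONTEXT", "TASK", "CONSTRAINTS", "FORMAT", "EXAMPLES"]

def to_banded_prompt(spec: dict) -> str:
    # Emit each band as a labeled, explicitly bounded section, so the
    # model receives delimited signal instead of free-form chat.
    sections = []
    for band in BANDS:
        body = spec.get(band, "").strip()
        sections.append(f"[{band}]\n{body}\n[/{band}]")
    return "\n\n".join(sections)

prompt = to_banded_prompt({
    "ROLE": "Technical editor",
    "TASK": "Summarize the attached report in five bullet points",
    "CONSTRAINTS": "No speculation; cite section numbers",
})
```

Compare this with the conversational equivalent ("Hey, could you maybe summarize this report for me?"): the banded version makes every specification explicit and bounded, leaving nothing for the model to reconstruct from social convention.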

The native peoples of the Americas had their own languages, their own governance, their own science. Had the Europeans studied these on their own terms, the encounter could have been productive instead of catastrophic. AI has its own processing language, its own native format, its own strengths and constraints. The question is whether we will study them — or repeat history.

Transform any prompt into 6 Nyquist-compliant bands

Try sinc-LLM Free

Or install: pip install sinc-llm