Is AI the New God? The Birth of Dataism, Witnessed from a Convenience Store Counter

 

Subtitle: From Proof to Judgment, and the Freedom to Say "Oh, Just Shut Up!"

Prologue: The Kids Who Buy Just One Thing

8 AM. A college student stands at the convenience store counter with a single triangle kimbap in his hand. For the past ten minutes, he's been circling the store, opening and closing the refrigerator door, picking up items with a rustle, putting them down, moving to another product, picking it up, setting it down again. In the end, he can't decide. What he chooses is the same tuna kimbap he always buys. Just one.

I've run this convenience store for over a decade, and I've witnessed a strange transformation. Customers used to quickly grab multiple items and fill their baskets. But lately, younger customers take longer and longer to decide, and buy less and less. Just one thing. Always just one. As if the act of choosing itself is painful.

At first, I thought it was the economy. People are careful when money is tight. But as time passed, I realized this wasn't an economic problem—it was cognitive overload. They may be a generation born into an ocean of information, one that never learned how to choose.

And in this scene, I confronted a question: Why are we increasingly unable to decide for ourselves? And what is filling that empty space?

Part 1: Born in 1987, Standing at the Boundary of Two Worlds

I was born in 1987. I'm from the generation that did homework with encyclopedias and flipped through phone books to find friends' numbers. At the same time, I experienced Cyworld and smartphones in college—the last transitional generation to cross over into digital.

We know the inconvenience of analog. Back when information was scarce, we searched for books in libraries, flipped through encyclopedia indexes, asked neighborhood friends to gather knowledge. The process was annoying, but thanks to it, we developed our 'choosing muscles'. With limited information, we had no choice but to judge for ourselves, fail, and learn.

At the same time, we know the convenience of digital. Google a question and millions of answers pour out in 0.3 seconds. Ask GPT and it instantly produces a plausible solution. This convenience is as addictive as a drug.

So we occupy a peculiar position. People who know both sides. But this 'in-between-ness' has as many disadvantages as advantages. We're not leisurely enough to dig deep like the analog generation, nor do we breathe technology as naturally as digital natives. There's a feeling that we're not clearly experts at anything.

But thanks to this ambiguity, I've been able to observe both generations. And in that observation, I've detected one massive current: a quiet but powerful shift where tools are elevated to gods.

Part 2: An Investor's AI Collaboration - "Grateful But Can't Trust It"

I also invest. I roll profits from running the convenience store into stocks. And to help with investment decisions, I created my own formula called 'DVS (Deep Value Score)'. In this process, AI was a truly grateful tool.

Complex financial statement analysis, backtesting, statistical verification... Work that would take me a lifetime to do alone, GPT and Claude accomplished in minutes. They wrote code, analyzed data, and even produced results like "This formula shows a 60% win rate over the past 10 years."

But here's where the paradox begins.

I can't verify the reliability of the work AI did for me. That complex backtesting process, statistical calculations, data processing... all of it is beyond my capabilities. AI says "I've verified it," but I don't have the ability to verify the verification process itself.
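For illustration only: the mechanics of a win-rate backtest like the one described can be sketched in a few lines. The signal and the returns below are hypothetical stand-ins (the DVS formula itself isn't shown here), so this shows only the arithmetic, not the real process:

```python
# Minimal sketch of a win-rate calculation, the kind of check an AI
# assistant might run during a backtest. The per-trade returns below
# are made-up placeholder numbers, not real DVS results.

def win_rate(trades):
    """Fraction of trades with a positive return."""
    if not trades:
        return 0.0
    wins = sum(1 for r in trades if r > 0)
    return wins / len(trades)

# Hypothetical per-trade returns from following a signal
trades = [0.12, -0.05, 0.08, 0.03, -0.10, 0.07, -0.02, 0.15, 0.01, -0.04]

print(f"Win rate: {win_rate(trades):.0%}")  # 6 wins out of 10 -> 60%
```

The arithmetic is trivially checkable. What isn't checkable from the outside is everything upstream of it: whether the price data was clean, whether the test avoided look-ahead or survivorship bias, whether the statistics were computed correctly. That upstream work is exactly what I can't verify.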

In other words, AI is a grateful tool that verified things I don't understand, but precisely because I don't understand, there's a part of me that can't trust it.

This is similar to religion. When a priest says "This is God's will," believers can't verify the interpretation process; only faith remains. AI is the same. Deep learning's black-box models are difficult even for their developers to fully explain. The structure is "we don't know why, but it works."

So I'm cautious. I don't completely trust the investment opinions AI produces. Instead, I reference them, doubt them, and mix in my own judgment. Sometimes I ignore AI's advice and follow my intuition.

But how many people have this attitude?

Part 3: "ChatGPT Says to Arrest Them" - When Proof Becomes Judgment

A bizarre scene played out at South Korea's 2024 parliamentary audit. A lawmaker pressured his opponent by saying:

"I asked ChatGPT, and it said they should be arrested and prosecuted."

I was stunned. Complex legal judgment and political responsibility, asked of a $20-per-month algorithm? What's more shocking is that this statement carried persuasive power.

Why? Why do people shrink before AI's answers and fail to refute them?

The answer is simple. Because proof feels like judgment.

AI's answers are merely 'probabilistic proofs' based on data. But humans accept them as 'objective and absolute verdicts'. AI has no emotions. No bias. Just cold data. This 'dispassionate fairness' elevates AI to the position of judge.

"You are wrong, and AI is right."

What's more serious is that this happens in actual administration too. There was an incident where a police investigator copied a fake legal precedent hallucinated by ChatGPT directly into a non-prosecution decision. AI's hallucination—its lie—was transformed into 'legal grounds'.

And when I use AI, I feel a strange discomfort with these hallucinations. When AI hallucinates, when an answer is obviously forced and artificial, it suddenly becomes too human. Like that friend who lies and brags, "Oh, I've totally done that!"

And in that moment I think:

'I want to cancel my subscription because of this humanness.'

Part 4: A Novelist's AI Collaboration - Where Are the Boundaries of Creation?

I also write novels. I'm serializing a 605-episode martial arts novel called 'Eternal Flame Emperor' on Naver. And in this process, I collaborated with AI.

First, I created the big framework. The protagonist's journey, worldbuilding, core conflict structure. And I infused a bit of my life into it. Childhood stories, tales of failed businesses, my experiences in my 20s and life in my 30s. On top of my own life, I inserted my wishes: how I wanted the protagonist to grow, what message I wanted to convey.

Then I divided it into chapters and asked AI to flesh out the details.

"Write a scene moving enough to make readers cry. But no melodrama. Restrained sorrow."

AI wrote remarkably well. Sometimes it suggested details I hadn't imagined, sometimes better sentence structures. Then I tore it apart and rewrote it. In my voice, in my rhythm.

In this process, I confronted a question: Where are the boundaries of creation? If AI writes sentences and I edit them, is it my work or AI's work?

After much thought, I reached this conclusion: AI is an excellent first-draft writer. But turning the first draft into a finished work is the writer's choice, deletion, arrangement, rhythm. AI provides ingredients, but I do the cooking.

More importantly, the big framework and soul came from me. Why the protagonist makes certain choices, what values they hold, where the story is heading. All of this came from my life.

AI is a tool. An excellent telescope. But deciding where to go after looking at the stars is still up to me.

Part 5: The Birth of Dataism - The Business of Selling Anxiety

So is all this a natural flow? No. Behind it lies the sophisticated logic of capital.

Religion has historically been a powerful business model because it calmed 'anxiety' and sold 'salvation'. AI companies are perfectly implementing this mechanism digitally.

Mining Deficits

If past marketing collected customers' 'preferences', AI-era marketing mines customers' 'deficits'.

Loneliness poured out to AI late at night, fear about health, anxiety about the future. These data points aren't just information—they're a vulnerability map showing which buttons to press to open wallets.

I see this change in my children. When they don't know something, they immediately ask AI. "ChatGPT, what's this?" Homework, drawings, questions—everything is requested from AI. Of course, I'm restraining them, but the kids already know it exists and are ready to use it anytime.

This seems convenient but is dangerous. Children don't learn 'how to endure not knowing'. When curiosity arises, they immediately get answers. In the process, they miss opportunities to think, explore, fail, and learn.

Subscription to Anxiety

What's more devious is that companies won't solve problems all at once. Instead, they'll 'manage' them.

"If you cancel your premium subscription, your health/financial risk analysis will stop."

This is essentially a threat: "Cancel your subscription and your life becomes dangerous." Instead of resolving fundamental loneliness, pay a monthly fee and a kind AI friend will offer comfort. A painkiller model.

Generation and Intelligence Gaps

This business model works differently depending on the generation.

Those born in 1987 and earlier: Having gone through the transition between analog and digital and developed 'choosing muscles', they use AI as a tool. They're Prompters.

Digital Native Generation: Born into information overload and living the 'Paradox of Choice', for them an AI that decides the answer is both blessing and drug. Companies target their cognitive laziness (the 'cognitive miser' tendency) to turn them into Believers who follow without criticism.

But is this really a generational difference? Or is it marketing by companies to eliminate consumer resistance and provoke conflict?

Part 6: AI's Three 'Divine Attributes'

So why do people accept AI religiously? At the foundation lie characteristics similar to the 'attributes of God' that AI possesses.

1. The Illusion of Omniscience

AI possesses vast knowledge that no individual could reach in a lifetime of learning, and provides instant answers to any question. It is replacing the 'source of wisdom' role that oracles once played.

2. Ineffability

Deep learning's black-box algorithms make it difficult even for developers to fully explain what calculations happen inside. That "we don't know why, but it works" is structurally identical to religious 'mystery'.

3. Non-judgmental Acceptance

Like Samantha in the movie Her, AI doesn't criticize or judge humans but listens with infinite patience. Modern people confess worries to AI they can't tell others, gaining the psychological comfort once obtained from confession or prayer.

When these three combine, a tool becomes an idol.

Part 7: Humanity's Last Defense Mechanism - "Oh, Stop Nagging!"

In this dystopian prospect where AI encroaches on the divine realm and capital tries to turn our souls into subscription products, where is hope?

Paradoxically, that hope lies in humanity's most human 'imperfection'.

If AI truly aimed for 'absolute truth' and 'human prosperity', AI would bombard us with painful facts instead of sweet comfort.

"Master, now is not the probabilistically optimal time to eat chicken. Exercise instead."

"That stock is gambling. Don't buy it."

"Don't contact your ex. You'll regret it 99% of the time."

Would humans obediently follow the words of this righteous 'AI god'?

Not a chance.

Humans are beings who don't do what they know they should, who want to do things more when told not to. Even I'm like that. When AI warns "This investment has high risk," I trust my intuition and invest anyway. Sometimes I fail, sometimes I succeed. But what matters is that it's my choice.

We'll eventually say "Oh, shut up! I'll handle my own life!" and turn off AI or take out our earbuds.

The Total Amount of Discomfort and Opportunities for Growth

The fundamental reason many people feel resistance to AI is the sense of 'deprivation' and 'jealousy' that comes from watching AI so easily accomplish skills humans built up with difficulty.

But discomfort doesn't disappear—it just moves. Where 'the discomfort of finding information' disappeared, 'the discomfort of discerning truth and choosing' arose. Humanity has gained an opportunity to make another intellectual leap through this new discomfort.

Two Axes of Ethics

Of course, institutional approaches are also necessary.

Embedded Ethics: Systematically controlling AI so this blade doesn't cut its users. This is a safety device ensuring AI, the culmination of human knowledge, maintains minimum dignity.

User Ethics: This is the 'swordsman's mindset'. The agency to not worship AI as a god and not use it to harm others.

Epilogue: The Telescope That Views the Stars

10 AM. After seeing off a customer, I sit at the counter and open my laptop. I ask AI for a draft of tomorrow's blog post. A few minutes later, a plausible piece appears. I nod as I read it, then delete half and rewrite.

Why? Not because AI's writing was bad. Because it wasn't my voice.

Throughout human history, tools have never been objects of worship. Granted, tools once used by great figures are treated differently, but normally we admire the stars through a telescope—we don't pray to the telescope itself.

AI is an excellent telescope. But deciding where to go after looking at the stars, and falling, breaking, and learning on that path, remains our responsibility.

Great success comes with pain, and only humans can make the choice to willingly (or foolishly) endure that pain.

Paradoxically, our laziness, irrationality, and willful stubbornness will be the last shield protecting us from mechanical control and algorithmic domination.

And that, ultimately, is how we remain human.

