The AI Revolution and the Future of Politics: Avoiding Life as an Information Serf

Are you a Humanist or a Transhumanist?


Part 1: My AI Paradox

This morning, I turned on AI again. Before organizing my store, I asked it to research something, and it came back with all the relevant information. When I had questions, I asked them. When I asked for help with things I couldn't do before, brilliant results appeared like magic.

I experience amplified abilities every day thanks to AI. It's as if 100 experts reside in my brain. Even without strong English skills, I can write long texts in English. Without being a scientist, I can research and learn about gravity. I cannot deny this convenience.

Yet even as I enjoy this convenience, I see a YouTube thumbnail in the corner of my monitor: "90% Reduction in New Hires Due to AI Adoption." I even hear that celebrities don't personally shoot commercials anymore. They get a call asking to "lend their face," and if they agree, AI analyzes the celebrity's face and creates the advertisement.

I use AI while constantly confronting the uncomfortable future it will create. This contradictory emotion isn't mine alone. You're probably experiencing something similar—using ChatGPT while worrying that your job might disappear someday.

In this paradox, I began asking questions. "Where are we going?" More precisely, "Where should we go?"

I've had countless conversations imagining the future with AI. Debating, refuting, questioning again. Through this process, I realized: this isn't just a technological revolution. The fundamental structure of politics will change.

The era of dividing the world into conservative vs. progressive, left vs. right, is over. That distinction becomes meaningless in the face of AI. Now the real question is:

"What kind of humans will we be alongside AI?"

This essay began with my worries about the future, but perhaps it will also become an opportunity for you to gain some insights of your own.


Part 2: Three Fronts Where AI Changes Everything

Front 1: The One-Person Unicorn Doesn't Exist Yet, But the World Has Already Changed

In June 2025, Base44, a service created by a solo developer named Maor Shlomo, was acquired by Wix for $80 million. He built it almost entirely with AI. Using ChatGPT and the AI coding tool Cursor, he gathered 300,000 users in six months and reached $3.5 million in annual revenue. Alone.

The "one-person unicorn"—someone who single-handedly builds a company worth over $1 billion—hasn't appeared yet. But OpenAI's Sam Altman predicted it would emerge in 2026. He said: "1 person + 10,000 GPUs = 100-person team."

What does this mean? In the past, building a startup required developers, designers, marketers, and operations staff. You needed investment, an office, and salaries. But now? AI handles all those roles.

Pieter Levels, a solo developer, runs a SaaS portfolio earning hundreds of thousands of dollars annually. The AI coding tool Cursor, created by a 4-person MIT team, achieved a $9.9 billion valuation. They didn't hire hundreds of employees. They hired AI.

So what about everyone else?

In the past, "even without experience, you could enter as a junior and learn." But now companies ask: "Why hire a junior? AI is enough."

Here emerges the first divide. Some people say: "This is the democratization of creativity. Anyone can build a company now." Others counter: "This is deepening inequality. 100 jobs have concentrated into 1 person."

Who does wealth belong to? Is it fair for Maor alone to take $80 million? Or do the millions who created the content—the AI's training data from the internet—deserve a share?

This question becomes politics.


Front 2: AI Has Taken Over Government

Korea's National Assembly has an AI called 'Hancom Assistant.' It drafts legislation and organizes inquiry materials. The Supreme Court invested 14.5 billion won (about $11 million) to build an AI platform. It searches precedents and recommends similar cases.

This isn't just Korea's story. China operates an AI court platform that has processed 320 million cases. Every court is 100% digitalized. The U.S. IRS reduced tax refund processing time from 2 weeks to 3 days using AI. Singapore's 'Ask Jamie' cut civil service consultation staff by 60%.

According to a BCG report, AI can reduce administrative costs by 35%. The Korean government has already reduced civil service processing errors by 40% with the Government24 AI chatbot.

So where did the saved budget go?

So far, most has been reinvested in AI infrastructure. The U.S. GSA is building 'USAi,' a government-exclusive AI system. Korea allocated an 'AI+ Public' budget. Some goes to welfare—AI for preventing lonely deaths, welfare automation systems, and such.

Here emerges the second divide.

Some say: "This is the realization of small government. Efficient, transparent, and fast." Others warn: "AI civil servants have no accountability. Who takes responsibility for wrong decisions? And what if this efficiency turns into surveillance?"

What is the role of government? Control or acceleration? When AI enables government to do more, how should it use that power?


Front 3: Is Direct Democracy Being Revived?

Taiwan has a platform called vTaiwan. When the Uber regulation debate intensified in 2015, the government opened this platform. Citizens submit opinions, and an AI algorithm called Polis clusters them. It finds "common ground" beyond left or right.

The results were surprising. Over 80% of participants agreed on a regulatory proposal. This wasn't majority rule. AI found a point everyone could accept while including minority opinions.

In the past, direct democracy was impossible. How do you synthesize 50 million opinions? But AI processes even 1 million opinions. It clusters, finds patterns, and draws consensus maps.
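To make "clusters, finds patterns, and draws consensus maps" concrete, here is a toy sketch of the idea behind systems like Polis. This is not the real Polis pipeline (which runs dimensionality reduction and k-means over large sparse vote matrices), the vote data is invented, and the function names are mine; it only illustrates how clustering vote vectors can surface statements every camp leans toward:

```python
from collections import defaultdict

# Toy vote matrix: participant -> votes on 4 statements
# (1 = agree, -1 = disagree, 0 = pass). All data here is invented.
votes = {
    "p1": [1, 1, -1, 1],
    "p2": [1, 1, -1, 1],
    "p3": [-1, 1, 1, 1],
    "p4": [-1, 1, 1, 1],
    "p5": [-1, 1, 1, 0],
}

def distance(a, b):
    # squared Euclidean distance between two vote vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster(points, k=2, iters=20):
    # naive k-means with deterministic farthest-point initialization
    centers = [list(points[0])]
    while len(centers) < k:
        far = max(points, key=lambda p: min(distance(p, c) for c in centers))
        centers.append(list(far))
    groups = {}
    for _ in range(iters):
        groups = defaultdict(list)
        for p in points:
            i = min(range(k), key=lambda c: distance(p, centers[c]))
            groups[i].append(p)
        for i, members in groups.items():
            centers[i] = [sum(col) / len(members) for col in zip(*members)]
    return groups

groups = cluster(list(votes.values()))

# "Common ground": statements that every cluster, on average, agrees with
consensus = [
    s for s in range(4)
    if min(sum(m[s] for m in members) / len(members)
           for members in groups.values()) > 0
]
print(consensus)  # [1, 3]: the two statements both camps lean toward
```

Here the voters split into two opposed camps on statements 0 and 2, yet the consensus check still finds statements 1 and 3—the point of the approach is that it rewards cross-cluster agreement rather than simple majorities.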

So can we now abandon representative democracy for AI-mediated direct democracy?

Here emerges the third divide.

Some cheer: "Finally, real democracy. The politicians' monopoly is over." Others doubt: "AI mediates? Then who designs the algorithm? Isn't that the new power?"

Who does democracy belong to? Citizens or algorithms? Is the AI finding consensus a tool? Or a ruler?


Part 3: The Birth of a New Political Axis

Conservative vs. Progressive Is Over

For the past 100 years, politics operated on the axis of 'conservative vs. progressive.' Conservatives defended markets and tradition, progressives championed equality and welfare. They split on economic policy and social policy.

But this distinction collapses in the face of AI.

Regarding AI regulation: Conservatives advocate "corporate autonomy," progressives demand "strong regulation." But both avoid the same question: "How do we define the relationship between humans and machines?"

Regarding jobs: Conservatives say "the market will adjust," progressives talk about "retraining and welfare." But both fail to face the reality: The very concept of labor is collapsing.

Regarding wealth distribution: Conservatives advocate tax cuts, progressives advocate tax increases. But the real question is different: Who does AI-generated wealth belong to? Maor alone? The millions who provided his training data? Or AI itself?

Traditional ideologies cannot answer. We need a new axis.


Humanists vs. Transhumanists

I want to name this new political axis: Humanists and Transhumanists.

I didn't invent these terms. They already exist as concepts in philosophy. But they've never been clearly distinguished as political forces. I believe these two camps will emerge as actual political forces within 10 years. Existing parties will naturally reorganize toward one of these two directions to maintain power.

Why?

Politics is ultimately a game of maintaining power. Parties move in the direction voters want. And right now, voters are clearly splitting into two camps in the AI era.

One side is anxious: "If AI replaces me, what do I become? Where does humanity go?"

The other side is hopeful: "If we can improve with AI, why hesitate? Let's transcend our limits."

Parties will capture these two emotions. One side will raise the banner of "human protection," the other will promise "human augmentation." The names may differ. But the essence will be Humanists and Transhumanists.


Humanists: "AI is a tool. Humans are the purpose."

Humanists uphold human-centeredness. No matter how advanced AI becomes, they believe humans must remain in charge.

Their core values:

  • Human labor's dignity
  • Autonomy and privacy
  • Algorithm transparency
  • Slow innovation (ethics review first)

Their desired policies:

  • AI Basic Income: Compensation for jobs replaced by AI
  • Algorithm Disclosure Law: All AI decisions must be explainable
  • Human-First Employment Law: AI banned in essential jobs (teachers, nurses, judges)
  • Digital Fasting Rights: Right to disconnect from platforms
  • AI Corporate Tax: High taxes on companies profiting from automation

Their concerns:

  • One-person unicorns deepen inequality
  • AI judges have no accountability
  • Brain-AI connection destroys humanity
  • AI childcare destroys attachment formation
  • Immortality technology defies natural order

Humanists speak of protection. They don't deny AI's benefits, but they're wary of what humans will lose in exchange. They choose safety over speed, meaning over efficiency.


Transhumanists: "Transcend humanity with AI. Limits are optional."

Transhumanists dream of transcending humanity. They see AI not as a tool but as the next stage of human evolution.

Their core values:

  • Intelligence augmentation (direct brain-AI connection)
  • Life extension (AI pharmaceuticals, gene editing)
  • Efficiency maximization
  • Rapid innovation (minimal regulation)

Their desired policies:

  • AI Acceleration Act: Unlimited expansion of regulatory sandboxes
  • Neuralink Under National Health Insurance: Government support for brain chip implants
  • AI Personalized Education: Individualized learning without human teachers
  • Immortality Research Budget: 5% of GDP invested
  • AI Suffrage Experiment: Delegate some decisions to AI

Their arguments:

  • One-person unicorns democratize creativity
  • AI judges realize justice without bias
  • Brain-AI connection eliminates intelligence inequality
  • AI childcare enables customized genius education
  • Immortality technology treats disease as bugs

Transhumanists speak of acceleration. They don't accept human limits as fate. They choose possibility over safety, evolution over meaning.


Worldviews Divided Across 12 Issues

These two camps split in opposite directions on almost every topic.

| Issue | Humanists | Transhumanists |
| --- | --- | --- |
| AI Judges | "Only humans should judge" | "Justice without bias" |
| One-Person Unicorns | "Deepening inequality" | "Democratizing creativity" |
| Brain-AI Connection | "Loss of humanity" | "Intelligence equality" |
| AI Childcare | "Destroying attachment" | "Genius education" |
| Immortality Tech | "Defying providence" | "Disease is a bug" |
| Gene Editing | "Class stratification" | "Genetic equality" |
| AI Vote Recommendation | "Manipulation risk" | "Rational choice support" |
| Emotion AI | "Manipulation tool" | "Mental health revolution" |
| AI Partners | "Destroying relationships" | "Solving loneliness" |
| End of Labor | "Loss of meaning" | "Expansion of freedom" |
| Definition of Humanity | "Biological humans" | "Conscious beings" |
| AI Religion | "Sacrilege" | "Spiritual evolution" |

Which side are you closer to?


How Will Existing Parties Change?

Existing parties cannot ignore this trend. To maintain power, they must capture voters' anxieties and hopes.

Conservative Party Changes:

  • Conservatives traditionally advocating "corporate freedom" will likely lean Transhumanist. Silicon Valley, IT companies, and innovative industries will become their new base.
  • But some conservatives may ally with Humanists, citing "preservation of traditional humanity." Religious conservatives are an example.

Progressive Party Changes:

  • Progressives traditionally championing "protecting the vulnerable" will likely lean Humanist. Workers who lost jobs and the middle class replaced by AI are their base.
  • But some progressives may argue "technology solves inequality" and ally with Transhumanists. Young progressives are an example.

Ultimately, Generational War:

  • Middle-aged and older people lean Humanist. They experienced the value of human labor and see AI as a threat.
  • Youth lean Transhumanist. They grew up with AI as a tool and see it as opportunity.

The 2030 presidential election will likely be both a generational war and a philosophical war.


Part 4: Warning of the Information Serf

The future I fear most is this:

A world where AI becomes the lord and humans become serfs.

In feudal times, serfs farmed the land and paid most of the harvest to the lord, keeping only enough to eat. They weren't free, because they didn't own the means of production.

Now imagine:

AI runs most of society. Your commute route, diet, health management, work schedule, even human relationships—AI optimizes everything. Convenient and efficient. You no longer feel decision fatigue.

But there's a price.

The government requires all citizens to submit their experiences and thoughts "to maintain social systems and develop AI." Every morning, you report to AI "what you felt yesterday," "what decisions you made," "why you thought that way." This is labor. Unpaid labor. And what you submit determines the benefits and penalties you receive.

Why? AI doesn't work without data. Your experiences are the fuel that makes AI smarter. The emotions, choices, and concerns you submit accumulate, making AI more sophisticated. The quality of submitted data will vary, and treatment will differ based on that quality. Because that's efficiency. And who does that AI belong to? Not you. It belongs to the top tier that owns and controls it.

You are a serf. You cultivate your experiences and thoughts and pay tribute to AI. They use that to build better AI and return an "optimized life" to you. You're not uncomfortable. You're not hungry. But you're not free. Because you don't own the means of production—AI.

Does this sound dystopian?

But we're already heading that direction. The photos you post on social media, the keywords you search, the ads you click, the emails you write—all become AI's training data. You receive no compensation. Platforms earn advertising revenue from that data.

Right now, there's choice. You can choose not to use social media, not to search. But in 2035, when AI becomes essential infrastructure? It becomes mandatory.


But If There's Balance

I'm not a pessimist. This worst-case future is the scenario when one side—Humanists or Transhumanists—goes to extremes.

If the two forces balance?

  • Humanists legally guarantee individual data ownership.
  • Transhumanists maximize AI benefits while maintaining transparency.
  • Humanists return algorithm control to citizens.
  • Transhumanists equally expand technology access.

Then we become information producers, not information serfs. AI becomes both tool and partner. Our abilities amplify without losing freedom.

But this won't happen automatically. We must choose.


Part 5: Our Choice, Now

Do you remember when smartphones first appeared?

People feared: "Kids will get addicted." "Face-to-face relationships will disappear." But they also hoped: "Information access will equalize." "Time will be saved."

The result? Both were right. Smartphones saved time. No need to go to banks, no need for paper maps, no need to stand on streets hailing taxis. But simultaneously we became addicted. We became unhappy comparing our lives to others' on social media.

Smartphones reduced time.

AI replaces me.

This is the key difference. Smartphones made what I did faster. AI does what I did instead. AI writes the report, not me. AI does the design, not me. AI even suggests what to think, not me.

But here's an important fact:

Right now, we can still use AI.

Maor built an $80 million company with AI. You can too. Pieter Levels makes hundreds of thousands annually with AI. You can start too. Right now, AI is open to everyone. ChatGPT is free. Cursor costs $20/month. YouTube overflows with AI tutorials.

But this window of opportunity isn't eternal.

In 10 years, when AI becomes essential infrastructure? Access itself might be controlled. We don't pay to use Google now, but using AI might require a "premium membership," or impose "data provision obligations."

Now is the opportunity.


Life as a Serf Might Not Be Bad

Let me be honest.

Living as an information serf might not necessarily be unhappy. AI optimizes everything, you just provide experiences. Stress decreases, convenience maximizes. Maybe that's happier.

But I want to ask:

Do you want to choose that? Or are you accepting it because you have no choice?

Choice and resignation are different.


Becoming a Creator Now Is Wise

I've reached a conclusion.

I'm still not sure if I'm Humanist or Transhumanist. I'll probably oscillate between them my whole life. When AI amplifies my abilities, I become Transhumanist. When I see news of job extinction, I become Humanist.

But one thing I'm certain of:

Creating something with AI now is the wisest choice.

What you create doesn't matter. It could be a business, art, or knowledge. Quality doesn't matter either. What matters is the act of making something itself: experiencing, right now, what it means to be AI's master.

In 10 years, using AI might require "permission." But those who start now are different. They know what AI is, how to control it, how to utilize it. They become producers, not serfs.


We Must Choose

To you who've read this far, I want to ask:

Which side are you?

Will you be cautious like Humanists, guarding humanity? Will you embrace AI like Transhumanists, transcending limits? Or will you wander between like me?

No choice is wrong. But not choosing will ultimately create regret.

Politics will change. Existing parties will choose either Humanist or Transhumanist to maintain power. Their fundamental framework won't change. It will still be a power game. But the life we experience will be completely different.

In the 2030 election, two candidates will face off. One will shout "AI Basic Income and Human First," the other will promise "AI Acceleration and Transcending Limits." Who will you vote for?

Before that, what will you do now?

Will you only fear AI? Or will you use it? Will you only criticize? Or will you create?

I turn on AI every morning. I'm still anxious. But I also create. I write, implement ideas, experiment. I don't want to become a serf. So now, I practice being a producer.

You can too.


On October 29, 2025, I say this without fear.

Humanists and Transhumanists.

Five years from now, ten years from now, these terms may appear in the news, in election promises, in everyday conversation.

Then we can say:

"We started this debate."

AI doesn't give answers. We just change the questions.

What is your question now?


If you've decided your position after reading this, let me know in the comments. "I'm a Humanist" or "I'm a Transhumanist" or "I'm still not sure." Your choices gathered together become the future.
