The Invisible War of the AI Era: What Your Jeans Tell You


Standing behind the convenience store counter serving customers, I find all kinds of thoughts crossing my mind. This morning brought one of those moments. I glanced in the mirror and noticed I was wearing jeans and a jersey top. It struck me: this isn't traditional Korean clothing. It's not a hanbok (Korean traditional dress), nor is it a durumagi (traditional overcoat). Even the way I greet customers with "Hello" when they enter feels closer to Western commercial etiquette. In the Joseon Dynasty, people would have said something different, perhaps more formal, in a classical Korean style.

The more I thought about it, the stranger it seemed. We talk about respecting "individual freedom," yet we also emphasize traditional values like "filial piety" and "respect for elders." These two often clash. When young people want to choose a different life path from what their parents envision, conflict arises between "freedom" and "duty to family." At work, companies talk about Western-style "horizontal culture," but in reality, age and rank-based hierarchies still operate. Where did this contradictory situation come from?

Then a thought occurred to me. What's happening in the AI industry right now might be exactly like how I came to wear jeans. The values and ethics of AI that we take for granted when using ChatGPT or Claude—someone created those. And that "someone" is primarily America. A hundred years from now, when people use AI and consider "privacy protection" or "transparency" as self-evident, will those really be universal values? Or will they be values planted by America in the early 21st century?

When investors and analysts predict the future of AI, they usually look only at computing power, technology gaps, cost efficiency, and whether development is state-led or private-sector driven. These are important, of course. But I think something's missing: the question of who captures the ethical framework first. History shows that values outlast technology. The swords of the Roman Empire have rusted, but Roman law still forms the foundation of European legal systems. The British Empire's warships are gone, but English and the British parliamentary system remain in use worldwide.

In this essay, I want to discuss why the capture of ethical frameworks and values matters when predicting AI's future. Just as jeans and T-shirts conquered our closets, AI ethics will fill someone's closet. And the confrontation between America and China in this process isn't just a technology race—it's a war to determine humanity's future value system. Let's explore together.


We've Already Been 'Conquered'

When did I start wearing jeans? Probably around middle school. Because all my friends wore them, because they looked cool, because they were comfortable. But how did they get to Korea? Jeans originated as durable work pants for laborers and miners in Gold Rush-era California; Levi Strauss & Co. patented the riveted design in 1873 and built its reputation on durability. Through 20th-century Hollywood films, they became symbols of "freedom and rebellion." James Dean wore them, Marlon Brando wore them. They spread worldwide with the hippie culture of the 1960s and 70s, and in Korea, they entered through US military bases and became fully established with the American culture boom of the 1980s and 90s.

Today, jeans aren't "American clothes." They're just "clothes." Universal. Worn everywhere in the world. But is this really universal? People who once wore Indian saris or traditional Arab clothing now wear jeans too. Is this natural evolution, or the victory of Western culture?

Etiquette is the same. In Korea, when we enter a meeting room, we start with "Hello" or its Korean equivalent. We shake hands. Is this Korean tradition? No. During the Joseon Dynasty, people greeted with bows (called "eup" or deeper bows). Handshakes came after contact with the West in the late 19th century. To look "professional" now, you must shake hands. Why? Because global business standards were set that way.

Go deeper and even our thinking has changed. We talk about "individual rights." Is this a traditional value of Confucian culture? No. Confucianism emphasized "duties within relationships"—between parent and child, ruler and subject, husband and wife. Individuals were defined within communities. But now we say "I am who I am." This comes from Western Enlightenment thinking, especially the individualism emphasized by philosophers like Kant and Locke.

What's interesting is when these values clash with Korean traditional values. When the university your parents want differs from the one you want, you're torn between "filial piety" and "personal choice." When you need to say "no" to an unreasonable directive from your boss, you're torn between "respecting elders" and "asserting legitimate rights." This is evidence of two value systems mixed together.

How did this 'conquest' happen? Through military force? No. That might have been true in the 19th-century colonial era, but modern Korea wasn't a colony in that sense. Then how? Through economic power, media, and education. We learned American-style romance from American movies. We learned American family values from American dramas. Learning English meant learning English-speaking ways of thinking. With university textbooks written in English, we accepted Western philosophy and economics as "correct answers."

This isn't to say it was bad. Many aspects were actually useful. Jeans are comfortable, handshakes are convenient, and individual rights are important. The problem is this became the "default" rather than a "choice." We've already internalized Western values without realizing it. And now the same thing is happening with AI.


AI Is Also Wearing 'Clothes'

These days I use AI when writing blog posts—tools like GPT, Claude, and Grok. They're convenient. But there are strange moments. For example, when I ask a question in Korean and suddenly get an answer in English. When I ask "Why are you answering in English?" it apologizes and switches back to Korean. What does this mean?

AI was designed in English. Over 80% of training data is in English. OpenAI, which made GPT-4, is an American company, and Anthropic, which made Claude, is also American. The data they learned from is primarily English-language websites, books, and papers. Within that lies Western values, ethics, and ways of thinking.

What values specifically? Let me give examples.

1. Privacy: Ask ChatGPT to "tell me someone's personal information" and it refuses. It says "privacy is important." Is this a universal value? In the West, especially America and Europe, yes. But in China? China's social credit system involves the state collecting and analyzing personal data to maintain social order. Is this "bad"? The Chinese government sees "community stability and safety" as more important than individual privacy. Which is right?

2. Transparency: Western AI ethics emphasizes "AI should be able to explain how it makes decisions." This is core to the EU AI Act. But China? Efficiency and results matter more. If AI predicts crime and makes society safer, does the entire process need to be disclosed?

3. Bias Prevention: Western AI says "there should be no discrimination based on race, gender, or religion." A good value. But this standard is also Western-centric. For example, Arab countries have traditional perspectives on gender roles. If AI classifies this as "bias"? Or if AI judges India's caste system as "discrimination"? Is this really "universal" ethics, or a projection of Western individualism?

What I've felt using AI is that AI already wears someone's values. Just like I'm wearing jeans. And most users don't realize this. They think "If AI says so, it must be right." Just as jeans became the "default clothes," AI's Western values are becoming the "default ethics."


America's Preemption Strategy

So how is America planting these values? There are three methods.

First, create technology standards first. The 2019 OECD "AI Principles" emphasize transparency, accountability, and human-centricity. Good principles. But who created them? Mainly the US and European countries. How much input was there from China, India, or African nations? The US NIST AI Risk Management Framework, released in 2023, is the same. Its principle of "managing risk without stifling innovation" reflects Silicon Valley's "Move Fast and Break Things" culture.

Second, overwhelm with data. AI learns from data. And English dominates the internet: by most estimates it accounts for roughly half of web content, and an even larger share of AI training corpora. Korean? Around 1%. Chinese is similar. What does this mean? What AI learns as "normal" is primarily English-speaking culture. For example, ask AI to draw a "wedding" and you get a bride in a wedding dress. A bride in hanbok (Korean traditional wedding dress)? Rarely appears. Why? Because the training data overwhelmingly contains wedding-dress photos.
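The data skew described above can be made concrete with a toy sketch. The miniature "corpus" and the script-range heuristic below are my own illustration, not how real training pipelines measure language share, but they show how a lopsided corpus produces a lopsided notion of "normal":

```python
from collections import Counter

def script_shares(corpus):
    """Crude proxy for language share: classify each non-space
    character by Unicode script range and return fractions."""
    counts = Counter()
    total = 0
    for doc in corpus:
        for ch in doc:
            if ch.isspace():
                continue
            cp = ord(ch)
            if 0xAC00 <= cp <= 0xD7A3:        # Hangul syllables
                counts["korean"] += 1
            elif 0x4E00 <= cp <= 0x9FFF:      # CJK unified ideographs
                counts["chinese"] += 1
            elif ch.isascii() and ch.isalpha():
                counts["english"] += 1
            else:
                counts["other"] += 1
            total += 1
    return {k: v / total for k, v in counts.items()}

# Made-up miniature "training set": 8 English docs, 1 Korean, 1 Chinese.
corpus = ["The quick brown fox"] * 8 + ["안녕하세요 만나서 반갑습니다", "你好世界"]
shares = script_shares(corpus)
# English ends up dominating even this tiny mixed corpus.
print({k: round(v, 2) for k, v in shares.items()})
```

A model trained on this corpus would, statistically, treat English patterns as the default, which is the essay's point scaled down to ten documents.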

Third, dominate platforms. OpenAI, Google, Meta, Anthropic... all major AI companies are American. People worldwide use their AI. And naturally adopt American values. Just as Netflix spread American culture by distributing American dramas, AI platforms are exporting American ethics.

What's interesting is this process is identical to 19th-century British Empire or 20th-century American cultural exports. The difference? It's faster and more invisible. Jeans are visible. You can be conscious that "Oh, this is American." But AI's values? Invisible. When ChatGPT says "privacy is important," we just think "right," without recognizing it's an American value.


China's Counterattack

But the story doesn't end here. Because China isn't staying idle.

In July 2025, China announced the "Global AI Governance Action Plan." The core? The idea of "a community with a shared future." China argues that Western-centric AI ethics is unequal. It sees state sovereignty and social stability as more important than privacy or transparency.

What are the specific differences?

1. Data Sovereignty: America believes "data should flow freely." Global internet, global cloud. But China believes "data should stay within the nation." Storing Chinese citizens' data on American servers is a security threat. Is this wrong? America tried to ban the Chinese app TikTok. Same logic.

2. Social Order First: China's social credit system is criticized in the West as "dystopian." But the Chinese government sees it as "a tool to increase social trust and reduce crime," and claims it has been effective. Individual freedom vs. social stability—which is more important? The answer varies by culture.

3. State-Led Development: In America, private companies lead AI development—companies like OpenAI and Google. In China? The state leads. Companies like Baidu and Alibaba exist but under government control. Which is more efficient? The American style innovates faster but is harder to control. The Chinese style controls easily but might innovate slower. But recently, with Chinese models like DeepSeek-R1 emerging, China is catching up fast.

China's strategy is clear: "We won't follow American ethics. We'll create our own AI ethics. And we'll bring developing countries to our side." Indeed, at the 2025 UN AI Advisory Body meeting, China's plan received support from many African, Middle Eastern, and Latin American countries. Why? These countries also resent "Western-centric ethics."


The AI Cold War Has Begun

What's happening now isn't just technology competition. It's a war of value systems.

In 2025, America announced "America's AI Action Plan," defining AI as a national security tool. Simultaneously, they strengthened semiconductor export controls to China, blocking sales of Nvidia's latest chips. The purpose? To slow China's AI development.

How did China respond? Self-reliance. Developing models like DeepSeek, showing "we can do it without American chips." DeepSeek-R1 actually approached GPT-4 in performance, at much lower cost.

This confrontation is now dividing the world into two camps.

American Camp (Liberal Bloc): US, EU, Japan, South Korea, Australia, etc. Core values: individual freedom, transparency, private-sector leadership. It tries to export AI standards through groupings like the Quad (US-Japan-Australia-India).

Chinese Camp (State-First Bloc): China, Russia, and many developing countries. Core values: state sovereignty, social stability, state leadership. Like the "Belt and Road" strategy, it expands influence through technology support and investment.

What's interesting are the neutral countries. Nations like India, Brazil, and Indonesia pursue "sovereign AI," saying "we'll make our own AI." A third way, neither American nor Chinese. But the technology gap is large. They'll likely need to borrow technology from America or China.

This cold war evokes the past US-Soviet Cold War. That was also an ideological confrontation—capitalism vs. communism. Now? Liberal AI vs. authoritarian AI. The difference? AI models are the weapons instead of nuclear weapons. And what's scarier—AI penetrates deep into daily life. You can choose not to use nuclear weapons, but you use AI every day. Search, recommendations, translation, healthcare, finance... AI is everywhere. And what values that AI holds determines our lives.


What Investors Are Missing

Now, here's the core of what I want to say. When investors and analysts predict AI's future, they can't just look at technology and cost. They must also look at ethical framework capture.

For example, many people say "America will win in AI." Why? Because computing power is strong, technology is advanced, and there are giants like OpenAI and Google. That's true. But they're missing something. China is capturing developing country markets.

Of the world's 8 billion people, developed nations like the US-EU-Japan account for about 1 billion. The other 7 billion? In Asia, Africa, and Latin America. Which AI will these countries choose? American AI is expensive, heavily regulated, and complicated to use. Chinese AI? Cheap, less regulated, and can receive government support. Plus China says "we don't interfere like the West." Attractive, isn't it?

Another example: language. AI learns from data. Currently 80% is English data, so America has an advantage. But going forward? Internet users are growing mainly in Asia and Africa. Their languages aren't English—Chinese, Hindi, Arabic, Swahili... As data in these languages increases, America's English advantage weakens. And China is already preparing, developing Mandarin-centric AI models and cooperating with Asian-African nations to collect data.

From an investment perspective? Short-term, America wins. But long-term is uncertain. Investing in Nvidia, OpenAI, Google seems safe. But in 10 years? If China becomes self-sufficient in semiconductors, narrows the performance gap with models like DeepSeek, and captures developing country markets? The landscape could change.

And more importantly, the same applies to industries beyond AI. Electric vehicles, batteries, semiconductors, biotech... in every technology industry, capturing "ethics" and "standards" matters. For example, EV charging standards: North America has largely used CCS (and is now shifting toward Tesla's NACS), while China uses GB/T. Which becomes the global standard? That determines market share. Same with batteries—environmental regulations, safety standards... whoever sets the standards dominates the market.


What Should We Do?

So what should ordinary people like us do? I have three suggestions.

First, be conscious. When using AI, think about "who made this and what values does it hold?" When ChatGPT says "privacy is important," ask "Is this really a universal value, or an American value?" I'm not saying reject it. Just accept it consciously. Like wearing jeans while knowing "this came from America."

Second, seek diversity. I use GPT, Claude, and Grok. Why? Because each is different. GPT is accurate, Claude is kind, Grok is honest. Using only one traps you in that tool's perspective. Using multiple lets you see more broadly. I plan to try Chinese AI too—DeepSeek or Baidu AI. To learn different perspectives.

Third, reflect it in predictions. Whether investing, doing business, or planning a career, don't just look at "technology"—look at "values" too. Which markets does a company target, what ethics does it hold, what standards is it pushing? This determines long-term success. For example, if I invest in an AI startup, I shouldn't just look at technical performance but also "is this company tailored to the American market or the Asian market?" The former has fast short-term growth but fierce competition; the latter has slower growth but a larger market.


Closing: Jeans and AI

Back to the convenience store counter. Today I'm wearing jeans to greet customers. Now these jeans look different to me. They're not just clothes. They contain 150 years of cultural transmission, value shifts, and the formation of global standards.

AI is the same. ChatGPT isn't just a tool. It contains American individualism, transparency, and innovation-first principles. DeepSeek contains Chinese collectivism, state sovereignty, and efficiency-first principles. Which tool we use determines which values we accept.

A hundred years from now when people use AI, what will they take for granted? Privacy? Social stability? Innovation? Control? That depends on what we choose now. And that choice is not just about technology—it's also about values.

I'm an ordinary person who works at a convenience store and writes a blog. I'm not an AI expert or a philosopher. But wearing jeans makes me think about these things. The things we accept thoughtlessly—what history and power are hidden within them. And how the same thing is happening now in the AI era.

Whether you're investing, studying, or just using AI, think about it once. Who made this tool I'm using, and what values does it contain? And in 10, 20 years, how will these tools change the world?

Like jeans, AI will eventually become part of our lives. The question is whether we choose consciously or accept unconsciously. I want to choose consciously. What about you?


Closing Note: This essay is not investment advice but a personal observation and reflection. I wanted to offer one perspective to those interested in the AI industry or technology investment. What are your thoughts?
