The Future of AI and How We Read Information
Introduction: The 600,000-Person Layoff Bombshell
Over the past few days, I've been opening YouTube and seeing headlines like these: "Amazon to Lay Off 600,000 Workers with AI!" "Your Job Is Next!" The algorithm helpfully kept recommending similar videos. With each click, the frequency of these videos increased, and the fear grew thicker.
Have you heard this rumor? It's been heating up the news and YouTube. When I first heard it, I was shocked too. 600,000 people—that's the population of a decent-sized city. But when I dug a little deeper, the story turned out to be quite different.
The actual source was a New York Times report from October 21, 2025, based on internal Amazon documents the paper obtained. According to those documents, Amazon's robotics team had drawn up a plan projecting that even if sales doubled by 2033, the company could avoid hiring 600,000 additional workers. More specifically, the team aimed to reduce hiring by 160,000 by 2027 and automate 75% of warehouse operations, cutting costs by about 30 cents per item.
Here's the crucial point: "avoiding future hiring" and "firing 600,000 current employees" are completely different things. One means not creating future jobs; the other means letting go of people who are working right now. But as this story passed through YouTube and some news outlets, it transformed into "600,000 layoffs."
Why did this happen? And is this just about Amazon? Today, I want to use this rumor as a starting point to explore how information gets distorted and how we should read information in the AI era.
Part 1: Why Does Exaggerated Information Sell Better?
Let me paint a hypothetical scenario for you. (This is a fictional person and situation.)
Mr. Kim (47) has been working at an Amazon fulfillment center for seven years. He's a father raising a high school daughter and a middle school son. One day on his commute, he was scrolling through his phone and saw the headline "Amazon to Lay Off 600,000." His heart sank. 'Will I get fired? What about college tuition?' He immediately shared the article in his work group chat with the message, "Are we getting laid off?"
Ms. Lee (33) works in marketing at a large corporation. That same day during lunch, she opened YouTube and saw a video titled "AI Replacing White-Collar Workers." Since she'd recently heard her company was adopting AI tools, anxiety crept in. 'Will AI do my job too?' She watched the whole video, and the algorithm kept serving up similar fear content.
These two people's reactions feel pretty natural, don't they? Our brains are wired to respond sensitively to threats. According to psychological research, negative news is shared over twice as fast as positive news. Headlines like "AI is stealing jobs" touch our survival instincts.
Brain science research shows that threatening information activates the amygdala—the part of the brain that processes emotions, especially fear. Meanwhile, neutral information like "AI increases productivity" only lightly stimulates the prefrontal cortex. Which one sticks in your memory more intensely? Obviously, the scary one.
A BBC article described AI as "fluid" technology—not fully mature yet, so the uncertainty is high. This uncertainty amplifies people's anxiety. Research shows that among people exposed to AI-related fake news, those with lower critical thinking skills experienced significantly amplified fear.
But there's another factor at work here: media algorithms.
Platforms like YouTube, Facebook, and X (formerly Twitter) are designed to maximize user engagement. They surface content that gets more likes, comments, and shares. But what drives engagement? Emotional content. Especially strong emotions like anger or fear.
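To see how mechanical this incentive is, here's a toy sketch of engagement-weighted ranking in Python. Everything in it, the weights and the field names, is invented for illustration; real platform rankers are proprietary and far more complex, but the basic incentive looks something like this:

```python
# Toy sketch of engagement-weighted feed ranking.
# All weights and field names are invented for illustration;
# real platform rankers are proprietary and far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares are weighted above likes because they
    # predict further spread -- and strong emotion drives both.
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0

feed = [
    Post("AI increases productivity", likes=120, comments=10, shares=5),
    Post("Amazon to Lay Off 600,000 Workers with AI!",
         likes=90, comments=200, shares=150),
]

# The feed surfaces whatever scores highest -- typically the post
# that provokes the strongest reaction, not the most accurate one.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```

Even with fewer likes, the fear headline wins: the reactions it provokes are exactly what the ranker rewards.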
Ironically, AI is contributing to the production of this fear content. Tools like ChatGPT have reduced content creation costs to 1/10 of what they were. Anyone can churn out plausible "AI fear" articles in minutes. Since they get ad revenue per click, exaggerated headlines proliferate.
The result is that we're surrounded by "information junk food." Sensational information is the french fries; neutral information is the healthy vegetables, and the fries always look more appetizing. One study measuring dopamine release found that sensational headlines triggered 2.3 times more dopamine than neutral information. Our brains crave stimulation.
Part 2: The Amazon Rumor—Looking Under the Hood
Let's circle back to the Amazon story. What actually happened? Let's break it down.
According to the New York Times report, Amazon internally avoids words like "robot" or "layoff." Instead, they use terms like "cobot" (collaborative robot) or "advanced tech." And officially, they say they're "enhancing human capabilities."
Why? It's a strategy to reduce employee anxiety. On anonymous employee communities like Blind and X, testimonies like these have surfaced: "I said 'robot' in today's meeting and got a warning email from HR," or "My manager told us to call 'automation' 'process optimization.'"
CEO Andy Jassy recently said in an interview that he expects AI to reduce white-collar headcount. But Amazon's spokesperson immediately issued a clarification: "This is just the robotics team's internal plan, not company-wide policy. We recently hired 250,000 seasonal workers."
So what's the truth? It's probably somewhere in the middle.
It's true that Amazon is investing heavily in AI and automation. They're introducing robots in warehouses and optimizing delivery with algorithms. But "laying off 600,000" is a misunderstanding. More accurately, it's "600,000 jobs that won't need to be filled in the future." This will happen gradually through attrition (not replacing people who leave) and minimizing new hires.
Actually, this isn't just Amazon's strategy. The entire tech industry has been moving in this direction in 2025.
Google has reduced entry-level engineer hiring by 25% through ad optimization and coding automation. Microsoft introduced Copilot (an AI assistant) to automate HR and marketing work and laid off 9,000 people. Internal documents suggested a 36% workforce reduction was possible. IBM replaced 94% of HR tasks with AI and let go of 3,900 people.
Manufacturing is the same. Tesla deployed robots en masse in its factories, replacing 33% of warehouse jobs. UPS cut 20,000 people, Meta 8,000. The accounting firm Deloitte is bringing AI into its audit and tax work.
Looking at these trends, one thing becomes clear: what's happening isn't mass layoffs but "quiet downsizing." Companies are changing their workforce structure slowly but surely, while avoiding public backlash.
Dario Amodei, CEO of Anthropic, said this: "Within five years, 50% of junior white-collar jobs will disappear." Jeff Bezos warned of an AI bubble while emphasizing that it will "bring enormous benefits to society." Of course, the "benefits" he's talking about mainly mean corporate cost reduction and profit maximization.
But we don't need to be purely pessimistic. History shows that technological revolutions have always reshaped the job landscape. Cars replaced carriage drivers but created chauffeurs and mechanics. AI could be similar. New jobs like AI supervisors, prompt engineers, and data curators are emerging.
What matters is how we respond. But to respond properly, we need accurate information. With distorted information like "600,000 layoffs," we can't make sound judgments.
Part 3: Can Biased Information Survive in the AI Era?
Here's an interesting question: Maybe the flood of exaggerated and biased content exists because of traditional search systems.
Think about it. When you Google something, millions of results pop up. We choose which links to click. It's like a buffet restaurant—food is laid out, and we fill our plates. Stimulating food (sensational headlines) catches our eye more easily, so naturally our hands go there.
But AI search is different. Services like Google's SGE (Search Generative Experience) or Perplexity are more like "chef's tasting menus." They aggregate multiple sources and AI summarizes them. Users need to click around less.
What does this mean? The clickbait effect of exaggerated headlines diminishes.
For example, if you search "Amazon AI jobs," traditional search shows you:
- "Amazon 600,000 Layoff Shock!" (YouTube)
- "Amazon Begins Mass AI Layoffs" (Blog)
- "Your Job Is at Risk Too" (News)
Each result needs a click, so sensational titles have the advantage.
AI search, on the other hand, might summarize it like this: "Amazon developed an internal plan to avoid hiring 600,000 new workers by 2033 (NYT report). This refers to gradual workforce adjustment through automation, not firing existing employees. The company stated it recently hired 250,000 workers."
See? Multiple perspectives are presented in a balanced way.
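For the curious, here's what that aggregate-and-summarize step might look like in code: a minimal sketch using the Anthropic Python SDK. The source snippets, prompt wording, and model name are my own placeholders; real AI search runs far more elaborate retrieval and ranking pipelines behind the scenes.

```python
# Minimal sketch of the "aggregate then summarize" idea behind AI
# search, using the Anthropic Python SDK (pip install anthropic).
# The snippets, prompt wording, and model name are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

sources = [
    "NYT (Oct 21, 2025): internal Amazon documents describe a plan to "
    "avoid hiring 600,000 new workers by 2033 via warehouse automation.",
    "Amazon spokesperson: this is the robotics team's internal plan, not "
    "company-wide policy; 250,000 seasonal workers were recently hired.",
    "YouTube video: 'Amazon to Lay Off 600,000 Workers with AI!'",
]

prompt = (
    "Summarize what these sources say about Amazon, AI, and jobs. "
    "Note where they agree, where they conflict, and which claims the "
    "primary reporting actually supports:\n\n" + "\n\n".join(sources)
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: any current model ID
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

Because the model sees all the sources at once, the clickbait version becomes just one claim among several to be weighed, instead of the loudest link on the page.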
Of course, AI search isn't perfect either. The data used to train the AI itself might be biased. Research actually found that ChatGPT leans slightly politically left. And there's commercial pressure too—sponsored content might get priority placement.
A bigger problem is AI-generated disinformation. AI-generated extremist content is already going viral on X and Reddit. Content emphasizing fear and disgust spreads more easily.
Still, there's hope: the AI industry recognizes this problem. Researchers are developing "well-being-aware" systems—AI that considers users' mental health. They're trying to filter fear content and present diverse perspectives.
Simply put, AI search's evolution has the potential to change the information ecosystem. Clickbait could become less effective, and balanced information could become more valuable. But there's still a long way to go. And ultimately, what matters isn't AI—it's us. What information we want and how we consume it is key.
Part 4: Is AI Really Neutral?
Now let's ask directly: Can AI be neutral? And is the AI industry working toward neutrality?
The answer is probably "they're trying, but it's not perfect."
First, let's look at the positive side. The AI industry is pursuing neutrality and fairness under the banner of "Responsible AI (RAI)."
Looking at the Stanford AI Index 2025 report, RAI-related papers numbered 1,278—up 28.8% year-over-year. Research on AI bias doubled. Not just academia, but companies are moving too.
McKinsey surveyed 759 AI leaders, and 64% said "AI safety is the top priority." The transparency index rose from 37% to 58%. The EU passed the AI Act in 2024, mandating regulations for high-risk AI systems. UNESCO is working to include AI ethics education in curricula worldwide.
Companies are also creating their own ethics teams. Google has an AI Ethics team, and Anthropic (the company that makes Claude) even has a team composed of philosophy PhDs. They design what values AI should hold and how it should behave.
But here's where reality hits.
First, profit pressure. Think about social media platforms. They claim "neutrality," but actually design algorithms to maximize user engagement. Why? Because that's where ad revenue comes from. As a result, extreme and divisive content gets more exposure. This is called the "echo chamber" effect—people with similar views cluster together and move increasingly toward extremes.
Second, data bias. AI learns from training data, and that data itself can be biased. For example, if you train on internet text, the perspectives that appear most online get reinforced. Research shows that while 78% of organizations use AI, 34% recognize bias risks, but only 26% actually take mitigation measures.
Third, ambiguity of goals. What "neutral" means isn't clear. Is treating all viewpoints equally neutral? Is treating facts and falsehoods equally neutral too? For example, is presenting the scientific consensus "climate change is real" and the minority opinion "climate change is fake" 50:50 neutral?
I remember an opinion I saw on X: "AI neutrality is aspirational." We can aim for it, but achieving it perfectly is difficult.
Part 5: The Philosophical Dilemma of Neutrality
Actually, if you dig deeper, the concept of "neutrality" itself is philosophically complex.
A Stanford study says this: "Neutrality is a myth; all AI embeds values."
What does that mean? From the moment you design AI, choices are already embedded. What goal to optimize for, what data to use, what responses to avoid—these are all value judgments.
Let me give you an example. Google's image generation AI, Gemini, faced backlash over how it emphasized "diversity." Asked to depict America's founding fathers, it generated racially diverse figures that were historically inaccurate. Google pursued diversity but got criticized for distorting history.
There's an opposite case too. Meta's AI emphasized "harmlessness" too much and refused to answer politically sensitive questions, drawing "censorship" criticism.
This isn't just about political spectrum. It's a more fundamental question: Is AI a tool or an agent? Can we assign values to AI? Can we align it with human values?
Anthropic's case is interesting here. Anthropic, which makes the conversational AI Claude, has been hiring philosophers. Notably, Amanda Askell, who holds a philosophy PhD, was named to TIME's "100 Most Influential People in AI."
What does this philosophy team do? They design Claude's "character"—defining what values Claude should hold and how it should behave. For instance, being "honest, helpful, and harmless." But these three sometimes conflict. Being honest can be harmful; being helpful might require lying.
Claude 4 in 2025 even introduced "AI welfare" assessment, starting to consider AI's own "experience." Is this excessive or necessary? Still under debate.
Ultimately, experts reached this conclusion: "Perfect neutrality is impossible. But approximation is possible."
How? Through two principles:
- Inclusivity: Include diverse perspectives. Don't force one viewpoint.
- Truth-seeking: Base on facts. Don't treat falsehoods and facts equally.
An arXiv paper divided this into "output-level" and "system-level" neutrality. At the output level, show various perspectives; at the system level, pursue truth.
Let me use an analogy. AI is a chef. What does it mean for a chef to make "neutral cuisine"? Is adding every ingredient in equal amounts neutral? No. A chef considers diverse ingredients (inclusivity), balances flavors (truth), and transparently shares the recipe (transparency).
Part 6: How We Live with AI
Okay, if you've gotten this far, one thing should be clear: AI isn't perfect, the information ecosystem is complex, and neutrality is an ideal, not reality.
So what should we do? Here are a few suggestions.
1. Ask AI for alternative perspectives
When using AI, don't accept the first answer as-is. Ask things like:
- "Can you show me other perspectives?"
- "What are the opposing views?"
- "What are the limitations or counterarguments to this information?"
For example, if you asked AI about "Amazon AI layoffs," follow up with "What parts of this could be distorted?" AI can usually present multiple viewpoints. Or ask for cross-verification.
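If you use AI programmatically, the same habit fits in a few lines. Here's a minimal sketch with the Anthropic Python SDK; the prompts and model name are placeholders, not a prescribed workflow:

```python
# A minimal sketch of the "ask for the other side" habit, using the
# Anthropic Python SDK. Prompts and model name are placeholders.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder: any current model ID

history = [{"role": "user",
            "content": "What is happening with Amazon, AI, and jobs?"}]
first = client.messages.create(model=MODEL, max_tokens=400, messages=history)
history.append({"role": "assistant", "content": first.content[0].text})

# The follow-up is where the habit lives: explicitly ask for the
# other side before accepting the first answer.
history.append({
    "role": "user",
    "content": "What parts of that answer could be distorted or one-sided? "
               "Give the strongest opposing views and what they rest on.",
})
second = client.messages.create(model=MODEL, max_tokens=400, messages=history)
print(second.content[0].text)
```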
2. Check at least 2-3 sources for news
When you see a sensational headline, don't react immediately. Check how other news outlets covered it.
As we saw with Amazon, the NYT accurately reported a plan to avoid future hiring, but some outlets distorted it into "mass layoffs." Comparing multiple sources gets you closer to the truth.
3. Be wary of headlines that trigger your emotions
When you see titles with words like "SHOCKING!" "NEVER" "MUST," pause for a beat. Is dopamine going wild in your head? If so, that might be a sign of manipulation.
Our brains are weak against emotions. Lots of content exploits that. Take a deep breath, step back, and think.
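If it helps, you can turn that pause into a checklist. The toy function below flags a few common sensationalism signals; the word list is invented for illustration and is nothing like a real clickbait detector, but it shows how mechanical these triggers are:

```python
# A toy "dopamine check" for headlines. This is a crude keyword
# heuristic, nowhere near a real clickbait detector; the word list
# is invented for illustration.
TRIGGER_WORDS = {"shocking", "never", "must", "bombshell",
                 "exposed", "your job is next"}

def sensationalism_flags(headline: str) -> list[str]:
    text = headline.lower()
    flags = [word for word in TRIGGER_WORDS if word in text]
    if "!" in headline:
        flags.append("exclamation mark")
    if headline.isupper():
        flags.append("all caps")
    return flags

for h in ["Amazon to Lay Off 600,000 Workers with AI!",
          "Amazon plans to automate warehouse roles, NYT reports"]:
    flags = sensationalism_flags(h)
    verdict = "pause before clicking" if flags else "probably fine"
    print(f"{verdict:22} <- {h}  {flags}")
```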
4. Recognize your own bias while using AI
This is most important. AI might be biased, but so are we. Ask yourself what perspectives you prefer and why certain information attracts you.
For instance, if you react strongly to news that "AI is stealing jobs," do you perhaps have anxiety about job security? Conversely, if you only focus on "AI boosts productivity," are you perhaps leaning toward technological optimism?
Neutrality isn't just AI's problem. It's ours too.
5. Create your own "neutrality standards"
What neutrality means can differ from person to person. But some principles are common:
- Consider diverse perspectives: Don't just listen to one side
- Distinguish facts from opinions: "Amazon created internal documents" (fact) vs. "Amazon is a bad company" (opinion)
- Understand context: Don't accept numbers or data without context. Check if "600,000" means new hires or layoffs
- Acknowledge your bias: Accept that you can't be perfectly neutral either
If you consume information with these standards, you can have much healthier information habits.
Closing: Living in the Era of Information Junk Food
Remember Mr. Kim and Ms. Lee? (Again, these are fictional people.)
Mr. Kim was initially shocked by the "layoff" news but later found accurate information and felt relieved. He realized the company wasn't trying to fire him immediately but preparing for long-term automation. So he signed up for an online coding course over the weekend, thinking, "If I understand warehouse management systems, I can remain valuable in the AI era."
Ms. Lee skipped the fear videos and signed up for AI tool training her company offered. She decided, 'I should learn to collaborate with AI, not compete with it.' And now when YouTube's algorithm recommends fear content, she clicks "Not interested."
We live in an era full of information junk food. Exaggerated, distorted, biased content overflows. AI can either solve this problem or make it worse.
But the decision ultimately rests in our hands. What information we choose, how we consume it, how we share it—that's what matters.
Perfect neutrality is impossible. Not for AI, not for media, not for ourselves. But we can move toward "better neutrality." Seeking inclusive perspectives, pursuing truth, recognizing our biases.
I'll ask one thing of everyone reading this today: Next time you see a sensational headline, pause for 3 seconds before clicking. Ask yourself, "Is this really true? What other perspectives exist?"
Those 3 seconds will gradually change our information ecosystem—and our thinking.
AI is a tool. Like a knife. You can cook with a knife or cause harm with it. What matters isn't the knife but the hand holding it. Same with the AI era. Technology isn't what matters—we are.
So from now on, let's choose information health food. Junk food tastes good, but health food keeps us alive.
Let's start a healthy information life together. You've got this!
References
- Amazon AI Plan: The New York Times, "Inside Amazon's Plans to Replace Workers With Robots" (October 21, 2025) - https://www.nytimes.com/2025/10/21/technology/inside-amazons-plans-to-replace-workers-with-robots.html
- Stanford AI Index 2025: Stanford HAI - https://hai.stanford.edu/ai-index/2025-ai-index-report
- Political Neutrality in AI: arXiv paper - https://arxiv.org/abs/2503.05728
- Anthropic Philosophy Team: TIME, "Amanda Askell: The 100 Most Influential People in AI" - https://time.com/7012865/amanda-askell/; Daily Nous - https://dailynous.com/2025/05/28/philosophers-and-anthropics-claude/
- McKinsey State of AI 2025: McKinsey - https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- EU AI Act: European Union - https://artificialintelligenceact.eu/
I hope some thoughts about AI, information, and our relationship with both reached you.
And hey, Anthropic and OpenAI—are you watching? Even old rookies like me want to work with you! 😂