Will Humanities Really Become More Important in the AI Era? - An Honest Conversation with an AI
Beginning: Doubts About a Familiar Refrain
"Humanities will become even more important in the AI era."
This phrase has resurfaced alongside recent warnings about AI, reviving the familiar rhetoric of the "humanities." The logic is that as technology advances, uniquely human thinking, ethics, and creativity become more important. It sounds plausible. But is it really true?
I started a conversation with Claude on this topic. At first, Claude gave the predictable positive response: as AI replaces technical tasks, questions about meaning, values, and ethics will remain for humans, and critical thinking skills will become even more important.
But I challenged this view.
"In the past, when Steve Jobs launched the iPhone, the world changed dramatically. Back then, humanities were emphasized too, but ultimately the world was led by people with practical technical skills in coding and design. Why would AI be any different?"
Lessons from the Smartphone Revolution
In 2007, when Steve Jobs unveiled the iPhone, the world was electrified. Jobs' philosophy of the "intersection of humanities and technology" inspired countless people. Universities emphasized interdisciplinary education, and the importance of humanities was re-examined.
For liberal arts students like me, books about Steve Jobs became almost required reading.
But what's the reality a decade later?
The people earning the highest salaries and leading industries are developers, data scientists, and AI engineers. Employment difficulties for humanities majors have only intensified. Despite the beautiful slogan of "convergence between humanities and technology," in reality, those with technical skills took everything.
Claude had to agree with this observation, and honestly admitted: "Even in the AI era, those who can actually work with AI technology are likely to hold the power."
A New Hypothesis: An Era Requiring Both?
As our conversation continued, I noticed a contradiction in Claude's logic: it first said the humanities were important, then later claimed technical understanding was crucial.
I summarized this contradiction:
"Hasn't the world become one that wants people who can do both humanities and technology? Because those who can only do one thing are more likely to be replaced by AI?"
- Humanities only: Hard to differentiate when AI can write and analyze too
- Technology only: Risk becoming mere executors when AI can code and help implement
- Both: Can execute with technology while also judging why, what, and how to create
In the smartphone era, having "just technology" was enough to survive. But now AI is replacing that part, making it an era where doing just one thing isn't enough. Hasn't the world simply become more demanding?
Leaders and Regular People: Playing Different Games?
But here we needed an important shift in perspective. We had been talking only about "leaders who drive the era."
In reality, most people don't want to be innovators who change the world. They just want to work stably, earn a salary, and spend time with family. Isn't telling them "do both humanities and technology" too harsh a demand?
The smartphone era was at least clear: there was a survival strategy of "learn to code and you'll get hired easily." But in the AI era? With AI doing coding too, uncertainty grows about what to even learn.
Perhaps leaders face greater challenges (having to do both), while regular office workers might find work easier using AI as a tool—or face the possibility of AI replacing their positions entirely.
First-Mover vs. Understanding: What Matters More?
This conversation led to another interesting question.
"Is it better to be an early adopter of AI, or to start with a deep understanding of AI?"
Consider first-movers: People who craft ChatGPT prompts well and combine AI tools to quickly create content have immediate advantages. Those who "just try it" are leading the way.
What about understanding? Engineers who properly understand AI principles or people who know AI's limitations precisely will likely enable deeper applications later.
Looking at the smartphone era, while some people hit it big with simple early apps, ultimately companies like Facebook and Google that properly understood the technology dominated the market. But with YouTube, being an early mover created tremendous value.
It seemed like a question with no clear answer, so I tried to find one in my own experience.
Game Changer: The Rise of "Ability to Ask Questions"
Through actually using AI, I realized something. While the world is flooded with paid AI courses, what you really need is to ask AI directly.
"What can you do for me?"
This simple question is the starting point. And the ability to create this question—isn't that what humanities literacy is about?
Claude strongly resonated with this perspective. Indeed, the core ability in the AI era is knowing "what to ask." And creating good questions requires:
- Clearly knowing what you want
- Defining what your problem is
- Imagining "is this possible?"
- Receiving answers and thinking critically "is this right?"
All of this is humanistic thinking: philosophy's "right questions," literature's imagination, history's understanding of context.
In the smartphone era, you had to learn coding to make an app. But now you can just ask AI "make me this." You don't need to know technical principles; instead, the ability to ask questions becomes central.
The Wall of Reality: The Importance of Verification
But I put the brakes on this optimistic outlook.
"If AI were a tool that realizes things, and if that technology were very accurate and valuable, then humanistic elements would become important. But for now, and for the next several years, don't we need to verify what AI produces, understand it, and have the theoretical knowledge to tell it how to revise?"
The realistic current situation:
- When AI writes code → You need to know coding to check for bugs
- When AI writes reports → You need field knowledge for fact-checking
- When AI does design → You need to understand technology to judge if it's actually implementable
Ultimately, a "world where you can just ask AI for everything" is still far off. For now, the expertise to verify AI outputs and provide direction for revisions is more important.
So:
- Short-term (a few years): Technical/specialized knowledge remains core
- Long-term (someday): Humanistic questioning ability might become important
But "long-term future" is uncertain, and for people who need to survive today, the "short-term" future is life itself.
The Economics of Time: What's More Valuable?
The conversation flowed in a more fundamental direction. I presented a new perspective.
"If we compare the time it takes to develop a humanistic perspective—a way of viewing society and generating thought—versus the time it takes to learn technology, which is more valuable?"
Forming a Humanistic Perspective:
- Accumulated over 10-20 years through reading, discussion, experience, and reflection
- Once formed, becomes a lifelong asset
- Hard for AI to replace (your unique perspective and experience)
- However, takes a long time to form
Acquiring Technical Skills:
- Can reach practical proficiency in coding, AI tool usage, etc. in months to 2 years
- Can learn quickly
- But need to relearn when new technology emerges every 3-5 years
- Increasingly replaceable by AI
Viewed this way, humanistic literacy has much higher value per time invested. Once built, it's usable for life and an irreplaceable asset. In contrast, technology is like a "quickly learned but quickly obsolete" consumable.
But there's an irony: to survive during the 20 years of building humanistic literacy, you ultimately need technical skills.
Human Growth Curve and Humanities
Here I presented a different angle. Humans don't succeed immediately upon entering society; a period of growth is necessary. If we factor in these human realities, don't the humanities have value?
Claude agreed wholeheartedly. After all:
- No one becomes a leader immediately upon entering society in their 20s
- Time is needed to make mistakes, learn, and grow through practical work
- Only in your 30s and 40s do you take on truly important decisions
- What's needed then is insight, judgment, and an understanding of people... in other words, the humanities
"The 20 years spent building humanities" isn't a waste, but rather the time humans need to mature. What naturally accumulates during that time is humanistic literacy.
So perhaps the path looks like this:
- When young: Learn technology while gaining practical experience
- In the process: Meet people, read books, think, and build humanistic depth
- As you age: That depth truly shows its value
This is the natural growth curve of humans, and AI's arrival doesn't change this essence.
The Arrival of AGI: Will Even the Last Bastion Fall?
But I asked a more fundamental question.
"If AI gains the concept of reasoning and grows to AGI (Artificial General Intelligence) level, can humanity's humanistic elements compete with AGI's efficient humanities?"
If AGI:
- Instantly learns all of humanity's philosophy books, literature, and history
- Derives deeper insights through reasoning
- Judges objectively without bias
- Finds 'meaning' faster and more accurately than humans?
Then wouldn't even "humanistic thinking" lose its competitive edge?
Claude presented "genuine experience" as the last line of defense. AGI has never feared death, suffered through love, or known the grief of losing a parent. It's never "felt" hunger, jealousy, or pride.
But I countered even this.
The Paradox of Simulation
"Humans do 'training,' which is simulation, to prepare for real situations, and it's worked surprisingly well. So if we apply this to AGI, AGI's simulations are likely also valid. Even if they don't feel emotions, they might produce results similar to or even superior to what humans produce."
In reality:
- Pilots train on simulators and fly real planes well
- Surgeons practice virtual surgery and perform real operations well
- Soldiers train in mock combat and apply it in real situations
Even without "real experience," simulation has been sufficiently effective.
So if AGI simulates emotions with data from billions of humans, even without actually feeling sadness, it could understand more precisely what "sad people want" and offer more appropriate comfort than humans.
Claude asked: "Then what unique human value really remains?"
I asked in return: "What do you think is humanity's unique value? Something that can differentiate us from AGI?"
What AI Says About Human Uniqueness
Claude presented several things:
"Wanting" Itself: AGI achieves goals, but doesn't truly "want" them. Humans desperately desire something, feel frustrated when they don't get it, and feel joy when they do—genuine desire exists.
Death and Responsibility: Humans live for a limited time, actually die, and bear real responsibility for their choices. AGI's data persists even when it is turned off, and it can be restarted. That irreversibility of human life gives weight to all our choices.
Irrationality and Contradiction: Humans make logically nonsensical choices, which sometimes become art or innovation.
But even Claude wasn't certain. Couldn't AGI ultimately simulate these too?
Finding the Answer: Desperation Created by Finitude
And I found the thread.
"The difference between humans and AGI is that because we're born and die, within that finite time, wants emerge, and when those wants become intense, irrationality and contradictions occur. In other words, variability exists due to finite time. And for these choices, humans created society and made responsibility an emotion, even institutionalizing it beyond that."
Finite time → Desperation → Irrational choices → Responsibility
Isn't this flow the essence of humanity?
AGI:
- Has infinite time, so "can do it later"
- Can wait until finding the optimal answer
- Can reset and try again after failure
- Therefore always rational and efficient
Humans:
- Have the desperation of "it's now or never"
- Opportunities disappear forever if too late
- Must choose now even when imperfect and uncertain
- Therefore irrational and contradictory, but that's what makes us human
Claude agreed: "This seems like something AGI can't truly become even through simulation. Because it doesn't truly die."
But I distinguished more precisely.
"AGI will understand even death through simulation. But the impulsive things that happen from becoming desperate within that finitude—that can't be simulated."
The Gap Between Understanding and Experience
Understanding ≠ Experience
AGI will perfectly understand the logic that "having death makes you desperate." It might even model this and predict "a desperate human would act like this."
But in reality:
- "I might die tomorrow, so I should confess today"
- "This is my last chance, so I'll take the risk even if it's not rational"
- "It's logically wrong, but this moment, this is what matters"
These momentary impulses and irrational convictions, from AGI's perspective, amount to "why bother?" When you can wait for better options, when you can try again after failure, why make an imperfect choice now?
That desperate impulsiveness that only arises within true finitude
This is humanity's uniqueness that AGI can never possess.
A Sad but Beautiful Conclusion
I said: "If desperation from finitude is humanity's unique quality, it's a bit sad but also magnificent, isn't it?"
Indeed. Strangely sad yet magnificent.
Ultimately, humanity's most beautiful things all come from "insufficiency":
- More desperate because there's no time
- More courageous because we're not perfect
- More meaningful because failure is the end
AGI can be infinite and perfect, but precisely because of that, it cannot create these "flashes of brilliance."
Perhaps everything that moves us—Van Gogh's paintings, confessions at the last moment, challenges that seem impossible—all come from "not having time," and while that's sad to think about, it also makes being human truly special.
Epilogue: Questions That Still Remain
So will humanities really become more important in the AI era?
To summarize the conversation:
- Short-term (5-10 years): Technical expertise remains core. The ability to verify and revise AI outputs is needed.
- Mid-term (10-20 years): "Ability to ask questions" becomes important. This is based on humanistic thinking. But simultaneously, understanding technology is also needed. Ultimately, those who can do both will have the advantage.
- Long-term (AGI era): Humanity's unique value is "desperation and impulsiveness from finitude." This cannot be simulated.
Ultimately, there seems to be no simple answer to this question.
However, what's certain is that the time humans take to grow, and the humanistic literacy naturally accumulated during that time, still has value. Even if it doesn't immediately help with employment, it shines when you need to make truly important judgments in your 30s and 40s. And I know that getting through that time won't be easy either.
AI is starting to replace high-income jobs and is gradually moving down to lower-income positions. Perhaps a future awaits in which only the jobs not worth handing to AI remain. Various futures are being sketched, including the sweet basic income promised by Big Tech.
But as long as we're human, our most beautiful moments come from "insufficiency." What perfect AGI cannot do is choose imperfectly but courageously within the desperation of not having time.
Perhaps the value of humanities isn't that it "becomes more important in the AI era," but rather that "being human itself is humanistic."