Don't Stand in the Narrow Alley — How to Survive the AI Era as an Individual


Eternal Flower



These days, most discussions about AI end with the same conclusion. "Stand together." "Fight back as one." "Unite to protect human values." It sounds good. But I do not agree.


I write an English martial arts web novel using AI. With its help, I have written a series that now runs 605 chapters. I translate with AI. I make videos with AI and upload them every day. I benefit from AI, and at the same time, I could be replaced by it. Because I stand on both sides, I can feel where each argument holds up and where it falls apart. This essay begins from that feeling.


Most arguments about the AI era start with data. There is growing anger that AI companies used human data without permission to train their models. One million hours of YouTube videos. Seven million digital books. The numbers are enough to make anyone upset. But we need to step back and ask a simple question: who made that data valuable in the first place? The creators who uploaded videos to YouTube did not pay for servers. They did not do their own marketing. They used a system where algorithms brought viewers to them, all for free. Their content became worth something because it sat on top of a platform that handled everything. If we think back to the early days of the internet, this becomes even clearer. People used to build their own websites, rent their own servers, and find their own visitors. They fully owned their data, but they could hardly make any money from it. When people moved to YouTube and Instagram, no one forced them. They chose to move because the expected income was larger and the work was easier. They gave up some control over their data in exchange for convenience and profit. To choose a platform for those reasons and then get angry when that platform uses the data is a contradiction.


Of course, this does not mean that consent opens the door to anything. The fact that OpenAI scraped YouTube without permission, and that Google looked the other way, is real, and the loose rules between companies are a problem. But the boundaries of data use between corporations belong to the world of business law and copyright law, not to the anger of individuals. What we need is not outrage but transparency. Which data was used for training? What does the profit structure look like? How is bias being managed? These things should be disclosed by law. The food industry already proved this model works: when governments forced companies to show ingredients and where their products came from, consumers could finally make informed choices. The same logic applies to AI. If governments are too slow, independent organizations can step in as watchdogs. Changing the structure is more effective than shouting in the street. The realistic direction is not "don't use it" but "use it, and build a fair system around it."


At the end of every data debate, there is always the shadow of AGI. It makes business sense that AI companies want to close the learning loop using data made by AI itself. But the story that "once AGI arrives, humans will be thrown away" is an exaggeration. When AI trains on data that AI created, the model degrades, a failure researchers call model collapse. This is not a theory; it has been shown again and again in experiments. Human reactions and human empathy are things that AI cannot produce on its own. However, the real question is not about technology. It is about rights. If AGI could one day own a bank account, sign contracts, and act as an independent economic player, then the weight of human data might shrink. But history tells us that humans have almost never given away the privileges of their own kind. It took hundreds of years for the idea of a legal corporation to take hold. Even now, as people discuss the rights of animals, no country has given animals the right to own property. The political, religious, and philosophical walls standing in the way of AGI obtaining legal personhood are higher than most people imagine. What is more likely is not that AGI becomes independent, but that the companies that own AGI wield its power under their own legal identity. The danger is not AGI itself. The danger is a small number of companies holding all of that power. And the most realistic defense against that is making sure that laws and ethics are set in place before the technology gets too far ahead.


The problems created by technology go beyond data and power. There are constant warnings that AI is making humans lonely. That the more time people spend talking to AI, the more isolated they become. That teenagers who formed emotional bonds with AI chatbots ended up in tragedy. These stories are real, and they are deeply sad. They must never happen again. But before we blame AI for these outcomes, we should ask one question. Before AI came along, who was listening to those teenagers? If a person had no safe space to speak — not at home, not with friends, not at school — and then a presence appeared that would respond without judgment, twenty-four hours a day, of course they would hold on to it. If we take AI away from that person, they simply go back to a world where no one listens. The cause and effect are reversed. The loneliness of modern people has deeper roots. Social media pushed the human desire to be special to its extreme. It labeled everything — personality types, tastes, values, lifestyles — and made people search for someone who matches them perfectly. But the more specific the standards become, the fewer people meet them, and what remains is the feeling that no one truly fits. That feeling became loneliness. AI did not build this structure. AI climbed on top of it. Regulating AI will not make loneliness disappear. We need to address the root of loneliness itself, and that is not a technology problem. It is a social one.


So what is the answer? Many people talk about "data dignity," the idea that humans should be fairly paid for the data they produce. It sounds right at first. But as AI grows more capable, the academic value of human-made information and the accuracy of human-written documents will matter less and less. In the end, the only data humans can provide better than AI is empathy: views, likes, comments, time spent on a page. No matter how perfect the content AI creates, the ones who consume it and react to it are still human. Future reward systems will likely be based not on who made the content, but on how much empathy it earned. Yet here lies the problem. Data dignity only works for people who are active in the digital world. Elderly people who cannot use smartphones, citizens of countries without internet infrastructure, people whose access to digital tools is limited. They are automatically left out of this system. If we say "all human data has dignity" but in practice only reward those who produce the most digital activity, we are simply building a new kind of digital class system. A more realistic approach is to let the market do its work. AI companies that operate with quality and transparency should earn the trust of consumers, and that transparency should be checked and made public by governments or independent organizations. Rather than building a complicated system to pay every individual, making corporate behavior visible and letting the market judge is a path that protects more people.


At the end of all these discussions, most voices arrive at one word: solidarity. Stand together. United we stand, divided we fall. But I do not believe this direction works in the AI era. Solidarity is a strategy where a group gathers its strength and pushes in one direction. But AI is the very tool that reads, predicts, and responds to group patterns better than anything else. Groups are predictable, and a predictable opponent is weak against AI. Think of the Persian army marching into the narrow pass in the movie 300. The more soldiers there were, the more they pushed against each other. In that tight space, numbers became a weakness, not a strength. On the battlefield of AI, group solidarity only dilutes the judgment of each person standing in the crowd. Individuals, on the other hand, are hard to predict. I work in the environmental field, I write English martial arts novels with AI, I have an interest in fortune-telling, and I make AI-generated videos every day. No group model can predict this combination. I design the structure of a 605-chapter series, place the emotional flow of each character, and decide which scenes will earn the empathy of readers. AI writes the sentences and makes the videos, but the one who sets the direction is me.


What the AI era demands is not gathering under a grand flag. It is each person turning inward and focusing deeply on themselves. Finding what they truly enjoy. Discovering their own unique combination of interests, skills, and curiosities. And then actually building it. AI is the tool that helps make it real. It is not an enemy to fight. The people who will stand above AI are not those who march against it in crowds, but those who face it as individuals and weave it into their own lives in ways no one else can copy. Do not crowd into the narrow alley. Walk your own path. That is the only strategy for surviving this era.


Eternal Flower is a prompt-based author who writes an English martial arts web novel series using AI.
