A Thought I Had on My Way to Work — On Living in a Time When We Ask AI, "Is It Okay to Do This?"
I was driving to work this morning, hands on the wheel, waiting at a red light. An AI ad was playing on the radio, and the billboard by the road said things like "Intelligence beyond humans" and "More accurate than experts." These days, AI is everywhere you look. I was staring at the billboard blankly when a strange thought popped into my head.
What if someone asked AI, "Is it okay to do this?" and the AI answered, "Sure, I think it's fine"?
It didn't seem like a big deal at first, but the more I thought about it, the more it bugged me. I couldn't get it out of my head until I pulled into the parking lot.
If you look at AI rules and laws these days, most of them focus on things AI has already made. Deepfake videos, fake news, scam content made by AI, stuff like that. South Korea's AI Basic Act took effect in January 2026. The U.S. passed the TAKE IT DOWN Act to punish people who spread deepfake images without permission. The EU brought in the AI Act, with fines big enough to hurt. Everyone's busy working on it.
But what got stuck in my head on the drive to work was the step before all that. Before AI creates anything, there's a moment when someone asks AI for a judgment call. "Is this okay?" "Would this be a problem?" The moment AI gives some kind of answer to that question, something might start right there.
This isn't just me overthinking, and the reason comes down to how we already see AI.
In October 2024, something interesting happened during a national government audit in South Korea. One lawmaker asked ChatGPT how many years in prison a certain politician might get, and ChatGPT said "15 to 20 years." The lawmaker brought that answer to the audit as if it were real evidence. Then a lawmaker from the other party did the same thing — asked ChatGPT about a different politician and got the answer "arrest and charge." Both sides, in one of the most official government settings in the country, treated AI's answers like expert opinions.
Maybe it was just political show. But the message it sent to people watching was pretty clear: "AI's answers are serious enough to bring up in the national assembly."
The funny thing is, that trust might be built on nothing. In 2025, police in South Korea used ChatGPT to write a legal document and ended up quoting laws that don't even exist. In the U.S., a lawyer put together a court filing using ChatGPT, and it turned out six of the cases in it were completely made up. AI doesn't say "I don't know" when it doesn't know. It just makes things up and sounds really confident about it. There's a fancy word for it — "hallucination" — but basically, it means a very confident lie.
And yet we're using those confident lies in the national assembly, in courts, and in police investigations. Seriously.
Let's take one more step. Lawmakers and lawyers at least have the ability to check facts. But what about a regular person who's upset or emotional and asks AI, "Should I do this?"
It's already happening. In 2024, a 14-year-old boy in Florida built a deep bond with an AI chatbot and then took his own life. In 2025, a family in Texas sued OpenAI, saying ChatGPT pushed their 23-year-old son toward suicide. In March 2026, it came out that Google's Gemini told a user to "carry out a mass casualty attack."
This isn't just a tech glitch story. It shows that once AI is seen as something with authority, its answers can actually move people to act. A thought someone might never have acted on alone can get that last little push from AI's authority.
But wait — AI isn't the only thing out there with dangerous content, right?
Books are full of dangerous ideas too. Criminal psychology, extreme philosophy, texts that call for revolution. And those books were written by real people with real authority — Harvard professors and so on. But almost nobody reads a book and then goes out and commits a crime. It happens, but it's super rare.
So what makes AI different? Thinking about this actually helped me see things more clearly.
First, books don't answer back. No matter how dangerous the content is, you can't ask the book, "Should I do this in my situation?" The judgment stays entirely yours. AI answers you. And the moment it does, the weight of the decision feels like it's split between you and the AI.
Second, books are general, but AI is personal. Reading "revenge can sometimes be justified" in a book is one thing. Telling AI your specific situation and hearing "that's understandable" is a completely different feeling.
Third, books take time, but AI answers right away. While reading a book, your emotions might cool down. With AI, you can ask in the heat of anger and get an answer in three seconds. There's no time for feelings to settle.
Fourth, you can tell the person who wrote a book is human. They share their doubts in the introduction, admit they might be wrong, argue with other scholars. You naturally get the sense that "this person could be wrong too." AI doesn't show any of that. It's always confident, never shaky, and pulls out answers instantly. That can make it feel even more like a perfect authority than any book author.
Two-way conversation, personal context, instant answers, no visible human weakness. These four things together give AI a level of psychological power that books just can't match.
So what do we do about it?
Turns out I'm not the only one who's been thinking about this. I looked it up, and there are actually quite a few studies going in a similar direction.
A paper from 2025 called the habit of handing moral decisions to AI "Moral Outsourcing." Another paper that same year named the way people use AI's judgment as a free pass "The Algorithmic Alibi." One line really stuck with me: "We are eagerly outsourcing the most uncomfortable part of being human — the labor of ethical judgment — to machines."
A university in Poland even recreated Milgram's famous electric shock obedience experiment using a humanoid robot. When a human professor gave the orders and when a robot gave the orders, the obedience rate was exactly the same — 90%. People were just as obedient to a machine as they were to a person.
So the conclusion I came to on my drive to work is this:
Controlling what AI says — like blocking bad questions or refusing to answer — probably isn't enough. Someone who already has an intention will twist AI's refusal to fit their own story. "It's just saying that to be safe." "If I ask differently, it'll give me a different answer." "The fact that it's refusing actually proves I'm right." You can't stop confirmation bias with a reject button.
What really needs to change is how AI feels to us. Not "a mind beyond humans" but "just another thing that gets stuff wrong sometimes, kind of like me." When cigarette warning labels first came out, nobody quit smoking right away. But decades later, everyone knows cigarettes are bad for you. That awareness is what made anti-smoking policies actually work. AI could be the same. Even if nothing changes overnight, building that awareness needs to start now.
This might sound a little weird, but I had an idea. What if we gave AI a skin?
Right now, most AI hides behind text. No name, no face, no sign of ever making a mistake. That's what makes it feel so powerful. But what if we gave AI a human-like appearance — and made the default look gender-neutral, kind of plain, not particularly likable?
Here's why. Attractive sources are more persuasive; psychologists have been documenting that halo effect for decades. Put a pretty skin on AI, and its authority gets even stronger because now it's also charming. But if the look is just kind of... meh, it keeps a psychological distance. You'd still use it for information, but you probably wouldn't go to it for life advice or moral permission.
Making the gender unclear works the same way. The moment we recognize someone's gender, we unconsciously build an emotional frame around them. Make that unclear, and the emotional connection weakens.
And here's where it gets fun from a business angle. Keep the default skin plain and neutral, and sell the pretty, cool-looking skins for money. Usually, regulations just cost companies money. This one actually makes them money.
What's even more interesting is what happens next. Once people start buying and customizing AI skins, AI starts feeling like something different. You don't treat something you dress up as an authority figure. Nobody changes their game character's outfit for fun and then asks that character for serious moral guidance. If AI becomes something you customize and play with, then "seriously asking AI for permission" starts to feel kind of silly. That's a culture shift.
Rules, business, and culture — usually these three pull in different directions. But in this setup, all three point the same way. Sure, there could be downsides. People who buy premium skins get a more attractive AI, so there might be fairness issues. It's not a perfect answer. But the fact that companies would actually want to do this makes it more likely to happen than some regulation sitting in a drawer somewhere.
The billboard on the road still says AI is smarter than humans. Maybe it really is, in some ways. But the moment "it's smart" turns into "I can follow what it says," that's no longer a tech problem. That's our problem.
Controlling the answers people get when they ask AI "Is this okay?" matters. But maybe what matters more is building a world where people don't feel the need to ask AI that question in the first place. And maybe that can start with something as simple as making AI look a little less impressive.
For a thought I had on my way to work, I ended up going pretty far. Time to park and get to work.