● AI displaces skills, slowing growth
“More and more things we can no longer do because of AI”: how the ‘Disskill Generation’ is changing the economy and education
1) The core point you need to know right now: When AI ‘replaces skill,’ the pace of growth can slow down
The biggest message here is this: using AI can raise ‘productivity,’ but it can also erode, or skip building entirely, the ‘basic skills’ that students (learners) must accumulate. That warning is the core of what Professor Kim Jae-in calls the Disskill Generation (탈숙련/탈스킬): the generation transitioning away from skill.
He emphasizes two points in particular: ① ‘knowledge/skill regression,’ where things you could once do disappear, and ② ‘growth skipping,’ where a foundation is never built in the first place. The warning is that these two can progress at the same time.
On top of that, the professor worries about what he calls “cognitive lifestyle disease.” Just as lifestyle diseases develop when you don’t exercise, the metaphor warns that if you don’t use the muscles of thinking, your ability to think can stiffen like a disease. Once you grasp this frame, it becomes clear why ‘AI use’ isn’t a simple pro/anti issue but a question of when, where, and how it should be used.
2) Terminology: Is the Disskill Generation a side effect created by ‘AI-native’ habits?
From Professor Kim Jae-in’s perspective, the Disskill Generation refers to this kind of situation.
- Who it affects: mainly students (elementary, middle, and high school, and university)
- Structure: the ‘period of training that fills in the basics’ gets skipped
- Aspect 1: outsourcing, i.e., handing work over to AI or other tools and shrinking what the learner does themselves
- Aspect 2: cognitive offloading, i.e., delegating the thinking process itself to tools
The point here isn’t that AI is a “bad tool.” It’s that at the stage where students should be growing, the ‘engine of learning’ can be switched off, so the “basics-building period” that learning used to accumulate through can simply be skipped.
3) Abilities that collapse especially in education: Writing (thinking process), debate, collaboration, and verification skills
Professor Kim names specific abilities that he says “mostly collapse,” and writing sits at the center of them.
- Writing: not just the finished text, but the entire thinking process behind it (organizing ideas, building logic, constructing evidence)
- Summarization: even when AI produces a summary, risk arises if there is no training in judging whether that summary is actually correct
- Debate/communication/collaboration: with less thinking practice, these capabilities shake as a whole
- Language competence (translation, etc.): without it, you cannot filter “plausible-looking” results
In other words, when AI writes for you, “sentence generation” gets faster. But the professor sees something more important: “the ability to understand properly and to revise and verify.” If that verification ability weakens, education may never connect to real competence.
4) The professor’s conclusion on the debate of “It’s okay to use AI vs you shouldn’t use it”: Separate ‘education’ and ‘work’
The professor advises against judging industry and education by the same yardstick.
- Industry (work) area: There are clearly sections where AI is useful as a ‘tool’
- Education area: Students are still building foundational abilities, so the trade-offs are bigger
In education, AI use exacts a price beyond mere “convenience”: less training of potential during the very period meant for growth. That is why the professor goes so far as to call a blanket “mandatory AI use in schools” policy a form of violence.
5) The sharpest claim: “Don’t ask AI for knowledge” — If you change only the questions, the answers change too
Here, the professor’s test is simple but striking: ask the same question in a fresh session (close the window and reopen it) and the answer can change, sometimes even in substance.
- The answer changes every time: the model does not converge on a single correct answer; it generates sentences probabilistically
- It can sound convincing even when it is wrong: students therefore risk settling for low standards, as in “90 points is good enough”
- The more important the topic, the greater the risk: adopting an AI answer without expertise of your own can mean large losses
What the professor stresses is that the model does not guarantee 100% truth. It follows that you shouldn’t trust AI answers the way you trust search, where you can track down verifiable sources.
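The claim that the answer can change on every ask follows from how these models generate text: each next word is sampled from a probability distribution rather than looked up as a single correct answer. A minimal toy sketch of that mechanism (the words and probabilities below are made up purely for illustration, not taken from any real model):

```python
import random

def sample_next_word(distribution, rng):
    """Pick the next word according to its probability mass,
    instead of always returning the single most likely word."""
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# A made-up next-word distribution a model might assign after some prompt.
distribution = {"Canberra": 0.6, "Sydney": 0.3, "Melbourne": 0.1}

rng = random.Random(0)
# Asking the same "question" 100 times yields more than one distinct answer.
samples = [sample_next_word(distribution, rng) for _ in range(100)]
print(sorted(set(samples)))
```

Because every word is a weighted draw, a plausible-sounding but wrong answer (here, “Sydney”) comes out some fraction of the time, which is exactly why verification, not trust, is the appropriate stance.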
6) “Shouldn’t we just write better prompts?” The answer: Still, getting to 100 points is difficult
A counterargument naturally arises: “Wouldn’t it become more accurate if we wrote specific prompts and requested sources?” The professor responds that even attempts like RA (tools for verifying responses against sources) have limits, and that it is still hard to guarantee anything close to 100% truth.
So what education needs isn’t a “convincing answer” but the ability to catch mistakes on your own. That ability doesn’t develop automatically through AI use; it requires training. That is the conclusion.
7) Rebuttal to the logic of “If you don’t use AI, you’ll be left behind”: The direction of the risk of being left behind can differ
The professor acknowledges that there are clearly areas where you fall behind if you don’t use AI. At the same time, he believes there are even more areas where you can fall behind by using it.
- Examples of misuse: cases where professionals such as judges and lawyers relied on AI answers in their reasoning and faced disciplinary action or other incidents
- Core logic: important decisions require ‘review and verification,’ but students still in education may lack that verification ability
He also points out that this varies by industry, and draws a line against making AI mandatory in every job.
8) “AI pessimism vs can’t developers just talk?” — A humanist’s voice, and why it’s needed
In the end, this debate leads to a question of social responsibility. Asked, “Why is a humanities scholar without AI credentials talking about AI?”, the professor answers with a car analogy.
- Cars: engineers aren’t the only ones entitled to discuss quality; users who experience the ride comfort and driving feel have the right to evaluate it too
- AI is also a product: if it is a service or product, society and its users have the right to assess its impact and risks and to criticize it
- Especially in education: because AI risks disrupting growth, a message calling for ‘minimizing’ its use is needed
Ultimately, the humanist’s role is not to build the technology, but to warn about, and keep reviewing, its effects and risks for people and society.
9) A realistic direction: “A route of verifiable information + verification training” matters more
Rather than using AI as a default, the professor describes his personal habit: the more important the topic, the more he takes the detour of confirming sources through Wikipedia, news reports, and the like.
- Confirm sources: the sources attached to AI answers can themselves be nonsense
- Spot anomalies: it matters to have the reading comprehension to “notice something is off” in a translation or a sentence
- Cross-verify: run the same question through other tools and judge the differences
From this perspective, AI is not a “correct-answer generator.” The key is dividing roles: drafts and assistance for the tool, verification for people.
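The cross-verification habit above can be sketched as one simple rule: if independently obtained answers disagree, a human must check before the answer is used. A minimal illustration (the function name and the sample answers are hypothetical, not from the interview):

```python
def needs_human_review(answers):
    """Return True when independent sources disagree after light
    normalization, signaling a human must verify before use."""
    normalized = {a.strip().lower().rstrip(".") for a in answers}
    return len(normalized) > 1

# The same question posed to two chatbots plus a reference source
# (illustrative strings only).
sources = {
    "chatbot_a": "Canberra.",
    "chatbot_b": "Sydney",
    "encyclopedia": "Canberra",
}

if needs_human_review(sources.values()):
    print("Disagreement found: check a primary source before using this answer.")
```

The design point is that agreement between tools never proves correctness; it only lets disagreement surface cheaply, which is where human verification effort should be spent first.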
10) (Blog perspective) Core point to look at in the economy and employment: Generative AI is a productivity tool, but training methods can change employment structure
If we connect this from an economic outlook perspective, we get this picture.
- Generative AI (ChatGPT and the like) lowers the cost and raises the speed of repetitive tasks and draft work, so companies pursue short-term productivity gains across the value chain through automation and efficiency
- However, if education and training weaken ‘verification ability and thinking muscles,’ then over the long run the supply of skill can shrink or change in quality
- Ultimately, if disskill becomes severe, the human roles of ‘judgment, verification, and accountability’ may weaken, and that becomes a risk in specific job functions (planning, analysis, documentation, translation, law/compliance, etc.)
This flow can be summarized with keywords like AI automation, generative AI, digital transformation, productivity, and talent development. From a company’s perspective in particular, the view is strengthening that ‘training and verification systems,’ more than mere ‘adoption,’ are what become competitiveness.
11) The most important additional summary that isn’t covered well elsewhere (the real bonus of this post)
Many people sort AI into just “useful” or “dangerous,” but what makes this interview more unsettling is that the risk may take the form not of ‘a mistake’ but of ‘a missing stage of learning.’
To summarize, these are exactly the three core points.
- When AI provides the answers, students may end up doing less training to verify correct answers
- So competence may not actually accumulate; instead, people adapt to “convincing outputs” and lower their standards
- Misuse incidents aren’t limited to individual carelessness—they can spread as a risk across the organization and the entire work process
So, in the AI era, the question isn’t “whether to use AI or not,” but which stage (learning vs. work) and which role (draft vs. verification) should be left to people.
The single line readers should take away from this article
“Use AI as a tool, but design so that students grow not by getting ‘correct answers’ from AI, but by building ‘verification ability.’”
< Summary >
- Professor Kim Jae-in warns that AI use can create a Disskill Generation (loss of skills/basic omissions) in learners
- The problem isn’t just reduced productivity; it’s that thinking muscles like writing, debate, collaboration, and verification grow less
- Education and work should be separated, and mandatory AI use in education carries a high risk of missed growth as a trade-off
- AI answers are probabilistic: repeat the same question and the answer can change, and guaranteeing 100% truth is hard
- Ultimately, the key is using AI for drafts plus a human system of verification: confirming sources and cross-checking with other tools
- The humanist’s critique of AI is not a contest of technical expertise; it defends the right of consumers and users to warn about and respond to social impacts and risks
[Related posts…]
- Latest on AI Education: Learning design in the era of generative AI
- Latest on Disskill: Changes in employment and productivity caused by disskill
*Source: [ 티타임즈TV ]
– “More and more things we can no longer do because of AI” (Professor Kim Jae-in, Kyung Hee University)


