AI Shockwave – Humans Must Read Or Innovation Dies


● AI Crunch – Jobs Wiped, Profits Soar

In the era when AI reads and writes, the most realistic answer to “Why do I still have to read?” (key takeaways from Professor Eunsoo Lee of Seoul National University)

In today’s piece I will not rehash the “AI will replace humans / AI won’t replace humans” debate.
Instead, I will work through four core questions that practitioners feel immediately in the field.

1) In an era where AI finds patterns, who is the subject of “discovery”?
2) If AI does all the “collection/summary”, where does human competitiveness come from?
3) In an era when the machine reads for us, why does the ability to read for myself become more important?
4) Now that AI counseling has become commonplace, what makes human “communication” unique?

And at the end I will pick out and summarize the “really important points” that news coverage and YouTube rarely touch on.
(If you are a leader, planner, or analyst building an AX roadmap, this is where the difference lies.)


1) News briefing: Professor Eunsoo Lee’s redefinition of “human intelligence” in the AI era

1-1. Core point: Human uniqueness is not a ‘constant’ but a ‘variable’

In the past, “things only humans can do” felt like fixed abilities (constants).
But since generative AI, reality looks closer to the opposite.
Every time technology has changed, humans have redefined their roles; on this view, the ability to redefine oneself is itself what makes humans unique.

This also carries weight for the global economic outlook.
The faster productivity innovation proceeds, the more expensive the core of a job (responsibility, planning, meaning-making) becomes.

1-2. Why ‘history’ is needed: Look at the “direction (vector)” rather than predicting the future

AI seems like an unprecedented technology, but humans have always made tools to extend intelligence.
Writing, the abacus, calculators, search, Wikipedia… and now generative AI.

The professor’s point is this:
Do not extrapolate the future from a single point; use the past to read the direction in which the technology-human relationship is moving.
This becomes the basic frame for designing a corporate AX (work transformation) roadmap.


2) Breaking down human intelligence into four verbs: discover·collect·read & write·communicate

2-1. (Discovery) In the AI era humans become ‘secondary eyewitnesses’ rather than ‘primary eyewitnesses’

In the past, discovery meant that I observed nature directly (as the primary eyewitness) and found the “laws/patterns”.
Today the data is so vast that AI extracts the patterns first, and humans view and interpret the results.
Hence the expression that humans become “secondary eyewitnesses”.

Does this mean the human role disappears?
The claim is that it actually becomes sharper.

What remains for humans is “planning”:
deciding which world to explore, which hypotheses to form, and which variables and scope to design.
This goes beyond simple task automation and is directly tied to where decision rights remain as industrial structures change.

For reference, this point also matters from an investment perspective.
As AI’s ability to find patterns spreads, the ability to decide “what problem to solve” becomes scarce.
Ultimately the premium shifts to planning, product, and strategy.

2-2. (Collect) What AI gathers is a ‘meal kit’; the finishing touch comes from the human ‘chef’

One fundamental reason people like AI is the fatigue of collecting.
Information is so abundant that “filtering” has become more important than “searching”.

AI is strong at comparing vast amounts of information and producing plausible answers.
But the professor likens AI’s output to a “meal kit”:
on average it is decent and the basics are solid, but by itself it is not “my own dish”.

This is where humans supply “meaning”.
Why does this information matter to me?
What should be discarded, and what kept?
Does connecting A and B create a new perspective?

This is also today’s paradox of productivity innovation:
when everyone has the same information and the same summaries, the difference comes from “selection and editing”.

2-3. (Read & Write) In the era when the machine reads for us: if I don’t read, there is no reason for us to meet

This is where the professor’s warning is strongest.
As summaries become ubiquitous, students and organizations all end up saying “similar things.” Why? Because individuals no longer read the texts thoroughly themselves.

For companies this is genuinely frightening.
If everyone comes to meetings armed only with AI summaries, the conclusion becomes “the sum of averages.”
What the field actually needs often hides at the periphery, and that is exactly what summaries miss.

Summarization and translation have clear advantages.
They are extremely useful for the first judgment of whether something is worth reading at all.
But if “reading” itself stops, nobody knows how our cognitive structures will change. The professor uses Wegovy (a weight-loss drug) as an analogy:
short-term efficiency is certain, but the long-term cumulative effect is unknown.

2-4. (Communicate) As AI counseling increases, the ‘finiteness/imperfection’ of human communication becomes more valuable

The professor notes that psychological counseling and conversation are now among the top uses of AI.
This signals how many people are searching for “someone who will listen.”

So does AI replace human communication?
The professor’s view is different.
The uniqueness of human communication comes from the “finiteness” that creates urgency, and the “imperfection” that creates depth.

AI gives smooth, average responses, but a relationship is also a history of friction and adjustment.
Failing, misunderstanding, and readjusting are themselves the skills of relationship.
He questions whether turning all of that into a “frictionless experience” really develops the ability to relate.


3) Field-perspective interpretation: Points you can use immediately in an AX roadmap (2026)

3-1. The standard becomes not “AI does it” but “Can I take responsibility for it?”

It is hard to express in percentages how much of an AI-assisted result is truly mine.
The professor offers a clearer standard.

Can I put my name on it and publish it externally?
Am I prepared to take responsibility if problems arise?
And above all, can I explain why I did it that way?

This connects directly to corporate AI governance and risk management.
Ultimately the organization’s operating system should be built not on regulations but on “explainability and the allocation of responsibility.”

3-2. ‘Discovery’ work becomes more important and more expensive

As AI replaces analysis, the person who decides “what to analyze” captures the upstream.
That is, planning, problem definition, experiment design, and data collection design become core competencies.

This trend accelerates as digital transformation moves to the next stage (AX).
Generative AI speeds up execution, and the cost of being on the wrong path grows accordingly.

3-3. ‘Reading debt’ gnaws away at organizational competitiveness

If a culture of reading only summaries takes hold, the organization drifts toward “plenty of good-sounding words, but no decisions.”
Paradoxically, the more polished the reports, the less clear the message becomes.

So what leaders must do is simple.
Not a crude rule like “no summaries allowed,” but a structure that rewards people who read and interpret at least the core texts themselves.


4) The “really important points” that other YouTube channels and news outlets rarely mention (summarized separately)

4-1. The biggest risk in the AI era is not ‘skill degradation’ but ‘diversity collapse’

The more you rely on AI summarization and generation, the more outputs converge toward the mean of the distribution.
Perspectives within the organization become similar, meetings become similar, products become similar.
In the long run, this kills innovation.

4-2. The moment you become a “secondary eyewitness”, human responsibility grows larger

When AI says “there is a pattern,” it is still a human who decides to believe it and act on it.
In other words, as automation increases, responsibility does not shrink; it moves upward.

4-3. The real standard for AI collaboration is not ‘copyright’ but ‘explainability’

When practitioners keep hiding behind “the AI wrote it,” trust in their work erodes quickly.
Conversely, if you can use AI and still explain the logic and evidence in your own words, that becomes your capability.
This difference is likely to determine salaries in the 2026 talent market.

4-4. The essence of AX is not “task automation” but “the reallocation of human agency”

Automation is a means.
The core is reorganizing who defines problems (planning), who assigns meaning (editing), and who takes responsibility (governance).
Fail at this, and the digital transformation gets “done” without producing results.




< Summary >

Human uniqueness is not a fixed ability but a ‘variable’ that is redefined with each technological change.
Human intelligence in the AI era is organized around four verbs (discover·collect·read & write·communicate), and human agency remains above all in “planning” and “meaning-making”.
The outputs AI provides are like meal kits; the final interpretation, editing, and responsibility must be handled by humans.
As summarization and generation spread, organizations can homogenize and diversity can collapse, so “the ability to read and interpret for oneself” becomes a competitive advantage.
The standard for AI-assisted results is not a percentage but whether “I can publish it under my name, explain it, and take responsibility for it.”

*Source: [ 티타임즈TV ]

– Why must “I” read it, and not the machine? (Eunsoo Lee, Professor of Philosophy, Seoul National University)


