● America's AI Manhattan Project, a 24-Company Big Tech Alliance, the Vertical AI Stack, and Structural Stock Winners
The Real Reason the U.S. "AI Manhattan Project" Is Scary: A 24-Company Big Tech Alliance, AI Vertical Integration, and the Stock Market's "Structural" Beneficiary Sectors, All in One Clean Breakdown
Today’s post contains exactly three things.
1) Why the announcing entity was not the Department of Defense but the “Department of Energy (DoE),” and how symbolic that is.
2) Grouping the 24 disclosed companies into "AI models → semiconductors → cloud → platforms → diffusion ecosystem," and identifying where money tends to stay the longest.
3) The core point that other news outlets/YouTube rarely spell out: within the “bubble vs. national security” frame, some companies end up in positions that are structurally “hard to let fail.”
1) News Briefing: Why Did a Second “Manhattan Project” Erupt in AI?
Core point headline
The U.S. government (Department of Energy) unveiled a "Genesis Mission"-style collaboration that directly links private-sector frontier AI to national security infrastructure.
Rather than simple R&D support, it can be interpreted as moving toward incorporating private AI models as strategic assets (effectively weapons-grade).
Why the Department of Energy (DoE), of all places?
The core point is “symbolism.”
The 1940s Manhattan Project that developed the atomic bomb also lives on in the Department of Energy's lineage through its predecessor organizations, so the choice sends a strong message that "AI is seen as being on that level."
In other words, an official perspective shift has been made: AI is no longer seen as “software that makes money,” but as a “tool of national hegemony.”
What this means at the macro level
The AI hegemony race is being reorganized into an all-out war that includes not only technology competition, but also supply chains (semiconductors), infrastructure (data centers), power, and data governance.
As this trend meshes with global supply-chain restructuring, investment may shift weight from “one-off themes” toward “structural demand.”
2) Reclassifying the 24 Companies Into “5-Stage Vertical Integration” (The Easiest Map to Understand)
The gist of the original text is simply this.
The U.S. is bundling AI into a “vertically integrated ecosystem” and turning it into a strategic asset.
① AI Model (Brain) Layer
OpenAI, Anthropic, Google, xAI
The “frontier model Big 4” axis the market currently feels most directly.
Because this layer can connect directly to national security/defense/intelligence domains, it is likely to become tightly coupled with regulation, security, and procurement.
② Semiconductor (Muscle) Layer
NVIDIA, AMD, Intel + (Cerebras, Groq, and other specialized-chip startups)
The key takeaway to watch here is not just “who makes the faster chip.”
Once it enters the frame of “government procurement, security standards, export controls,” a technical edge can be amplified into a policy edge.
This can ripple across the broader semiconductor supply chain (packaging/networking/servers).
③ Cloud/Data Center (Heart) Layer
Microsoft, Amazon, Google + Oracle, CoreWeave
AI is ultimately a “computing infrastructure competition.”
No matter how good a model is, without data centers that can reliably run large-scale training/inference, it is game over.
In particular, the rise of Oracle and CoreWeave as “AI-specialized infrastructure” reflects different revenue structures from general-purpose cloud (high-density GPUs, high-bandwidth networks, scheduling optimization).
④ Hardware/Server/Storage (Blood Vessel) Layer
HPE (Hewlett Packard Enterprise) and other server/storage/supercomputer build-out companies
An AI data center is not completed by simply plugging in GPUs; it requires “integrated systems” including networking, storage, cooling, power, and operations software.
From this point, classic AI infrastructure investment (= AI data center investment) starts to show up in earnings in earnest.
⑤ Platform/Application/Consulting (Diffusion) Layer
Palantir, Accenture, IBM (including quantum), and others
Governments and large enterprises do not buy a “model”; in the end, they buy “outcomes that change work.”
Data integration, security, workflow transformation, and operational automation are core, and this segment can become strongly lock-in driven over the long term.
3) Market Reaction Points: Why Are Oracle and CoreWeave—Which Recently Fell—Being Mentioned Again?
Issue 1) Infrastructure stocks that stood at the center of the “AI bubble” debate
As mentioned in the original text, Oracle/CoreWeave fell sharply from their peaks and came under suspicion amid the narrative that "AI demand is being exaggerated."
Especially for Oracle, when credit-risk chatter (e.g., CDS spreads) circulates alongside, market sentiment can freeze rapidly.
Issue 2) But if the frame shifts to “national-security grade,” the game changes
From here is the key takeaway from an investor’s perspective.
If AI moves from a private-sector growth theme to a national infrastructure (quasi-public good) character, the “floor of demand” changes.
Even if a recession/earnings slump hits, there emerges an area supported by policy, procurement, and security budgets.
Issue 3) The CoreWeave “ClusterMAX 2.0” story (only the essentials)
SemiAnalysis rated the AI operating capabilities of data-center/cloud providers, and the point was that CoreWeave was the only one to receive a "Platinum" grade for two consecutive years.
What matters here is “how well you keep GPUs from sitting idle (utilization/scheduling/efficiency).”
In the AI era, “GPU time” is currency, so if efficiency is high, customers flock in, and if customers flock in, you secure the newest GPUs faster—creating a virtuous cycle.
Issue 4) Oracle and the TikTok deal (from a data sovereignty perspective)
Oracle has long been entangled with the TikTok U.S. data management issue, and now the narrative is expanding toward U.S. joint venture/operational control.
This is not just an advertising/app issue; it connects to “data governance within the United States.”
AI runs on data as fuel, so where data is stored and who controls it becomes national strategy.
4) Macro View: Why This Investment Is Likely to “Keep Going” Even Through 2026
① The China factor: an “AI Sputnik moment”
China has declared it will be No. 1 in AI by 2030, and in reality its model/application/industrial deployment capability is rising quickly.
From the U.S. perspective, just as the Soviet Sputnik launch spurred the creation of NASA and DARPA, AI is now playing that role.
② AI is not a single industry; it attaches to “whole-economy productivity”
Manufacturing, robotics, finance, healthcare, defense, and energy all play the productivity game through AI.
That is, it can attach not to a specific sector fad but to a productivity investment cycle across the entire economy.
③ So “recession vs. AI investment” can happen at the same time
Traditional industries may slow down while AI infrastructure investment continues under the justification of security/hegemony.
If this combination appears, markets become more sensitive to “policy + infrastructure CAPEX” than to “earnings.”
In that process, macro variables like rate-cut expectations, renewed inflation debates, and dollar strength can also swing together.
5) (Important) The Core Point Others Rarely Cover: What Matters More Than “Bubble or Not”
Core point 1) It shifts from “Too big to fail” to “Too strategic to fail”
During the Lehman crisis, the logic was “financial firms that are too big can’t be allowed to fail,” but now it can become “AI infrastructure that is too strategic can’t be allowed to collapse.”
Especially once it connects to national security, intelligence, defense, and power infrastructure, the cost of a private company’s failure becomes a “national cost.”
Core point 2) The winners of vertical integration are not only “models”
The public usually focuses only on chatbots/models, but the places where money stays the longest are likely to be infrastructure (cloud/data centers/power/cooling/networking).
That is because even if model pricing falls due to competition, power, GPUs, and data centers remain physically constrained, which means low supply elasticity.
Core point 3) Going forward, risk will be decided more by “policy/procurement/security certification” than by “technology”
The closer it becomes to a national project, the more security, data sovereignty, certification, and procurement references matter as much as performance.
In other words, not “a good product,” but “a product/company that is easy for the state to use” can receive a premium.
6) Investor Checklist: A Realistic Way to Separate Winners From Losers Going Forward
① Infrastructure efficiency metrics (utilization/scheduling/network bottlenecks)
In AI, competitiveness is about how efficiently you run GPUs.
② Power/cooling/land (physical constraints of data centers)
In AI data center investment, the bottleneck is often ultimately electricity and cooling.
③ Government procurement/national security references
Once you are in, contracts tend to become long-term by structure.
④ Not valuation, but “financing structure”
Especially for infrastructure companies, CAPEX is large, so they can be vulnerable to rates/credit spreads.
So you should separate “a good theme” from “a good financial structure.”
⑤ The China variable (export controls/supply chains/ally CAPEX)
The more the AI hegemony war intensifies, the faster global supply-chain restructuring can proceed.
7) “Highest-Frequency” Economic SEO Keywords Naturally Embedded in the Article (With Context)
This issue is likely to move as a set with keywords like inflation, rate cuts, recession, global supply chains, and AI data center investment.
Especially if a policy momentum attaches, medium-to-long-term CAPEX flows may move stock prices more violently than short-term earnings.
< Summary >
A Department of Energy–led collaboration that directly links private AI to national security infrastructure—on the scale of an “AI Manhattan Project”—has been revealed.
The participating companies are bound into a 5-stage vertical integration structure spanning AI models, semiconductors, cloud, servers/storage, and platforms, and the axis where money stays the longest is likely to be infrastructure (data centers/cloud/power).
The core point is that beyond the bubble debate, there will be companies/areas that are “too strategic to let fail,” and due to the China variable, investment is likely to continue through 2026.
[Related Posts…]
- The Global Investment Map Reshaped by the AI Hegemony Race
- AI Infrastructure Beneficiaries Through the Lens of Data Center Power Issues
*Source: [ 월텍남 – 월스트리트 테크남 ]
– A Second "Manhattan Project": The 24 Related Stocks the Government Is Backing
● Near-Instant Google AI Translation Shatters Language Barriers and Sparks Cloud and Wearable Disruption
0.1-Second “Simultaneous Interpretation” Becomes Reality… The Market Google’s Real-Time Interpreting Will Reshape, and What Korean Companies Must Prepare Right Now
This article covers the following.
1) Why this Google interpreting update is not “just another translation feature” (the technical architecture is completely different)
2) The disruptive force Gemini Flash creates in cost/speed/performance (linked all the way to AI infrastructure and cloud competition)
3) How language barriers disappear as it moves beyond meetings, call centers, and travel into “AI glasses” (a reshaping of the wearable market)
4) The core point most other news misses: why “preserving intonation/emotion” is a game changer for business communication
5) A practical checklist from a Korea (company/individual) perspective: opportunities and risks, and response strategies
1) Today’s core point (one line)
As Google implements “real-time simultaneous interpretation with virtually no delay,” we have reached an inflection point where translation shifts from a ‘tool’ to a ‘default interface.’
This change goes beyond generative AI competition and is likely to shake up both cloud computing cost structures and the wearable (especially AI glasses) market landscape at the same time.
2) What changed: It “compressed” the three steps of conventional interpreting
The most important point in the original text is this.
Conventional interpreting was typically a three-step pipeline.
- Speech → Text (ASR, transcription)
- Text → Translation (Translate)
- Translated text → Speech (TTS, read-out)
Because this structure passes through multiple middle hubs like a parcel delivery system, it inevitably feels like “the translation comes out long after the speaker finishes.”
But the direction Google is pushing this time minimizes or bypasses the intermediate "text conversion" and moves closer to a method that goes from sound directly to meaning units (semantic representations), then immediately outputs speech in the other language.
In other words, like human simultaneous interpreters, the moment a “chunk of meaning” is captured, the translation starts flowing out.
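As a concrete reference point, below is a minimal Python sketch of the conventional three-step cascade. Every function here is a hypothetical stub rather than a real ASR/translation/TTS API, and the simulated delays only exist to show why the three stages' latencies add up before any translated speech comes out.

```python
import time

# Hypothetical stubs for the three conventional stages (not real APIs).
# The sleep() calls stand in for per-stage processing time.

def transcribe(audio: str) -> str:            # Step 1: speech -> text (ASR)
    time.sleep(0.4)
    return audio

def translate_text(text: str, target: str) -> str:   # Step 2: text -> translated text
    time.sleep(0.3)
    return f"[{target}] {text}"

def synthesize(text: str) -> str:             # Step 3: translated text -> speech (TTS)
    time.sleep(0.3)
    return f"<audio:{text}>"

def cascade_interpret(utterance: str, target: str) -> str:
    # Each stage waits for the previous one, so nothing is spoken until
    # roughly 0.4 + 0.3 + 0.3 = 1.0 s after the speaker has finished.
    return synthesize(translate_text(transcribe(utterance), target))

if __name__ == "__main__":
    start = time.perf_counter()
    print(cascade_interpret("오늘 회의를 시작하겠습니다", "en"))
    print(f"total latency: {time.perf_counter() - start:.1f}s")
```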
3) Why a 0.1-second-level experience became possible: “semantic vectors” + “predictive translation”
If we simplify what the original text calls a “semantic vector,” it’s this.
Instead of transcribing words into text, the AI captures the “intent/meaning” directly from sound.
The real core point here is that the common sense of “you translate only after the sentence ends” breaks down.
Especially for language pairs with completely different word order like Korean↔English, waiting until the end of the sentence (where the Korean verb finally arrives) is inevitably slow.
With this approach, it looks at context/intonation/word combinations and starts translating earlier by probabilistically predicting what will come next.
This is not just speed bragging; it changes the communication UX itself.
In “conversation,” timing is everything, so even a 1–2 second lag makes people feel interrupted and awkward.
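By contrast, here is a toy sketch of the chunk-level, predictive direction described above. This is my simplification, not Google's actual method: keep a running guess of what the partial sentence means and commit a translated chunk as soon as confidence crosses a threshold, instead of waiting for the sentence-final verb.

```python
# Toy sketch of predictive, chunk-level translation: commit partial output
# as soon as confidence in the predicted meaning is high enough.
# Both the scoring heuristic and the "translation" are illustrative stand-ins.

def predict_intent(prefix_words: list[str]) -> tuple[str, float]:
    """Guess the meaning of the partial sentence and return a confidence score
    (stand-in heuristic: longer prefixes -> higher confidence)."""
    guess = " ".join(prefix_words)
    confidence = min(1.0, 0.2 * len(prefix_words))
    return guess, confidence

def streaming_translate(words, threshold: float = 0.6):
    buffer = []
    for word in words:
        buffer.append(word)
        guess, confidence = predict_intent(buffer)
        if confidence >= threshold:
            # Emit a translation of the committed chunk without waiting for
            # the end of the sentence, then start collecting the next chunk.
            yield f"[EN] {guess}"
            buffer.clear()
    if buffer:  # flush whatever remains when the utterance ends
        yield f"[EN] {' '.join(buffer)}"

if __name__ == "__main__":
    for partial in streaming_translate("내일 오전 회의 자료를 먼저 보내 주시면 좋겠습니다".split()):
        print(partial)  # output starts flowing before the speaker has finished
```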
4) Why Gemini Flash is scary: “cost-effectiveness” becomes market dominance
The original text emphasizes radical improvements in the Flash (lightweight model) line.
The key is that it is breaking the formula that “only flagship (large) models deliver good performance.”
- Speed: advantageous for real-time interpreting and conversational services (latency is competitiveness)
- Cost: if API unit prices fall, enterprise adoption surges (call centers/meetings/education markets move immediately)
- Performance: if it’s not just “cheap but usable” but “cheap and quite strong,” the game flips
At a macro level, this is a turning point where generative AI moves from “experimentation” to “operations.”
Companies only roll it out company-wide when costs fall.
From there, AI semiconductors, data center investment, and cloud computing competition naturally heat up together.
5) The real reason emotion/intonation preservation matters: it touches 90% of business communication
Most news stops at “wow, translation got faster,” but the bigger meaning in the original text is preserving the speaker’s tone/intonation/emotion.
The reason machine translation feels awkward is not only word-level accuracy; it is that the emotional contour gets lost, which reduces persuasiveness and trust.
- Sales/negotiation: even the same sentence, the tone can determine revenue
- Leadership/organizational communication: intonation determines context
- CS/call centers: when emotion is conveyed, customer churn drops
In other words, this is no longer a fight over translation accuracy; translation is starting to enter the “relationship-building” level.
6) Where it gets applied first: meetings → call centers → education → travel
From a real-world usage perspective, in the order of what monetizes fastest, it looks like this.
- Video conferencing/real-time meetings: immediate utility in Zoom, Google Meet, etc. (global collaboration costs plunge)
- Call centers/counseling: structurally alleviates multilingual staffing issues (operational efficiency improves)
- Education/tutoring: removes language barriers in 1:1 conversational lessons (content exports expand)
- Travel/on-site work: enables real-time communication for business trips, on-site installation, and after-service
What matters here is that this is not just a convenience feature; it can be a cost-cutting mechanism that changes a company’s global operating model itself.
In an inflationary environment, companies have become as sensitive to “cutting operating costs” as to “growing revenue,” and tools like this get budget approval quickly.
7) The next step: AI glasses could become the “new smartphone”
The future the original text strongly points to is AI glasses (wearables) with even bigger impact than earbuds.
- The other person’s speech appears as subtitles in front of your eyes (AR translation)
- Signs/menus/placards are translated in real time and “overlaid onto reality”
- A natural conversational UX is possible even without earbuds
If this becomes commercialized, language shifts from a “skill” to a “device option.”
And from that moment, the market moves from smartphone app competition to wearable OS/ecosystem competition.
From an investor perspective, if wearables and AI truly interlock in earnest, the hardware value chain is highly likely to move together: components (microphones, cameras, displays), batteries, and on-device inference chips.
8) The “most important content” that other YouTube/news rarely talks about
8-1) “Language integration” is not a tech event; it is a labor market and education market reallocation event
You should not take “no need to study foreign languages anymore” lightly.
The real change is that hiring and evaluation criteria will shift.
- The premium on language test scores may decline
- Instead, “domain knowledge + persuasiveness + problem-solving” will be valued much more
- As global collaboration becomes easier, the competitor pool expands worldwide (an opportunity and a pressure for individuals)
8-2) If interpreting becomes cheap as an “API,” the winners are not translation apps but “work tools”
People often think "the translation-app market will grow," but in reality the ones that make money are more likely to be the work platforms where translation is embedded.
- Meeting tools (subtitles/minutes/action items automation)
- CRMs (automatic translation/summarization of sales call records)
- Help desks (ticket classification and response automation)
In short, translation becomes a feature, and the platform takes the revenue.
This flow also aligns with the direction in which global supply chains are being digitized right now.
8-3) Risks also grow: “speech manipulation” and “liability for mistranslation” become legal/compliance issues
Translation that preserves emotion and intonation is not all upside.
- The risk of being conveyed “stronger/weaker” than intended
- In contracts/legal/medical contexts, responsibility becomes complicated when mistranslation occurs
- If combined with voice-based deepfakes, “trust collapse” issues expand
When companies adopt it, they must also prepare policies for security/logging/audit trails, and practices like “store the original language for critical conversations.”
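One way to make the "store the original language for critical conversations" practice concrete is an append-only audit record like the sketch below. The field names and file format are my assumptions, not a standard schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class InterpretationAuditRecord:
    """One logged utterance in a critical (contract/legal/medical) conversation.
    Field names are illustrative, not a standard schema."""
    conversation_id: str
    speaker_id: str
    source_lang: str
    target_lang: str
    original_text: str      # keep the original-language wording verbatim
    translated_text: str    # what was actually rendered to the listener
    model_version: str      # which interpreting model produced the output
    timestamp_utc: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_audit_log(record: InterpretationAuditRecord,
                        path: str = "interpretation_audit.jsonl") -> None:
    # Append-only JSONL, so mistranslation disputes can be traced back later.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

if __name__ == "__main__":
    append_to_audit_log(InterpretationAuditRecord(
        conversation_id="negotiation-2025-001",
        speaker_id="buyer-kr",
        source_lang="ko",
        target_lang="en",
        original_text="납기는 3월 말까지 가능합니다.",
        translated_text="Delivery is possible by the end of March.",
        model_version="interpreter-v0-example",
    ))
```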
9) Practical response checklist for Korean companies/office workers
9-1) From a company (team lead/planning/IT) perspective
- Teams with many overseas meetings should consider adopting “real-time interpreting + automated meeting minutes” as a package
- Call centers/CS should calculate ROI on whether multilingual coverage can be shifted from headcount expansion to automation
- Data security policy: check where voice data is stored and whether it is used for training
- Overseas market expansion teams should quantify how much “local partner communication cost” can be reduced
9-2) From an individual (office worker) perspective
- Rather than language itself, investing in “presentation ability/logic/domain expertise” may deliver higher returns
- In meetings, the habit of speaking “briefly, in meaning units, starting with the key point” improves interpreting quality
- For important negotiations, trust interpreting, but reconfirm key agreed wording in writing (to prevent misunderstandings)
< Summary >
Google’s real-time simultaneous interpretation has evolved by compressing the traditional speech→text→translation→speech structure, moving toward streaming translation directly at the meaning-unit level.
As lightweight models like Flash raise speed, cost, and performance at once, interpreting is likely to be absorbed not as an app but as a default feature of work platforms.
In particular, preserving intonation and emotion is powerful because it translates even the persuasiveness of business communication, and when combined with AI glasses, language barriers could become a “device option.”
However, liability for mistranslation, voice security, and deepfake risks also grow, so companies must prepare compliance frameworks in parallel.
[Related Posts…]
- How the Spread of Generative AI Is Redrawing the Map of Jobs and Productivity
- In an Era of Exchange-Rate Volatility, Three Response Strategies Companies Must Prepare
*Source: [ 월텍남 – 월스트리트 테크남 ]
– Simultaneous Translation in 0.1 Seconds: "All Languages Unified" With a Single Earbud
● Vibe Coding for Non-Developers, AI Work Automation, Reproducible Instructions Over Code
For Non-Developer Office Workers Starting “AI-Powered Work Automation,” What Really Matters Isn’t Coding but “Reproducible Instructions” (Key Takeaways from the Vibe Coding Conference Pilot)
This article includes the following.
1) A realistic starting point for “work automation” that non-developers can use immediately (the pre-coding stage)
2) A prompt structure for turning unstructured data like comments/reviews into an “analysis-ready table” with ChatGPT
3) A repeatable workflow in Google Sheets that extracts sentiment/needs/keywords all at once using AI functions
4) Why you need to understand data analysis in the AI era (how to escape “luck-based analysis”)
5) A separate summary of the “core point that makes automation actually run,” which other YouTube/news sources rarely mention
1) News Briefing: The Essence of “Non-Developer Automation” in This Pilot Talk
[Key Takeaway]
To do less work with AI, you first need to clearly define “the result you want (success criteria),” then break it down into a repeatable/reproducible form.
In other words, before learning to code, you should standardize “how you assign work to AI” first.
[Why this matters]
If you just say “analyze this,” it might come out well sometimes, but fail the next time.
Work needs to run in a similar way every time.
So this talk, in one sentence, showed “how to use AI not as luck, but as a system.”
2) Why You Should Learn Data Analysis First: The Answer to “AI Can Do It, So Why Bother?”
[The analogy from the talk was quite accurate]
Back when ChatGPT went viral, there were many “make a children’s storybook” projects, but most didn’t survive.
Because they focused only on “generation” without defining the essence of a story (the moral).
[If you translate this to work automation, it becomes this]
The essence of data analysis is usually two things.
1) Understanding the current situation: what is happening right now
2) Prediction/decision-making: so what should we do next
AI is great at “generation,” but if you don’t pin down the context of what decision you’re trying to make and what a good outcome looks like (success criteria), the answer will wobble.
This perspective also connects with the productivity debate happening in companies these days.
In many cases, the ROI from adopting AI shows up first not by “reducing work,” but by improving decision quality.
3) Practice Flow (Immediately Transferable to Work): Turning YouTube Comment Analysis into an “Automation Pipeline”
The talk used YouTube comments (unstructured data) as an example, and this applies directly to customer reviews/survey comments/VOC/internal feedback.
[STEP 1] Clean the raw comments into “table-shaped data”
If you scrape comments from YouTube, UI junk such as like counts and reply buttons gets mixed in, so you have ChatGPT clean it up "so it's easy to move into a spreadsheet."
Prompt Structure (Core Point Only)
1) Instructions (labeling)
2) One-line objective (success criteria)
3) Output format (table/defined columns)
4) Paste the original text data
The important core point here is “lock in the objective in a single sentence first.”
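As a minimal sketch of that four-part structure (the template wording is mine, not the speaker's exact prompt), the prompt can be assembled in code once and reused for every new batch of comments:

```python
# Minimal sketch: assemble the 4-part prompt (instructions / one-line objective /
# output format / raw data) so the same instruction can be reused every time.

PROMPT_TEMPLATE = """[Instructions]
You clean unstructured YouTube comments into spreadsheet-ready rows.

[Objective (one line)]
{objective}

[Output format]
A table with exactly these columns: {columns}
Return the table only, with no commentary.

[Raw data]
{raw_comments}
"""

def build_cleaning_prompt(
    raw_comments: str,
    objective: str = "Turn these comments into one row per comment so they can be pasted into Google Sheets.",
    columns: str = "comment_text, like_count, is_reply",
) -> str:
    return PROMPT_TEMPLATE.format(
        objective=objective, columns=columns, raw_comments=raw_comments.strip()
    )

if __name__ == "__main__":
    scraped = """Great video! 120 likes
Reply
I disagree with the part about data centers... 3 likes"""
    print(build_cleaning_prompt(scraped))
```

Locking the objective into the template is exactly what makes the result reproducible: the next batch of comments gets the same instruction, not a freshly improvised one.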
[STEP 2] Before “assigning a role,” have AI recommend “expert roles that fit my context”
This was an interesting part.
Usually we say, “You’re a data analyst,” and lock in a role, but beginners don’t even know whether that role fits.
So first, you ask the AI: "Recommend 5 experts who could properly interpret this comment data."
In other words, instead of “me deciding the role,” you first go through a process of finding roles based on the data and the objective.
The reason this approach is good is that the texture of the output changes dramatically.
Not only a data science perspective, but lenses closer to the essence of the problem—like organizational culture, psychological safety, and sociology—get blended in.
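Continuing the same pattern, here is a small sketch of "role exploration before role assignment" (again, my paraphrase of the talk, not a verbatim prompt):

```python
# Step A: ask the model to propose candidate expert lenses for this data and
# objective. Step B: lock in the chosen lens. Prompt wording is illustrative.

EXPLORE_ROLES_PROMPT = (
    "Here are YouTube comments about {topic}, and my objective is: {objective}\n"
    "Recommend 5 kinds of experts who could interpret this comment data well, "
    "and say in one line what each lens would reveal."
)

ASSIGN_ROLE_PROMPT = (
    "Act as {chosen_expert}. Using that lens, design an analysis plan for the "
    "comments with at most 3 guiding questions."
)

if __name__ == "__main__":
    print(EXPLORE_ROLES_PROMPT.format(
        topic="workplace communication",
        objective="decide which follow-up video to make next",
    ))
    print(ASSIGN_ROLE_PROMPT.format(chosen_expert="an organizational psychologist"))
```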
[STEP 3] Use the selected expert (e.g., Amy Edmondson) to design the analysis methodology
In the talk, they assumed an expert lens around “psychological safety,” and proceeded by splitting comments into three questions.
Examples included categories like “why people couldn’t speak up / why they couldn’t trust each other / why they hide weaknesses.”
The important core point here isn’t the “categorization itself,” but that it connects directly to content chapters (planning).
That is, they designed it so the analysis doesn’t end as a report but leads to decision-making (content planning).
4) Building “Repeatable Automation” with Google Sheets: Using AI Functions/Gemini
ChatGPT conversations are convenient, but hard to run repeatedly.
This is where the practical core point of the talk was.
[Core Point] Use AI “like a function” inside a spreadsheet for automatic categorization
For example: classify comment sentiment into a predefined list (anger/frustration/empathy/neutral, etc.), then double-click the fill handle to apply it to all rows below (a tool-neutral sketch of this pattern follows at the end of this section).
When this structure exists, here’s what becomes good.
1) Even if comments grow from 100 to 1,000, you can process them the same way
2) Results remain in a table, making charts/pivots/summaries easy
3) Team sharing/collaboration becomes possible (work automation scales not individually, but at the team level)
Additional Automation Examples (mentioned in the talk)
– Summarize the commenter’s needs in one sentence
– Extract core keywords (connectable to thumbnail/title candidates)
– Generate multiple recommended content outlines based on comments
This flow is a mini version of what companies call digital transformation these days.
It’s not about introducing a massive system, but starting with “turning the data you already have into a repeatable, processable form.”
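The exact AI-function syntax differs by tool (Gemini in Sheets, add-ons, Apps Script), so here is a tool-neutral Python sketch of the same pattern: a fixed label set applied row by row, so 100 or 1,000 comments are processed identically and the result stays tabular. The keyword heuristic is only a placeholder for the LLM call that would normally do the classification, constrained to the same label list.

```python
import csv

# "AI as a spreadsheet function": classify every row into the SAME fixed label
# set so results stay tabular and the workflow scales without changes.

LABELS = ["anger", "frustration", "empathy", "neutral"]

def classify_sentiment(comment: str) -> str:
    # Placeholder heuristic; in practice this would call an LLM with a prompt
    # that restricts its answer to one of LABELS.
    lowered = comment.lower()
    if any(w in lowered for w in ("hate", "terrible", "angry")):
        return "anger"
    if any(w in lowered for w in ("why can't", "stuck", "tired of")):
        return "frustration"
    if any(w in lowered for w in ("me too", "i feel", "same here")):
        return "empathy"
    return "neutral"

def classify_file(in_path: str, out_path: str) -> None:
    """The fill-down analogue: label every row of a comments CSV the same way."""
    with open(in_path, newline="", encoding="utf-8") as fin, \
         open(out_path, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)  # expects a 'comment' column
        writer = csv.DictWriter(fout, fieldnames=list(reader.fieldnames) + ["sentiment"])
        writer.writeheader()
        for row in reader:
            row["sentiment"] = classify_sentiment(row["comment"])
            writer.writerow(row)

if __name__ == "__main__":
    for c in ["Same here, I feel this every Monday.",
              "Why can't management just answer the question?",
              "Nice summary."]:
        print(classify_sentiment(c), "|", c)
```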
5) Three Things the Talk Kept Emphasizing: “How to Make AI Behave as Intended Every Time”
1) To set clear success criteria, ask AI back what needs to be clarified
If your desired outcome is vague, AI will give a vague answer.
2) Break the path to the deliverable into a “thinking process”
Don’t ask for everything at once; go step by step.
Constraints like “only proceed with one step at a time” actually improve quality.
3) Give clear and unambiguous instructions
The key is to separate instructions/context/output format and leave them as reusable sentences.
These three are also connected to the most important keyword in AI trends: the AI agent direction.
For an agent to run well, “success conditions” and “steps” must be clear.
6) The “Truly Important Core Point” That Other YouTube/News Sources Rarely Mention (Reorganized from My Perspective)
(1) The starting point of automation isn’t “tool selection,” but assetizing “repeatable sentences (prompts)”
Most people start with “Which is better—ChatGPT/Cursor/Make/Zapier?” but the core point of this talk was the opposite.
If you accumulate instruction templates usable in your work in a spreadsheet/document, you can change tools later.
This is the biggest long-term cost-saving (=ROI) core point.
(2) For beginners, “role exploration” boosts productivity far more than “role assignment”
The biggest place where non-developers/non-analysts get stuck is “who should I think like?”
Rather than fixing on “data analyst” from the start, switching lenses—psychology/sociology/organization/marketing—depending on the data’s nature is much stronger.
(3) Success or failure of AI adoption is not “accuracy,” but “reproducibility rate”
Getting a great result once doesn’t mean much.
To become work automation, it must run with the same quality next week and next quarter.
This perspective is also something companies become more obsessed with as macroeconomic uncertainty grows (interest-rate swings, economic slowdown).
The more uncertain things are, the more “process,” not “luck,” becomes competitiveness.
7) The AI Trend Direction Suggested by Part 1/Part 2 of This Conference (Work Automation Perspective)
[Part 1 (Challenge Edition) Feel]
– Methods for orchestrating existing AI services to make them work in an agentic way
– An introduction to vibe coding that non-developers can follow
[Part 2 (Expansion Edition) Feel]
– Crawling to regularly fetch news/data and auto-report
– Building pipelines with more full-fledged automation tools (e.g., Cursor, Make, etc.)
In summary, it’s a leveling roadmap from “conversational AI → spreadsheet automation → workflow/agentization.”
< Summary >
The key takeaway of this pilot was not coding, but “reproducible instructions” that make AI move as intended every time.
Data analysis is a tool for understanding situations and making decisions, and you must define success criteria first for AI to become truly smart.
The flow of cleaning unstructured data like comments/reviews with ChatGPT and then making sentiment/needs/keyword extraction repeatable using Google Sheets AI functions was practical.
In particular, the approach of doing “role exploration” before “role assignment” is a hidden core point that greatly boosts non-developer productivity.
[Related Posts…]
- A Summary of Work Automation Trends Changed by AI Agents
- Hands-On Data Automation Starting with Google Sheets
*Source: [ 티타임즈TV ]
– [Vibe Coding Conference for Non-Developer Office Workers] Pilot Preview (임동준, development coach at 우아한형제들)


