● AI and Ontology: A Power Play
Why "AI + ontology" drives the competitiveness of Samsung and Hyundai Motor's U.S. factories, and why this future can't be stopped
Key takeaways first: the five things to get right from this interview
1) "Introducing ontology" is not in itself the answer; success or failure is determined by the order: problem definition → data acquisition → ontology/LLM application.
2) The reason U.S. manufacturing plants are twice as productive as Korean ones isn't simple automation alone; it lies in a structure where AI "attaches" to the process.
3) If you use LLMs alone without ontology, you inevitably run into hallucinations, consistency issues, and limits in handling numerical/time-series data.
4) Korea already has a strong foundation in that its public data has been opened in RDF (an ontology language), and that value will explode when the agentic AI era arrives.
5) Instead of a "grand design," validate with small problems, such as a six-month pilot, and expand from there.
In this article, I'll summarize these five points like a news briefing and, at the end, separately pull out the "most important conclusion you rarely hear elsewhere."
[News Briefing] The real backdrop of “2x productivity in U.S. factories”: automation+AI+ontology
In a 티타임즈TV (T-Times TV) interview, Saltlux CEO Lee Kyung-il cited the manufacturing plants Samsung, Hyundai Motor, and SK hynix are building in the U.S., emphasizing that what creates the productivity gap isn't robot/equipment automation alone but a "structure where AI sticks to the automation."
He said that as physical automation such as humanoid robots is added, and ontology is finally layered on top, the gap in competitiveness can only widen.
- Automation: improves the speed and consistency of repetitive work
- AI: connects process data and decision-making to optimize
- Ontology: fixes “what is what/what relationships exist” as a knowledge structure to strengthen consistency
In other words, the interpretation is that the playing field shifts toward the combination of "manufacturing knowledge + data + reasoning" rather than equipment alone.
The CEO on how ontology adoption fails: "Introduce it without a problem to solve, and you will fail 100%"
The point the CEO made most forcefully is this.
The moment you expect that "introducing ontology will solve things automatically," you fail 100%.
That's because ontology isn't a simple tool; it is the work of defining the problem, preparing the data needed to solve it, and only then turning that into a knowledge structure.
- Step 1: clearly define the problem to solve (e.g., a 3% yield increase)
- Step 2: check whether you have the core data connected to that problem
- Step 3: if you don’t have the data, first devise a data acquisition strategy
- Step 4: only then move to ontology, and combine it with LLM/reasoning models
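The four steps just listed can be sketched as a gating check that refuses to move on to ontology/LLM work until a problem and its data exist. This is a minimal illustration of the ordering, not anything from the interview; all names are hypothetical.

```python
# Hypothetical sketch of the problem -> data -> ontology ordering.
# All names are illustrative, not from the interview.

def ready_for_ontology(project: dict) -> tuple[bool, str]:
    """Return (ready, reason): proceed only once a problem and data exist."""
    # Step 1: a measurable problem, e.g. "raise yield by 3%"
    if not project.get("kpi"):
        return False, "define a measurable problem (KPI) first"
    # Step 2: core data connected to that problem
    if not project.get("datasets"):
        # Step 3: no data yet -> an acquisition strategy precedes ontology
        return False, "no data: devise a data acquisition strategy first"
    # Step 4: only now does ontology/LLM work begin
    return True, "proceed to ontology + LLM/reasoning integration"

print(ready_for_ontology({"kpi": "yield +3%"}))
print(ready_for_ontology({"kpi": "yield +3%", "datasets": ["sensor_logs"]}))
```

The point of the sketch is only the ordering: the function returns early at each missing prerequisite, mirroring the claim that ontology comes last.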
The CEO was clear that the approach is not "build an ontology/LLM/platform and our problems will be solved"; problem, data, and knowledge formalization come first.
The reality of companies "building AI without ontology": the limits show up at "knowledge consistency"
In the interview, the question of the many companies building AI without any ontology came up, and the CEO answered by walking through the key components of AI, which revealed ontology's role.
He said it is a misunderstanding to view AI simply as "machine learning," and explained that AI roughly encompasses the following elements together.
- Knowledge representation
- Learning
- Inference
- Planning/decision
The discussion thus led to the point that although most current generative AIs (LLMs) are transformer-based, their limitations grow in real-world task deployment when there is no knowledge structure.
As a particular failure mode, he cited cases where "while studying ontology, people spend their energy designing an overly beautiful, perfect knowledge system, which reduces practicality."
The lesson from the Cyc failure: theoretical excess = collapse of practicality
As a representative failure case, the CEO mentioned Cyc, an ontology project that began some 40 years ago with support from the U.S. Department of Defense.
The core point was this.
- Going too deep into the philosophical/theoretical depth of ontology
- Trying to build a “perfect ontology” only increases complexity
- Losing practicality for real business use and ending in failure
So the conclusion isn't that "ontology is the answer"; rather, the view is that you should design only as much ontology as the problem requires.
Combining ontology with LLMs: RAG, graph RAG, and "knowledge grounding" in the broad sense
The terminology gets a bit complicated here, but the message the CEO summarized in one line is simple.
"RAG, which provides evidence when the LLM generates answers, is necessary, but if you rely only on sentence-level prompts, there are limits."
So the concept that comes out is “ontology-based grounding.”
- RAG: provides evidence by putting documents (search results) into the prompt
- Its limitation: because context is injected only as "sentences," consistency can waver
- Graph RAG: provides more accurate connections based on an ontology/graph structure
- Knowledge grounding: finds or generates the knowledge structure the LLM should refer to from a knowledge base and supplies it alongside the prompt
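The difference between sentence-level RAG and graph-based grounding can be sketched in a few lines of Python. The triples and prompt format below are hypothetical illustrations, not from any Saltlux product: instead of pasting raw sentences into the prompt, only the structured facts about the entity in question are injected.

```python
# Hypothetical sketch: grounding an LLM prompt with graph triples
# instead of raw sentences. Data and prompt format are illustrative only.

TRIPLES = [
    ("LandslideEvent-42", "occurredOn", "2023-07-15"),
    ("LandslideEvent-42", "precededBy", "Rainfall-300mm/24h"),
    ("LandslideEvent-42", "damageAmountKRW", "1200000000"),
]

def ground_prompt(question: str, entity: str) -> str:
    """Inject only the triples about `entity` as structured evidence."""
    facts = [f"({s}, {p}, {o})" for s, p, o in TRIPLES if s == entity]
    evidence = "\n".join(facts)
    return f"Answer using ONLY these facts:\n{evidence}\n\nQ: {question}"

prompt = ground_prompt("What rainfall preceded the event?", "LandslideEvent-42")
print(prompt)
```

Because the evidence arrives as explicit (subject, predicate, object) facts rather than free text, numeric and temporal values stay intact, which is the consistency argument made above.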
Using this structure, the CEO said, hallucinations (fabrications) decrease, and in particular it handles the time-series and numeric data that LLMs have been weaker at.
How do you do “ontology maintenance”? Not static snapshots, but “dynamic updates”
The conversation naturally led to the question: "Then doesn't the ontology have to be managed continuously?"
The CEO emphasized a system that keeps transforming continuously generated data flows into the ontology, rather than taking a snapshot at some past point and stopping there.
- Concepts like data fabric / data mashup
- Detecting conflicts/inconsistencies
- Updating with new information
- Making it real time by tying everything into one platform
In other words, it’s not “build it once and finish,” but a structure where knowledge is updated in real time.
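A toy version of "detect conflicts, then update" for a stream of incoming facts might look like the following, a sketch under the assumption that each (subject, property) pair should hold a single current value; the merge policy and all names are hypothetical.

```python
# Hypothetical sketch of dynamic ontology updates: a stream of incoming
# facts is merged, and conflicts on single-valued properties are flagged
# before the knowledge base is overwritten.

def apply_update(kb: dict, subject: str, prop: str, value: str) -> list[str]:
    """Merge one fact; return human-readable conflict notes, if any."""
    conflicts = []
    key = (subject, prop)
    old = kb.get(key)
    if old is not None and old != value:
        conflicts.append(f"conflict on {key}: {old!r} -> {value!r}")
    kb[key] = value  # newest information wins in this toy policy
    return conflicts

kb: dict = {}
apply_update(kb, "Line-3", "status", "running")
notes = apply_update(kb, "Line-3", "status", "halted")
print(notes)
```

A real system would route the conflict notes to review or resolution logic rather than silently keeping the newest value, but the shape of "continuous flow in, conflicts surfaced, knowledge updated" is the same.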
Where ontology costs are high: “schema design” and “concept frameworks”
Ontology is broadly divided into “concept structure (schema)” and “instances (actual data).”
- Concept framework (schema): the “meaning structure” such as who a person is and what the relationships between things are
- Instances: actual entity data such as “who Lee Kyung-il is, when he was born, and what he does”
According to the CEO, concept frameworks (schemas) still require a lot of human work, so their cost and time remain high, while instances offer relatively more room for automation.
So in areas like manufacturing, where no standard ontologies exist, you have to define concepts from scratch, which makes the burden even heavier.
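The schema/instance split can be made concrete with RDF-style triples. The vocabulary below is a hypothetical plain-Python stand-in (standard RDF Schema would use terms like `rdf:type` and `rdfs:domain`); it shows why the schema side is the expensive, human-designed part while instances can be checked mechanically against it.

```python
# Hypothetical sketch: schema triples (expensive, human-designed) vs
# instance triples (cheaper, often automatable), in RDF style.

SCHEMA = [  # the "concept framework": the meaning structure
    ("Person", "is_a", "Class"),
    ("worksAt", "domain", "Person"),
    ("worksAt", "range", "Company"),
]

INSTANCES = [  # actual entity data
    ("LeeKyungIl", "type", "Person"),
    ("LeeKyungIl", "worksAt", "Saltlux"),
    ("Saltlux", "type", "Company"),
]

def instance_conforms(triple, schema) -> bool:
    """Check an instance triple's property against the schema's domain."""
    s, p, o = triple
    domains = {prop: d for prop, rel, d in schema if rel == "domain"}
    if p not in domains:
        return True  # no schema constraint on this property
    subject_types = {oo for ss, pp, oo in INSTANCES if ss == s and pp == "type"}
    return domains[p] in subject_types

print(instance_conforms(("LeeKyungIl", "worksAt", "Saltlux"), SCHEMA))
```

Nothing in `INSTANCES` makes sense without `SCHEMA` deciding what `worksAt` may connect, which is exactly why the schema is where the cost concentrates.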
Role the country should play: packaging “standard manufacturing processes + a basic ontology”
Here the CEO also addressed the state's role.
He believes it is feasible for the state to prepare basic ontologies (schemas) for each manufacturing domain (chemicals, autos, parts, semiconductors, and so on), distribute them to companies, and let each company extend them to fit its own situation.
He also said that beyond simply "creating an ontology," what matters more is an initial consulting/support structure that helps companies define the problems they face and prepare the data.
- Provide consulting support so companies facing problems can get started from the very beginning
- Apply ontology wherever standard processes are possible (e.g., 10–20 representative manufacturing cases)
- Standardization/expansion linked to national standardization activities
In other words, it’s an approach where, rather than “perfect centralization,” the country lays down the foundation (standard schema + data structure) so companies can start quickly.
How is Korea assessed? "U.S. 70 points, Korea 45–50 points," and it could flip over time
The CEO expressed the global comparison in scores.
- United States: around 70 points overall
- Korea: around 45–50 points
However, he thought Korea, a manufacturing powerhouse, may have room to overturn this over time, field by field. As grounds, he said the U.S. is No. 1 not only in ontology but also in generative AI (LRMs and the like), while Korea's remaining challenge is converting its own sources (manufacturing data and tacit knowledge) into AI.
The phrasing the CEO emphasized here was quite striking.
“What the U.S. is ahead on isn’t only the technology itself, but also the speed/structure with which factories and data combine.”
And he identified Korea's core task as building an environment that converts the tacit (unstructured) knowledge held by factory managers and skilled workers into AI.
Advice for persuading executives: ontology may not be "the answer"… but you must prepare the path from data to intelligence
What the CEO told executives hesitating over ontology investment was this.
"Ontology might not be the answer. But if the start of moving toward autonomy/intelligence is data, then it's necessary to make AI use the data better."
In other words, he wasn't arguing that you must do ontology no matter what; rather, to let future AI (including generative AI) do more valuable work, you need to prepare your data so it can be structured into knowledge.
Execution strategy: start with a six-month pilot (proof of concept) and "experiment and learn" on small problems
"Where do we start?" is the biggest question, isn't it?
The CEO said not to rush: if you validate in small units, such as a pilot or proof of concept, for about six months, confidence will grow.
- Define the smallest version of a genuinely important problem
- Run the pilot as a process of about 6 months
- Reflect failure/learning into the design of the next stage
And he expected "a lot of chaotic attempts" over the next five years, but stressed that the survivors will likely be "a small number that found direction through experience."
Saltlux's perspective: an LRM platform + an ontology foundry, orchestrated on a knowledge base
The CEO explained the company's differentiators along two axes.
- LUXIA: a platform that runs LRMs/VLMs/agents in a multi-agent fashion
- Ontology foundry: an ontology-centered system
The core idea is that the two platforms are used together as if integrated side by side, like left and right arms.
Regarding comparisons with existing vendors, he summarized: "Palantir has ontology but no generative AI, while Gemini/Claude are strong in generative AI but relatively lack ontology."
Differences seen through examples: a disaster-safety ontology project close to “zero hallucination”
The most specific example came from the disaster safety domain.
Working with the Ministry of the Interior and Safety, they ontologized numerical data to generate answers, and the CEO claimed there was no hallucination.
- Landslide occurrence based on weather data
- Connecting even the damage amount (numbers)
- Supporting specific decision-making based on ontology
- With a simple search-based approach, when no document exists the model stitches together a rough guess, producing false associations
So in domains where accurate numbers, timestamps, and relationships matter, the view is that ontology-based grounding can create real performance differences.
5-year outlook: in the agentic AI era, “RDF public data” will shine in earnest
In the interview, the question came up: "Why hasn't Korea connected data across ministries yet?" The CEO answered like this.
"The data was opened, but the AI level that could actually use it was lacking."
Now, with the arrival of the agentic AI era, that true value will start to show in earnest.
What’s especially important here is that Korea already has many cases of public data being opened in RDF (ontology language) form.
- Open banking can also serve as a favorable foundation for ontologizing
- In the public data strategy, RDF-centered opening is progressing
- As AI starts working on a knowledge base, the value of connecting data will explode
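Why RDF-style open data pays off once agents start connecting sources can be shown with a toy join: two hypothetical datasets, published separately, share a subject URI, so an agent can merge them mechanically with no schema negotiation. The datasets and URIs below are invented for illustration.

```python
# Hypothetical sketch: two separately published RDF-style datasets
# share a common subject URI, so an agent can join them mechanically.

WEATHER = [("region:Gangwon", "rainfall_mm_24h", 310)]
DAMAGE = [("region:Gangwon", "landslide_damage_krw", 1_200_000_000)]

def join_on_subject(*datasets):
    """Merge triples from many sources into one view keyed by subject URI."""
    view: dict = {}
    for triples in datasets:
        for s, p, o in triples:
            view.setdefault(s, {})[p] = o
    return view

view = join_on_subject(WEATHER, DAMAGE)
print(view["region:Gangwon"])
```

The shared identifier does all the work: no column mapping, no format conversion, which is the "connecting data will explode in value" claim in miniature.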
This section stresses "the timing at which the technology matures," a perspective that will be quite useful when building a roadmap going forward.
The “most important conclusion” you rarely hear elsewhere (separate summary)
The essence of adopting ontology is not “adding a knowledge tool,” but building a system where the organization carries out “problem definition and data preparation” to completion.
Because this is the core, the reason many companies fail is ultimately the same.
- They don’t pin down the problem (no numerical KPI)
- They can’t own/acquire the data (it’s unclear where it is and who uses it)
- Then they mistake ontology design for a “big design project”
- As a result, theory comes first over practicality, and costs are consumed
On the other hand, the direction that increases the probability of success is very simple.
Pick one small KPI problem (e.g., yield, hallucination rate, accuracy, processing time), first pin down the data structure that can move that KPI, and only then combine ontology + LLM.
Here, you can understand ontology as a consistency engine that makes AI do its job properly, rather than as the “answer” itself.
Keywords to check from an SEO perspective (naturally reflected)
This topic goes beyond generative AI into expansion toward knowledge-based AI, and addresses issues of data structuring and connections.
- Ontology
- Agentic AI
- RAG (search-based generation)
- Data ontology
- Reducing hallucinations
Main points to convey
- Manufacturing competitiveness is determined not only by automation, but by the speed at which you attach knowledge/data consistency with AI+ontology.
- Ontology delivers results not through “adoption,” but by attaching step by step after defining the problem and acquiring data.
- To compensate for RAG’s limitations, knowledge-based approaches like graph/grounding become important.
- Korea has foundations such as RDF-based public data, so its potential for value increases in the agentic AI era is high.
- For executive persuasion and execution, using a 6-month pilot to solve a small problem and then expand is realistic.
< Summary >
Saltlux CEO Lee Kyung-il said that the reason U.S. manufacturing plants have higher productivity than Korea is found in the “structure where AI sticks to automation,” and that once ontology is combined, the gap could widen even more.
He emphasized that ontology adoption isn’t the answer on its own; to prevent failure, you must first define the problem to solve, secure the necessary data, and then combine ontology with an LLM.
He also explained that RAG-based approaches without ontology have limitations in hallucinations and numerical/time-series consistency, and that knowledge-based methods like graph RAG and knowledge grounding help make more accurate decisions.
Finally, he said that Korea has foundations such as public data RDF opening, and that its value will grow as the agentic AI era arrives; he advised executives to validate from small problems with 6-month pilots rather than through big design projects.
[Related posts…]
- Ontology construction roadmap and data strategy
- In the agentic AI era, a checklist for companies to prepare
*Source: [ 티타임즈TV (T-Times TV) ]
– "Samsung and Hyundai Motor's U.S. factories outcompeting Korea is a future that can't be stopped" (Lee Kyung-il, CEO of Saltlux)


