On July 3, 2025, I was sitting in one of the classrooms at the Korean Medical Association Medical Policy Advanced Course.
As soon as I saw the title, “Medical Use of Generative AI,” I thought I absolutely had to take this lecture.
My interest in AI had already been deep for a long time, and I especially wanted to hear directly how it could be connected to clinical practice.
Soon after the lecture by Professor Oh Ji-sun of Asan Medical Center’s Department of Rheumatology began, my expectations shifted from simple curiosity to certainty.
AI was no longer a story about some distant future.
![2025 Korean Medical Association Medical Policy Advanced Course Lecture [Lecture on the Medical Use of Generative AI] image 1](https://pub-9f2bb3498faf4d1d8714b41df24753e3.r2.dev/content/clinics/archive/365f046879/naver_blog/newhair_blog/assets/by_hash/e9ee445f99c7ff90cb7ffc67599cd0814e1979ddb6ef4ffa41b05c9d9154abfa.png)
The lecture began with the roots of artificial intelligence.
Artificial neural networks that mimic the human brain, and deep learning built by stacking even deeper layers on top of that.
Built on that deep learning technology, generative AI, which can now even create content, has appeared; what we are dealing with now is not just technology, but a palpable presence called "intelligence."
At first, AI found fixed correct answers, but now we are in an era where it can learn and judge for itself even when there is no correct answer.
Reinforcement learning, where a system learns by receiving rewards as in a game, and language models with hundreds of billions of parameters, like the GPT models.
She carefully explained how these models understand and summarize the countless sentences we encounter every day in the clinic.
What was especially impressive were the concrete examples showing that AI is already being used in medical settings.
- AI that detects pneumonia,
- AI that detects diabetic retinopathy early,
- AI that predicts the possibility of cardiac arrest in the intensive care unit...
And what was most interesting was the potential of a “generative chatbot” that complements communication between patients and doctors.
A patient who only thinks of questions after the appointment is over.
An era in which people trust internet blogs more than doctors’ words.
In all of these situations, generative AI was said to be able to serve as a “bridge” that understands the patient’s language, answers questions, and eases worries.
A study was also introduced showing that clinical notes summarized by GPT models were more complete and accurate than those written by human doctors.
Of course, the result was a little bittersweet.
![2025 Korean Medical Association Medical Policy Advanced Course Lecture [Lecture on the Medical Use of Generative AI] image 2](https://pub-9f2bb3498faf4d1d8714b41df24753e3.r2.dev/content/clinics/archive/365f046879/naver_blog/newhair_blog/assets/by_hash/26b992b03a1a96f0ee9b9f7660f971f54aa60785cbf3892ad4c67d2f9f18cff8.png)
A snapshot from class with my medical school classmate, Dr. Lee Geun-wook
However, it was not all optimistic.
The limitations and risks of AI were also clearly addressed.
- The issue of "bias" that arises when learning from data,
- The "black box" problem, in which the basis for decisions cannot be explained,
- And the ethical question of "who is responsible" when a decision goes wrong.
For example, if data on a particular race or rare disease is insufficient, judgments about it are inevitably distorted as well.
This was not just a technical issue, but a matter of fairness and justice in medicine.
More than anything, the most memorable line was this:
"AI is not a replacement for doctors, but a partner that will provide care alongside them."
Hearing that suddenly brought the clinic scene to mind.
A busy outpatient clinic, records piling up, patients who leave because they are pressed for time before they can ask their questions.
Between all those scenes, I wondered whether AI could quietly write notes, read a patient’s mind, and lighten a doctor’s burden just a little.
Of course, that would only be true if it were used well.
Just as a knife can be medicine or a weapon depending on who holds it.
While listening to the lecture, I had this thought.
Perhaps the skill doctors need now is not simply medical knowledge.
How to collaborate with AI.
The intelligence to use that technology as a tool.
And humanity that technology cannot overstep.
Isn’t the doctor of the future the person who can blend those three well?