[Medical AI Classroom: B10] From Context to Expression — How Transformers and Probabilities Enable AI to Write Naturally —

🟦 Introduction: From “Reading” to “Writing” — How AI Creates Language

In the previous sessions, we learned that AI has already acquired two essential capabilities:

  • The ability to convert words into meaningful numerical representations (vectors) — (Vol.8)
  • The ability to understand which parts of a sentence to focus on — (Vol.9)

These two powers form the foundation of AI’s ability to read and interpret language.

But modern generative AI, such as ChatGPT, doesn’t stop at just reading.
It also has the ability to write — answering questions, summarizing clinical notes, and generating explanations in natural language.

For example, when you ask:

“Please explain these symptoms”
“Generate a patient-friendly explanation”

It often produces responses that sound like they were written by a physician or a medical writer — contextual, accurate, and easy to understand.

So where does this writing ability come from?

Surprisingly, the mechanism behind it is fairly simple and elegant:

AI predicts the next word based on the flow of previous words — using probabilities.

In this session, we’ll explore how AI selects the next word, and how it weaves entire sentences, one word at a time.
We’ll break down the mechanics of prediction, probability, and context—
using examples from clinical practice to make it intuitive and accessible.


🟦 Chapter 1: AI Predicts “What Word Is Likely to Come Next”

When AI writes a sentence, it isn’t following some rigid set of rules or pre-written templates.
In fact, what it’s doing is surprisingly simple:

It looks at the words that have already been written,
and predicts what word is most likely to come next.

That’s it—repeating this prediction process, one word at a time.


🔹 A Medical Example

Let’s say the AI is generating the beginning of a medical sentence:

The patient…

At this point, the sentence is incomplete.
Now the AI considers a list of possible next words, such as:

  • was
  • is
  • has
  • presented
  • came

It then chooses the word that most naturally continues the sentence.
Since medical records often start with phrases like “The patient was…”, the AI learns from its training data that “was” is highly likely to follow.


🔹 Prediction Is Based on Prior Word Flow

In this way, AI selects each word based on the flow of words that came before.
It uses statistical models to calculate the probability of each possible next word, and picks the most natural choice.
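To make this concrete, here is a minimal sketch in Python. The frequency counts below are invented for illustration; a real model derives its probabilities from billions of sentences and conditions on the full context, not just the last two words:

```python
# Toy counts of which word followed "The patient" in an imagined
# training corpus. (These numbers are invented for illustration.)
next_word_counts = {
    "was": 50,
    "is": 20,
    "has": 15,
    "presented": 10,
    "came": 5,
}

def most_likely_next_word(counts):
    """Pick the candidate that occurred most often after the prefix."""
    return max(counts, key=counts.get)

print(most_likely_next_word(next_word_counts))  # -> was
```

Even this toy version captures the essential move: rank the candidates, then continue with the most plausible one.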

This process is surprisingly similar to how humans speak or write:

We don’t plan every sentence from start to finish.
Instead, we think step by step, choosing words that make sense in the current context.


🔹 Applications in Healthcare

This mechanism is highly applicable in medical contexts.

For example:

  • When given the start “The patient was…”, the AI might generate: “diagnosed with pneumonia.”
  • For the phrase “The CT scan showed…”, it might continue with: “a right lower lobe consolidation.”

By selecting one word at a time, based on the context,
AI is able to “weave” sentences that match clinical documentation patterns or natural medical explanations.


This fundamental process—predicting the next word—is at the heart of how AI generates sentences.

In the next chapter, we’ll dive deeper into how AI makes these predictions using probabilities.
How does it calculate which words are more likely to follow? Let’s take a closer look.

🟦 Chapter 2: What Does It Mean to Choose Words Based on Probability?

In the previous chapter, we learned that AI generates sentences by predicting what word is likely to come next based on the preceding words.

But how does it actually make that prediction?

The key lies in a simple but powerful concept: probability.


🔹 AI Learns Word Patterns from Massive Text Data

AI is trained on millions or even billions of sentences, learning from the way humans naturally use language.

Through this training, it learns the statistical patterns of word usage—for example:

After this word, what words tend to appear, and how often?

Let’s consider a familiar example:

The patient was…

In the AI’s internal model, this might trigger a list of potential continuations, each with a probability score, such as:

Word | Probability | Notes
diagnosed | 45% | Very common continuation
admitted | 20% | Common, but context-dependent
treated | 10% | Valid, but often follows more detail
walking | 5% | Grammatically possible but odd
banana | 0.1% | Almost never used in a medical context

The AI selects the word with the highest likelihood, producing the most natural continuation.
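Under the hood, a language model produces a raw score (a "logit") for every word in its vocabulary, and a function called softmax turns those scores into probabilities. The scores below are invented so that the resulting percentages roughly resemble the table above; this is a sketch, not the actual model:

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that are positive and sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

# Invented raw scores for candidates after "The patient was..."
logits = {"diagnosed": 4.0, "admitted": 3.2, "treated": 2.5,
          "walking": 1.8, "banana": -2.0}

probs = softmax(logits)
for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:10s} {p:6.1%}")
```

Notice how softmax exaggerates differences: "banana" starts with a merely low score but ends up with a probability close to zero.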


🔹 Probabilities Reflect Human Usage Patterns

These probabilities aren’t arbitrary—they’re based on what people actually write.

For instance:

  • In medical records, “The patient was diagnosed…” appears very frequently
  • The word “banana” almost never appears in this context

AI learns these trends and assigns very low probabilities to out-of-place words.


🔹 Medical Applications: Balancing Precision and Flexibility

This probability-based word selection is what makes AI useful in various clinical applications.

Examples:

  • Drafting clinical notes
    → When a doctor inputs symptoms or test results, the AI can suggest natural, contextually appropriate phrasing
  • Supporting varied expressions
    → For the same symptom, it might output either:
    “The patient complained of coughing” or
    “Coughing was the chief complaint”—depending on style and context
  • Avoiding ambiguous or awkward language
    → Instead of vague statements like “The patient is positive”, it might generate:
    “The test was positive for influenza.”

By relying on probabilities, AI doesn’t just choose words—it selects the most natural and precise words for the given context.


In the next chapter, we’ll explore how AI generates entire sentences by repeating this prediction process word by word—and how it “grows” a sentence step by step.

🟦 Chapter 3: AI Builds Sentences One Word at a Time

In the previous chapters, we learned that AI selects the next word based on probabilities derived from previous words.
Importantly, this prediction doesn’t happen all at once.

Instead, AI generates sentences step by step, choosing one word at a time—
almost like growing a sentence, layer by layer.


🔹 A Sentence Is a Chain of Probability-Based Choices

Let’s say the AI is generating a medical sentence starting with:

The

At this point, the AI considers likely continuations such as:

  • patient
  • doctor
  • test
  • hospital

If “patient” is judged the most natural continuation in this context, the model selects it.

Now it has:

The patient

What comes next?

  • was
  • has
  • is
  • presented

Considering the medical context, “was” may have the highest likelihood.
And so the sentence grows:

The patient was

Then:

The patient was diagnosed
The patient was diagnosed with
The patient was diagnosed with pneumonia.

This is how AI builds a full sentence—one word at a time.
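The loop above can be sketched directly. The table of continuations below is hand-written for illustration (a real model computes a fresh probability distribution over its entire vocabulary at every step), and the decoding shown here is "greedy", always taking the top word:

```python
# Toy table: for each prefix, the probabilities of the next word.
# (All prefixes and probabilities are invented for illustration.)
next_word_probs = {
    "The":                            {"patient": 0.6, "doctor": 0.2, "test": 0.2},
    "The patient":                    {"was": 0.5, "has": 0.3, "is": 0.2},
    "The patient was":                {"diagnosed": 0.6, "admitted": 0.3, "treated": 0.1},
    "The patient was diagnosed":      {"with": 0.9, "early": 0.1},
    "The patient was diagnosed with": {"pneumonia.": 0.7, "influenza.": 0.3},
}

def generate(prefix, max_steps=10):
    """Greedy decoding: repeatedly append the most likely next word."""
    for _ in range(max_steps):
        candidates = next_word_probs.get(prefix)
        if candidates is None:          # no known continuation -> stop
            break
        best = max(candidates, key=candidates.get)
        prefix = f"{prefix} {best}"
    return prefix

print(generate("The"))
# -> The patient was diagnosed with pneumonia.
```

Each pass through the loop corresponds to one "growth step" of the sentence shown above.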


🔹 Humans Write the Same Way—Step by Step

When we speak or write, we’re also constantly asking ourselves:

“What should I say next?”

Especially in clinical interviews or note-taking, we actively decide what to include—
a chief complaint? a symptom course? a test result?

AI does something similar.
The key difference is that AI makes those decisions numerically, based on probability.


🔹 Practical Applications in Healthcare

This “word-by-word” generation process enables useful features in clinical practice:

  • Auto-completion of medical records
    → After typing “The patient was…”, AI might suggest completions like “diagnosed with…” or “admitted for…”
  • Generating follow-up text in interview notes
    → After “Patient reported chest pain,” AI might suggest related symptoms or clinical progressions.
  • Drafting patient-friendly explanations
    → Given the prompt “This medication is used to treat…”, AI might suggest natural continuations like “high blood pressure” or “type 2 diabetes.”

By selecting each word probabilistically and in context, AI is able to generate fluid, relevant, and non-mechanical language.


In the next chapter, we’ll look at how this probabilistic word selection contributes to natural variation and flexibility in AI-generated text.

🟦 Chapter 4: Probability Enables Flexibility and Natural Expression

When AI generates text, it doesn’t always produce the exact same output every time.
You might notice that even with the same prompt, the wording or sentence structure can vary slightly.

This variation—what might seem like a kind of “randomness”—
actually stems from the fact that AI chooses words based on probability, not rigid rules.


🔹 Always Choosing the Highest-Probability Word Makes Text Repetitive

If the AI were programmed to always pick the single most likely next word, it would produce exactly the same output every time it saw the same prompt.

For example, if every sentence starting with:

The patient was…

always continued with:

diagnosed…

then, no matter the situation, it would produce identical responses—
sounding robotic and lacking the richness of human language.


🔹 Variation Arises from “Soft” Probability-Based Choice

Instead, the AI assigns probabilities to each potential next word—
and doesn’t always pick the top one.

Sometimes, it selects a second- or third-ranked option,
introducing variety while still staying within the bounds of natural, context-appropriate language.

For example:

  • One time: “The patient was diagnosed…”
  • Another time: “The patient was admitted…”
  • Another: “The patient was treated…”

This flexibility is possible because the AI samples from a distribution, not a fixed list.
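A minimal sketch of this sampling step, using Python's standard library. The probabilities are the invented ones from Chapter 2, and the temperature parameter is an addition of this sketch (a control commonly used in real systems) that flattens or sharpens the distribution:

```python
import random

# Candidate next words with invented probabilities (cf. Chapter 2).
probs = {"diagnosed": 0.45, "admitted": 0.20, "treated": 0.10,
         "walking": 0.05, "banana": 0.001}

def sample_next(probs, temperature=1.0, rng=random):
    """Sample one word from the distribution instead of always taking
    the top one. temperature > 1 adds variety; < 1 plays it safe."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]

random.seed(0)  # seeded only to make this demo reproducible
print([sample_next(probs) for _ in range(5)])
```

Run the loop a few times and "diagnosed" dominates, but "admitted" and "treated" appear too, which is exactly the controlled variety described above.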


🔹 Medical Communication Benefits from This Flexibility

Such variation is especially useful in healthcare, where tone, precision, and clarity matter.

Examples:

  • Diversifying how things are explained
    → Instead of always saying “This is an antihypertensive medication,” the AI might say:
    “This medication helps lower your blood pressure,”
    adjusting the phrasing to the patient’s level of understanding.
  • Adjusting diagnostic nuance
    → “Diagnosed with pneumonia” vs. “Pneumonia was suspected,” depending on the level of clinical certainty.
  • Avoiding rigid templates
    → Rather than sounding like a form letter, the AI produces more human-like, situationally appropriate language.

🔹 AI Actively Adjusts Probabilities Based on Context

Of course, the AI doesn’t choose words at random.

It carefully reads the preceding context and adjusts its internal probabilities accordingly.
Then, from that adjusted distribution, it selects a word—sometimes the top choice, sometimes a close alternative.

What looks like “creative variation” is actually powered by mathematical probability.


In the next chapter, we’ll look behind the scenes at what enables AI to understand context so well—
and how the Transformer architecture, introduced in Vol.9, supports this sentence-by-sentence generation.

🟦 Chapter 5: What the Transformer Supports Is Contextual Understanding

In the previous chapters, we learned that AI generates sentences by choosing the next word based on probability—and that this approach enables both natural and flexible language.

But AI doesn’t just look at the most recent word when choosing what comes next.
It considers the entire prior context to decide what word fits best.

The architecture that makes this possible is the Transformer, which we explored in Vol.9.


🔹 What Is “Context”?

When humans read or write, we don’t just think about the literal meaning of each word.
We also consider how it fits into the overall flow of the sentence or paragraph.

Take the following two sentences:

  • The patient was diagnosed with pneumonia.
  • The patient was admitted to the hospital.

Both begin with “The patient was”, but the words that follow are very different:

  • diagnosed with → followed by a diagnosis
  • admitted to → followed by a place

In both cases, choosing the correct next word requires understanding the entire structure and direction of the sentence so far.


🔹 The Transformer Sees the Whole Sentence

Older models like RNNs (Recurrent Neural Networks) processed sentences one word at a time, in sequence.
This made it difficult to retain earlier information, especially in longer sentences.

The Transformer, however, uses a mechanism called Self-Attention to look at all the words in a sentence at once.

It can see:

  • How “diagnosed” connects to “patient” and “pneumonia”
  • How “admitted” naturally leads to “hospital”

Because it understands every word’s relationship to every other,
the Transformer can generate text that respects context and meaning.
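The computation at the heart of Self-Attention can be sketched in a few lines. The 2-dimensional word vectors below are invented, and the learned query/key/value projections of a real Transformer are omitted; what remains is the core idea: every word's new representation is a weighted mix of all the word vectors, with weights given by similarity.

```python
import math

words = ["The", "patient", "was", "diagnosed"]
vecs = [[0.1, 0.0], [0.9, 0.2], [0.2, 0.1], [0.8, 0.6]]  # invented toy vectors

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vecs):
    """For each word: score it against every word (scaled dot product),
    softmax the scores into attention weights, then mix all vectors."""
    d = len(vecs[0])
    output = []
    for query in vecs:
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in vecs]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, vecs)) for i in range(d)]
        output.append(mixed)
    return output

for word, vec in zip(words, self_attention(vecs)):
    print(f"{word:10s} {[round(x, 2) for x in vec]}")
```

Because every word attends to every other word in a single step, no information from earlier in the sentence is "forgotten" the way it can be in an RNN.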


🔹 How the Transformer Powers Language Generation

In clinical documentation, many types of information are interconnected:

  • “Fever,” “cough,” “shortness of breath” → suggest “pneumonia”
  • “Diabetes history” + “foot ulcer” → imply “infection risk”
  • “Lab value trends” + “yesterday’s vitals” → influence “current status”

Transformer-based models can simultaneously consider all these elements, creating explanations or summaries that faithfully reflect clinical reasoning.


🔹 Maintaining Logical Flow While Generating Text

Because of the Transformer, AI doesn’t just string words together mechanically.

It can maintain coherence and consistency, even in longer text.

That’s why it can generate:

  • Consistent and accurate summaries of medical records
  • Grammatically correct and meaningful sentences
  • Explanations using appropriate medical terminology

This structural understanding is what makes Transformer-based language models so powerful in healthcare.


In the next chapter, we’ll see how combining three core capabilities—word meaning, attention, and probabilistic prediction—helps AI generate text that truly feels human-like.

🟦 Chapter 6: Why Does AI-Generated Text Feel “Human-Like”?

Have you ever read a response from a generative AI like ChatGPT and thought:

“This sounds like something a real person would write.”

Even when the content is technical—such as a medical explanation—the wording often feels smooth, appropriate, and natural.

This human-likeness is not accidental.
It’s the result of combining three core abilities that work together behind the scenes.


🔹 The Three Powers Behind Human-Like Text

AI can produce natural, context-sensitive language because it brings together:

  1. Word Embeddings (Word Vectors)
    → Words are represented as numerical vectors, which capture their meaning and relationships.
    For example, “fever” and “infection” are placed close together in vector space.
  2. Attention
    → The model calculates which words in a sentence it should focus on.
    For instance, to understand “diagnosed”, it pays attention to “patient” and “pneumonia”.
  3. Probabilistic Word Selection
    → It predicts and selects the next word based on the surrounding context, one word at a time.
    Like: “The patient was…” → “diagnosed”, “admitted”, etc.

Together, these capabilities enable AI to generate not just grammatically correct, but semantically meaningful and stylistically appropriate sentences.
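Power 1 (the vector-space closeness of related terms like "fever" and "infection") can be illustrated with cosine similarity. The 3-dimensional vectors below are invented for the example; real embeddings are learned from data and have hundreds of dimensions:

```python
import math

# Invented toy embeddings: medically related terms point in
# similar directions, unrelated terms do not.
embeddings = {
    "fever":     [0.9, 0.8, 0.1],
    "infection": [0.8, 0.9, 0.2],
    "banana":    [0.1, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity: close to 1 = similar meaning, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(embeddings["fever"], embeddings["infection"]))  # high
print(cosine(embeddings["fever"], embeddings["banana"]))     # low
```

This geometric closeness is what lets the probability machinery of powers 2 and 3 prefer "infection" over "banana" as a continuation.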


🔹 The Intelligence Behind Medical-Grade Language

This framework allows AI to handle:

  • Not just word meaning, but also
  • Relationships between terms
  • Medical phrasing and variation

For example:

  • When it sees “fever”, it can decide whether it connects to “common cold” or “infection” based on surrounding context.
  • When it sees “positive”, it can interpret whether it means a test result or an emotional tone—depending on how it’s used.

This flexible understanding of meaning is built on the synergy of all three mechanisms.


🔹 Like a Human, Choosing Words Step by Step

AI constructs sentences in a way that resembles human language production:

“What’s the next natural thing to say here?”

This step-by-step generation mirrors how people speak or write,
but the AI does it using mathematical probability and contextual calculation.


In the next and final chapter, we’ll summarize everything we’ve learned:
How AI “weaves” language together—one word at a time, using probability, context, and meaning.

🟦 Chapter 7: How AI “Weaves” Language — A Summary

As we’ve seen throughout this session, AI generates text through a simple yet powerful process:
it repeats one core task again and again:

Choose the next word based on the previous context and probabilities.

In doing so, AI creates sentences much like a person spins thread—
one strand at a time, gradually forming a coherent whole.


🔹 Step-by-Step: How AI Builds a Sentence

Let’s review how this sentence construction works in practice:

  1. Generates one word at a time
    → AI doesn’t output an entire sentence at once.
    It builds it incrementally:
    “The” → “The patient” → “The patient was” → “The patient was diagnosed…”
  2. Considers the entire context
    → It doesn’t just look at the last word.
    AI takes into account all prior words to make the next choice as natural and relevant as possible.
  3. Uses the Transformer to understand relationships
    → Through Self-Attention, it analyzes all word relationships at once,
    ensuring the sentence maintains logical and semantic coherence.
  4. Chooses words based on probability, not strict rules
    → This enables AI to produce flexible and context-sensitive language,
    rather than robotic, repetitive phrasing.

Thanks to this structure, AI isn’t just a response machine.
It behaves like a true writer—capable of generating thoughtful, purpose-driven sentences.


🔹 Real-World Use in Clinical Practice

This word-by-word generation system underlies many emerging medical applications:

  • Drafting clinical records
    → AI assists physicians by suggesting phrases based on partial input.
  • Generating explanatory text
    → Producing patient-friendly explanations in natural language.
  • Summarizing interviews or patient histories
    → Creating structured summaries from free-form notes.

In all of these, AI needs to ask:

  • What comes next?
  • What wording makes sense here?
  • What would be easiest for a human to understand?

And it answers those questions by drawing on its core capabilities:
word meaning, contextual focus, and probabilistic prediction.


🔹 Final Thoughts: AI Weaves Words Using Meaning and Context

Across this three-part special on Generative AI and Language,
we’ve explored how AI reads and writes language with increasing sophistication.

In Vol.8, we saw how words are transformed into vectors—numerical representations of meaning.
This creates an internal map of language, where related terms are located close together.

In Vol.9, we learned about Attention—how AI determines which words to focus on within a sentence, unlocking contextual understanding.

In this session (Vol.10), we explored how AI uses that knowledge to generate sentences,
choosing each word one by one, based on context and probability.

AI doesn’t “feel” or “intend” like a human.
But by calculating meaning, tracking context, and selecting words probabilistically,
it can appear to understand and communicate like us.

This isn’t just a technical breakthrough—it’s the foundation for meaningful applications across healthcare, education, and society at large.

By understanding how AI works, we can confidently and responsibly integrate it into medical practice, not as a black box, but as a transparent and trustworthy tool.


This concludes our foundational series on how AI reads and writes language.

In the next phase of our “Learn by Building! Medical AI x Generative Models” series,
we’ll dive into hands-on practice, exploring actual code and the mathematical foundations of AI.

Understanding the formulas and algorithms behind these systems empowers us to move from being passive users to informed co-creators of AI-powered healthcare.

Let’s now take the next step—by building it together!


⚠️ Disclaimer

This content is based on information available at the time of writing.
Please note that updates to tools, libraries, or technologies may result in changes to the described content.

This material is intended for educational purposes only and should not be considered medical advice.
If applying these technologies in actual clinical settings, please ensure compliance with all relevant laws and guidelines (e.g., from the Ministry of Health, Labour and Welfare [MHLW], PMDA, METI, or relevant academic societies), and seek expert consultation as needed.

When using generative AI, particular caution must be taken regarding issues such as hallucinations (inaccurate outputs) and algorithmic bias.
A human expert should always review and validate any AI-generated outputs before clinical use.

This content includes portions drafted with the assistance of AI. While every effort has been made to ensure accuracy, any situation requiring professional judgment—such as in medicine, law, or education—should always be evaluated by a qualified specialist.


