# Building Learning Content for Humans and Agents
I came back to Andrej Karpathy’s YC talk with a learning-design lens, and one question stayed with me:
Are we building learning content for the way work used to happen, or for the way work is starting to happen now?
## Goal
In this post, I want to map a practical shift for L&D teams:
Move from course packages as final outputs to open knowledge assets that can support both humans and agents.
## Working assumption
My assumption is that we are entering a partial-autonomy model. Humans do judgment-heavy work. Agents help with retrieval, summarization, and workflow support.
In this case, learning content has two audiences:
- Humans who need context, clarity, and practice.
- Agents that need machine-readable structure.
Many current learning stacks still optimize mostly for audience #1.
## Current architecture gap
Tools like Storyline, Camtasia, or Vyond can produce strong visual learning experiences. The issue is not design quality. The issue is content architecture.
In many implementations, the output is:
- A complex web app (heavy DOM + JavaScript state).
- Tool-specific, proprietary runtime logic.
- Wrapped inside SCORM for LMS tracking.
This works for launch-and-track, but it is weak for retrieval across systems.
If someone asks an assistant, “What are our five critical safety steps?”, the system should answer quickly and correctly. In many SCORM-heavy setups that is hard, because the answer is buried in tool-specific runtime state rather than exposed as readable text.
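To make the contrast concrete, here is a minimal sketch of answering that question from an open Markdown doc. Everything in it is an illustrative assumption: the document text, the heading layout, and the naive word-overlap scoring all stand in for a real retrieval stack.

```python
# Sketch: answer a question directly from an open Markdown asset.
# The doc content and the word-overlap scoring are illustrative only.

def split_sections(markdown: str) -> list[tuple[str, str]]:
    """Split a Markdown document into (heading, body) pairs on '## ' lines."""
    sections, heading, body = [], "intro", []
    for line in markdown.splitlines():
        if line.startswith("## "):
            sections.append((heading, "\n".join(body)))
            heading, body = line[3:].strip(), []
        else:
            body.append(line)
    sections.append((heading, "\n".join(body)))
    return sections

def best_section(query: str, sections: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the section with the highest word overlap with the query."""
    q = {w.strip("?.,!") for w in query.lower().split()}
    return max(sections, key=lambda s: len(q & set((s[0] + " " + s[1]).lower().split())))

doc = """# Plant safety
## Five critical safety steps
1. Isolate energy sources.
2. Verify lockout.
## Reporting
File incidents within 24 hours.
"""

heading, body = best_section("What are our five critical safety steps?", split_sections(doc))
```

With the content exposed as plain sections, even this crude scorer lands on the right heading; a SCORM package offers no comparable surface to query.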
## Model I am testing
My working model is course experience on top of open knowledge assets.
Markdown is useful here because it is:
- Easy for humans to read and edit.
- Easy for machines to parse.
- Portable across multiple outputs.
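As a sketch of "easy for machines to parse": a hypothetical Markdown asset with key-value front matter can be read with nothing but the standard library. The field names (title, owner, review_date) are my own illustrative assumptions, not a prescribed schema.

```python
# Sketch: parse a Markdown knowledge asset with simple key-value
# front matter. Field names are illustrative assumptions.

def parse_asset(text: str) -> dict:
    """Return {'meta': {...}, 'body': str} from a Markdown asset."""
    meta, body = {}, text
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        for line in header.splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return {"meta": meta, "body": body.strip()}

asset = """---
title: Lockout-tagout procedure
owner: safety-team
review_date: 2025-01-15
---
## Steps
1. Notify affected employees.
"""

parsed = parse_asset(asset)
```

The same file stays readable in a text editor, renders as a course page, and yields structured metadata for an agent, which is the portability claim in miniature.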
## Directional comparison
The comparison is directional rather than measured: closed course packages win on launch-and-track and visual polish, while open knowledge assets win on retrieval, reuse, and portability. That is the pattern I keep seeing in practice.
## Practical pilot (90 days)
I do not think teams need a full rebuild to start. A narrow pilot is enough.
- Pick one high-value learning journey (for example onboarding or safety).
- Extract core procedures into structured Markdown docs.
- Keep existing course modules, but point them at those open Markdown source docs.
- Add retrieval so assistants can answer from those same docs.
- Track time-to-answer and on-the-job support usage.
This creates a low-risk bridge from legacy delivery to future-ready learning infrastructure.
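The retrieval step above can be sketched as a heading-level index over the extracted procedure docs, so an assistant can cite the exact section it answered from. Document names and contents here are invented for illustration.

```python
# Sketch of the pilot's retrieval step: index Markdown procedure docs
# by '## ' heading. Doc names and contents are invented examples.

def chunk(doc_name: str, markdown: str):
    """Yield (doc_name, heading, text) chunks, one per '## ' section."""
    heading, lines = None, []
    for line in markdown.splitlines():
        if line.startswith("## "):
            if heading:
                yield (doc_name, heading, "\n".join(lines).strip())
            heading, lines = line[3:].strip(), []
        elif heading is not None:
            lines.append(line)
    if heading:
        yield (doc_name, heading, "\n".join(lines).strip())

corpus = {
    "onboarding.md": "## Day one\nCollect badge.\n## Tools access\nRequest accounts.",
    "safety.md": "## Critical steps\n1. Isolate energy.\n",
}

index = [c for name, text in corpus.items() for c in chunk(name, text)]
```

Each chunk carries its source doc and heading, which is what makes answers auditable and makes the time-to-answer metric easy to instrument later.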
## How I am building capability through this
This is also how I am developing my own capabilities:
- Markdown-first knowledge design for reusable content operations.
- Information architecture for retrieval quality.
- Pair-programming with AI agents to prototype faster.
- Stronger judgment loops: generate with AI, decide with human responsibility.
In short: architect openly, automate selectively, and keep accountability human.
## Next step
For my next iteration, I want to run this model on one concrete workflow end-to-end, then compare outcomes against a traditional course-only flow.
The core idea remains: the bigger opportunity is not only AI features in tools. It is the knowledge architecture underneath those tools.
Related talks:
- Andrej Karpathy at Y Combinator: Software Is Changing (Again)
- Department of Product commentary: Andrej Karpathy’s latest talk