Editframe

2026


Mapping the code-as-video landscape --- deterministic to generative, with a learning-content lens

The first two posts in this series put Editframe through its paces on a hello-world clip and a 71-second product walkthrough. This third post zooms out. The question is no longer whether code-as-video works for L&D; the first two experiments answered that. It is which other frameworks belong in the picture, where they sit on a deterministic-to-generative spectrum, and what each tier can credibly contribute to a learning content development workflow.

Code-as-video with Editframe --- video production as a dev workflow

The first experiment closed with a question: can an agent take structured product information and turn it into a short, repeatable video composition? This second experiment is the answer. The interesting part is not the 71 seconds of video. It is what this workflow does to the way Learning Program Owners, Learning Designers, and Learning Developers traditionally split the work.

Hello world with Editframe --- video production as a dev workflow

Can video production start to look more like a dev workflow, where the composition is code, the output is reproducible, and an agent can help work through the rough edges? Experiment 01 – a hello-world Editframe project, scaffolded, iterated, and rendered locally, with the failure points captured for the next pass.
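To make the "composition is code, output is reproducible" idea concrete, here is a minimal sketch in TypeScript. It does not use Editframe's actual SDK; every type and function name below is hypothetical, invented for illustration. The point is only that when a video is a plain data structure, the same input always expands to the same render plan, which is what makes the output diffable and repeatable.

```typescript
// Hypothetical "composition as code" sketch. These names are NOT from
// Editframe's API; they stand in for any declarative video description.

type Clip = {
  kind: "text" | "image";
  src: string;
  start: number;    // seconds
  duration: number; // seconds
};

interface Composition {
  width: number;
  height: number;
  fps: number;
  clips: Clip[];
}

// Deterministically expand the composition into per-frame draw instructions.
// Same composition in, same plan out -- no hidden state, no timeline GUI.
function renderPlan(comp: Composition): string[] {
  const frames: string[] = [];
  for (const clip of comp.clips) {
    const first = Math.round(clip.start * comp.fps);
    const last = Math.round((clip.start + clip.duration) * comp.fps);
    for (let f = first; f < last; f++) {
      frames.push(`frame=${f} draw ${clip.kind}:${clip.src}`);
    }
  }
  return frames;
}

const hello: Composition = {
  width: 1280,
  height: 720,
  fps: 30,
  clips: [{ kind: "text", src: "Hello, world", start: 0, duration: 2 }],
};

const a = renderPlan(hello);
const b = renderPlan(hello);
console.log(a.length); // 60: a 2-second clip at 30 fps
console.log(JSON.stringify(a) === JSON.stringify(b)); // true: reproducible
```

Because the composition is just data, it can live in version control, be generated by an agent from structured product information, and be re-rendered byte-for-byte when the source changes.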