Bil Arikan

Learning, Patterns, Performance

I’m into learning, and into getting design, experiences, systems, patterns, people, data, and technology to work together. I’m a husband, dad, and tinkerer, Toronto-based and overcaffeinated by early morning.

Recent

Claude for Small Business --- two edges of the same toggle

Claude for Small Business is a one-click plugin that puts Claude inside the tools small business owners already use. In this post I want to look at two implications for the SMB software market: why your product almost has to be a first-class citizen inside tools like Claude to stay in the candidate set, and why being in that candidate set also puts you a step closer to being benchmarked, ranked, and eventually replaced by the platform itself. I also walk through the plugin-and-connector architecture behind the payroll workflow Anthropic uses in the launch video.

Mapping the code-as-video landscape --- deterministic to generative, with a learning-content lens

The first two posts in this series put Editframe through its paces on a hello-world clip and a 71-second product walkthrough. This third post zooms out. The question is no longer whether code-as-video works for L&D; the first two experiments answered that. It is which other frameworks belong in the picture, where they sit on a deterministic-to-generative spectrum, and what each tier can credibly contribute to a learning content development workflow.

Code-as-video with Editframe --- video production as a dev workflow

The first experiment closed with a question: can an agent take structured product information and turn it into a short, repeatable video composition? This second experiment is the answer. The interesting part is not the finished seconds of video. It is what this workflow does to the way Learning Program Owners, Learning Designers, and Learning Developers traditionally split the work.

Hello world with Editframe --- video production as a dev workflow

Can video production start to look more like a dev workflow, where the composition is code, the output is reproducible, and the agent can help work through the rough edges? Experiment 01 – a hello-world Editframe project, scaffolded, iterated, and rendered locally, with the failure points captured for the next pass.
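To make "the composition is code" concrete before reading the post: the sketch below is illustrative only, using a hypothetical Slide type and a plain ffmpeg invocation rather than Editframe’s actual API. The point is the shape of the workflow: the composition lives in version-controlled code, and rendering it twice yields the same clip.

```ts
// Illustrative sketch only: a hypothetical Slide type and a plain ffmpeg
// call stand in for Editframe's real API. Requires Node.js and an ffmpeg
// build with the drawtext filter (libfreetype) on the PATH.
import { execFileSync } from "node:child_process";

type Slide = { text: string; durationSec: number };

// The composition is data: diffable, reviewable, reproducible.
const composition: Slide[] = [
  { text: "Hello, world", durationSec: 3 },
  { text: "Rendered from code", durationSec: 3 },
];

function renderSlide(slide: Slide, outFile: string): void {
  execFileSync("ffmpeg", [
    "-y",
    // Generate a solid background of the requested duration...
    "-f", "lavfi",
    "-i", `color=c=black:s=1280x720:d=${slide.durationSec}`,
    // ...and draw the slide text centered on it.
    "-vf",
    `drawtext=text='${slide.text}':fontcolor=white:fontsize=64:` +
      `x=(w-text_w)/2:y=(h-text_h)/2`,
    outFile,
  ]);
}

composition.forEach((slide, i) => renderSlide(slide, `slide-${i}.mp4`));
```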

From blocked to built: decomposing a stuck project with human + AI collaboration

A content adaptation project had been stalled for months. The source material existed, but no team had a clear mandate to adapt it, its training modality was wrong for the target audience, and the resourcing case was too thin to justify a traditional approach. The guiding question I was working with: can a single practitioner move a stuck, multi-stakeholder project to a validated proposal using AI-assisted work decomposition, and if so, what does that actually look like?

In-app Live Assistant: Part 2 --- Building the Screen-Sharing Version

The ADK Dev UI supports camera but not screen sharing. In this post I build a custom client that swaps getUserMedia for getDisplayMedia, writes a 16kHz AudioWorklet from scratch, and sends 1 FPS JPEG screen snapshots to the same WebSocket endpoint. The agent now sees the application instead of the user’s face.
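The heart of that swap fits in a short sketch. In the code below, the WebSocket URL and the raw-JPEG message format are assumptions for illustration, not the ADK’s actual wire protocol; everything else is standard browser API.

```ts
// Minimal sketch of the 1 FPS screen-snapshot loop. The wsUrl parameter and
// the raw-blob message format are illustrative assumptions; the real client
// speaks whatever protocol the ADK WebSocket endpoint expects.
async function streamScreenSnapshots(wsUrl: string): Promise<void> {
  // getDisplayMedia prompts for a screen/window/tab, where getUserMedia
  // would have opened the camera.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const [track] = stream.getVideoTracks();

  // Play the capture into an off-DOM video element so frames can be sampled.
  const video = document.createElement("video");
  video.muted = true;
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;
  const ws = new WebSocket(wsUrl);

  const timer = setInterval(() => {
    if (ws.readyState !== WebSocket.OPEN) return;
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    // Modest JPEG quality keeps each once-per-second frame small.
    canvas.toBlob((blob) => blob && ws.send(blob), "image/jpeg", 0.7);
  }, 1000); // 1 FPS

  // Stop cleanly when the user ends screen sharing.
  track.addEventListener("ended", () => {
    clearInterval(timer);
    ws.close();
  });
}
```

The 16kHz AudioWorklet capture from the post runs alongside this loop; the snapshot path above is the part that changes the agent’s view from face to application.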