<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai-Agents on Bil Arikan</title><link>https://bil.arikan.ca/tags/ai-agents/</link><description>Recent content in Ai-Agents on Bil Arikan</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Fri, 08 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://bil.arikan.ca/tags/ai-agents/index.xml" rel="self" type="application/rss+xml"/><item><title>Mapping the code-as-video landscape --- deterministic to generative, with a learning-content lens</title><link>https://bil.arikan.ca/posts/code-as-video-landscape-for-learning-content/</link><pubDate>Fri, 08 May 2026 00:00:00 +0000</pubDate><guid>https://bil.arikan.ca/posts/code-as-video-landscape-for-learning-content/</guid><description>The first two posts in this series put Editframe through its paces on a hello-world clip and a 71-second product walkthrough. This third post zooms out. The question is no longer &amp;lsquo;does code-as-video work for L&amp;amp;D&amp;rsquo; &amp;mdash; the first two experiments answered that. It is which other frameworks belong in the picture, where they sit on a deterministic-to-generative spectrum, and what each tier can credibly contribute to a learning content development workflow.</description></item><item><title>Code-as-video with Editframe --- video production as a dev workflow</title><link>https://bil.arikan.ca/posts/code-as-video-with-editframe/</link><pubDate>Thu, 07 May 2026 00:00:00 +0000</pubDate><guid>https://bil.arikan.ca/posts/code-as-video-with-editframe/</guid><description>The first experiment closed with a question : can an agent take structured product information and turn it into a short, repeatable video composition? This second experiment is the answer. The interesting part is not the seconds. It is what this workflow does to the way Learning Program Owners, Learning Designers, and Learning Developers traditionally split the work.</description></item><item><title>Hello world with Editframe --- video production as a dev workflow</title><link>https://bil.arikan.ca/posts/video-production-to-dev-workflow-editframe-test-drive/</link><pubDate>Tue, 05 May 2026 00:00:00 +0000</pubDate><guid>https://bil.arikan.ca/posts/video-production-to-dev-workflow-editframe-test-drive/</guid><description>Can video production start to look more like a dev workflow, where the composition is code, the output is reproducible, and the agent can help move through the rough edges? Experiment 01 &amp;ndash; a hello-world Editframe project, scaffolded, iterated, and rendered locally, with the failure points captured for the next pass.</description></item><item><title>In-app Live Assistant : Part 2 --- Building the Screen-Sharing Version</title><link>https://bil.arikan.ca/posts/in-app-live-assistant-part-2-building-the/</link><pubDate>Mon, 06 Apr 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/in-app-live-assistant-part-2-building-the/</guid><description>The ADK Dev UI supports camera but not screen sharing. In this post I build a custom client that swaps getUserMedia for getDisplayMedia, writes a 16kHz AudioWorklet from scratch, and sends 1 FPS JPEG screen snapshots to the same WebSocket endpoint. The agent now sees the application instead of the user&amp;rsquo;s face.</description></item><item><title>Translation vs. 
Localisation Is the Wrong Frame</title><link>https://bil.arikan.ca/posts/translations-localizations-decomposition/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0000</pubDate><guid>https://bil.arikan.ca/posts/translations-localizations-decomposition/</guid><description>&lt;p&gt;I&amp;rsquo;ve been working on an agent team approach to translation and localisation of training content, and I got some pointed feedback, as well as debates on definitions: where exactly does translation end and localisation begin?&lt;/p&gt;</description></item><item><title>Building a Localisation Agent in Microsoft Copilot Studio</title><link>https://bil.arikan.ca/posts/building-a-localisation-agent-in-copilot-studio/</link><pubDate>Wed, 18 Mar 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/building-a-localisation-agent-in-copilot-studio/</guid><description>I stopped researching which expensive translation tool to buy and built an orchestrated AI agent instead — inside Microsoft 365, using Copilot Studio, without leaving the tenant.</description></item><item><title>In-app Live Assistant: Part 1 --- Walking Through Google's ADK Bidirectional Streaming Demo</title><link>https://bil.arikan.ca/posts/in-app-live-assistant-part-1-google-adk-bidirectional-streaming-demo/</link><pubDate>Mon, 09 Mar 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/in-app-live-assistant-part-1-google-adk-bidirectional-streaming-demo/</guid><description>I want to build a live AI assistant that can hear a user and see their screen at the same time, then talk back in real time. This post walks through getting Google&amp;rsquo;s ADK bidirectional streaming demo running locally with mic, camera, and voice out &amp;mdash; and documents the things that tripped me up.</description></item><item><title>Google Is Quietly Building the Agentic Web</title><link>https://bil.arikan.ca/posts/google-quietly-building-agentic-web/</link><pubDate>Fri, 06 Mar 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/google-quietly-building-agentic-web/</guid><description>In the same week that OpenAI and Anthropic dominated AI headlines over a rushed military deal, Google was quietly shipping the protocols that will define how agents browse, buy, and collaborate on the web: WebMCP, the Universal Commerce Protocol, and Agent2Agent.</description></item><item><title>Building Learning Content for Humans and Agents</title><link>https://bil.arikan.ca/posts/future-ready-learning-content/</link><pubDate>Fri, 13 Feb 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/future-ready-learning-content/</guid><description>A working model for designing learning content that supports both human understanding and agent retrieval.</description></item><item><title>Experiment: A Live In-App Assistant (Voice + Screen)</title><link>https://bil.arikan.ca/posts/live-streaming-in-app-assistant/</link><pubDate>Fri, 13 Feb 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/live-streaming-in-app-assistant/</guid><description>I tried to recreate a Google AI Studio proof-of-concept in code. 
The first path failed on authentication; the second produced a working bidirectional streaming prototype; the third pushed it toward a realistic scenario.</description></item><item><title>In-App Live Assistant</title><link>https://bil.arikan.ca/projects/live-in-app-assistant/</link><pubDate>Fri, 13 Feb 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/projects/live-in-app-assistant/</guid><description>&lt;p&gt;This is a project I am actively pursuing: an open-source “in-app live assistant” pattern that other product teams can embed.&lt;/p&gt;
&lt;p&gt;The core idea is simple: if the assistant can listen and &lt;em&gt;see the current UI&lt;/em&gt;, it can give concrete next-step guidance instead of generic chatbot answers.&lt;/p&gt;</description></item><item><title>Welcome: This Site as a Learning Lab</title><link>https://bil.arikan.ca/posts/welcome-to-hugo/</link><pubDate>Wed, 11 Feb 2026 00:00:00 -0500</pubDate><guid>https://bil.arikan.ca/posts/welcome-to-hugo/</guid><description>A practical welcome post: goals, current skill focus, and the workflow I am building in public.</description></item></channel></rss>