
Video Workflow

AI-powered product video production pipeline — from concept to final render, orchestrated by a single agent skill.

End-to-end product video production through AI agent skills. Nine specialized skills, one orchestrator. Give your agent a project directory and a product URL — it handles everything from competitor research to final render.

Phase 0: CONTEXT ──── Phase 1: STORYBOARD ──── Phase 2: PRODUCTION ──── Phase 3: VISUAL QA
│                     │                        │                        │
├ product-context     ├ vsl-storyboard-writer  ├ recording-checklist    ├ Brand audit
├ searching-videos    ├ storyboard (6-frame)   ├ voiceover-tts          ├ Animation check
├ getting-videos      ├ audio-director plan    ├ audio-director asm     ├ Storyboard match
└ analyzing-videos    └ seedance-prompts       └ Export + subtitles     └ User review

Phases 0–3 are documented across two pages:
Research Skills (Phase 0) — product-context, searching, getting, analyzing
Production Skills (Phase 2) — voiceover, audio, recording, Seedance


The master orchestrator. Call this first — it routes to all other skills automatically.

/iopho-video-director ./exp/my-project # new project
/iopho-video-director ./exp/my-project --mode continue # resume from last checkpoint
/iopho-video-director ./exp/my-project --phase 2 # jump to specific phase

new (default) — Start fresh. Asks 5 onboarding questions, then runs Phase 0:

  1. What product? — Name + URL
  2. What kind of video? — Demo / Explainer / Launch / Teaser / Feature / Comparison
  3. How long? — 15s / 30s / 60s / 90s / 120s+
  4. Where will it live? — YouTube / Product Hunt / TikTok / App Store / Bilibili / Xiaohongshu / LinkedIn
  5. Any references? — Video URLs you like (fed into Phase 0 research)

Creates project-plan.md then begins Phase 0.

continue — Reads project-plan.md, detects last completed phase, resumes:

no project-plan.md → switches to "new"
context.md missing → Phase 0
storyboard.md missing → Phase 1
no out/video.mp4 → Phase 2
no QA pass noted → Phase 3
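
The checkpoint detection above can be sketched as a small shell function. The file names come from this page; `detect_phase` itself is illustrative, not the skill's actual implementation:

```shell
# Sketch of the `--mode continue` resume logic: probe the project
# directory's artifacts and report where to pick up.
detect_phase() {
  dir="$1"
  [ -f "$dir/project-plan.md" ] || { echo "new"; return 0; }      # no plan → start fresh
  [ -f "$dir/context.md" ]      || { echo "phase-0"; return 0; }  # research not done
  [ -f "$dir/storyboard.md" ]   || { echo "phase-1"; return 0; }  # storyboard not done
  [ -f "$dir/out/video.mp4" ]   || { echo "phase-2"; return 0; }  # no final render
  echo "phase-3"                                                  # render exists → QA
}
```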

jump --phase N — Jumps to a specific phase with prerequisite checks:

Phase 1 requires: context.md
Phase 2 requires: context.md + storyboard.md + audio-plan.md
Phase 3 requires: out/video.mp4
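
A prerequisite gate like the one above amounts to checking that each required file exists before jumping. `require_files` below is a hypothetical helper, shown only to make the check concrete:

```shell
# Sketch of the `--phase N` prerequisite check: fail fast if any
# required artifact is missing.
require_files() {
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing prerequisite: $f" >&2; return 1; }
  done
}
# e.g. before jumping to Phase 2:
#   require_files "$dir/context.md" "$dir/storyboard.md" "$dir/audio-plan.md"
```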
| Phase | Skills called | Key outputs | Checkpoint |
| --- | --- | --- | --- |
| 0 Context | product-context → searching → getting → analyzing | context.md, research/storyboards/*.storyboard.md, pattern-analysis.md | User approves context summary |
| 1 Storyboard | vsl-storyboard-writer → audio-director plan | storyboard.md, vo-script.md, audio-plan.md | User reviews storyboard — changes are cheapest here |
| 2 Production | recording-checklist → voiceover-tts → audio-director assemble → seedance-prompts | out/video.mp4, audio/master-audio.mp3, *.srt | User watches final render |
| 3 Visual QA | (inline — no separate skill) | QA pass | Brand + animation + platform checks |

Visual QA runs inline without a separate skill. The director checks:

Brand compliance — colors, fonts, logo usage, tone match context.md
Animation restraint — no excessive zoom/pan, one focal animation per scene, no distracting transitions
Storyboard conformance — each scene matches storyboard intent, VO lines synced to visuals
Platform checks:

| Platform | Check |
| --- | --- |
| YouTube | Thumbnail-worthy first frame? End screen space? |
| Product Hunt | Works without sound? Auto-play friendly? |
| TikTok / Reels | Vertical crop preserves key content? |
| App Store | Within 30s limit? Shows real app UI? |
| LinkedIn | Professional tone? Subtitles visible? |
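
One of these checks is purely mechanical: the App Store 30 s limit. As a sketch, the duration (which would come from a tool such as ffprobe) can be compared in a POSIX shell; `check_appstore_length` is a hypothetical helper, not part of the skills:

```shell
# Sketch: enforce the App Store 30 s limit given a duration in seconds.
check_appstore_length() {
  # compare in milliseconds so fractional-second durations work in plain sh
  ms=$(awk -v d="$1" 'BEGIN { printf "%d", d * 1000 }')
  if [ "$ms" -le 30000 ]; then
    echo "ok"
  else
    echo "too long: ${1}s > 30s"
  fi
}
```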

After a full pipeline run:

{project-dir}/
├── project-plan.md ← phase tracker + decision log
├── context.md ← product/brand/audience context
├── storyboard.md ← scene-by-scene breakdown
├── vo-script.md ← VO lines + timecodes
├── audio-plan.md ← BGM + VO + SFX strategy
├── recording-checklist.md ← shot list for screen recording
├── research/
│ ├── storyboards/ ← reference .storyboard.md files
│ ├── downloads/ ← downloaded reference videos
│ └── pattern-analysis.md
├── public/
│ ├── videos/ ← raw screen recordings
│ ├── voiceover/ ← VO segments + master-vo.mp3
│ └── audio/ ← BGM + master-audio.mp3
└── out/
├── video.mp4 ← final render
├── video-vertical.mp4 ← 9:16 cut (if needed)
├── video.srt ← subtitles
└── video.vtt
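
After a run, the tree above doubles as a checklist. A minimal sanity check might probe for the key artifacts; `verify_outputs` is a hypothetical helper, not part of the skills:

```shell
# Sketch: confirm a finished pipeline run produced the key files above.
verify_outputs() {
  dir="$1"; missing=0
  for f in project-plan.md context.md storyboard.md vo-script.md \
           audio-plan.md out/video.mp4 out/video.srt; do
    [ -f "$dir/$f" ] || { echo "missing: $f"; missing=1; }
  done
  return "$missing"
}
```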

# Install all video skills at once
npx skills add iopho-team/iopho-skills --skill iopho-video-director
npx skills add iopho-team/iopho-skills --skill iopho-product-context
npx skills add iopho-team/iopho-skills --skill iopho-searching-videos
npx skills add iopho-team/iopho-skills --skill iopho-getting-videos
npx skills add iopho-team/iopho-skills --skill iopho-analyzing-videos
npx skills add iopho-team/iopho-skills --skill iopho-voiceover-tts
npx skills add iopho-team/iopho-skills --skill iopho-audio-director
npx skills add iopho-team/iopho-skills --skill iopho-recording-checklist
npx skills add iopho-team/iopho-skills --skill iopho-seedance-prompts
# Or all iopho skills at once
npx skills add iopho-team/iopho-skills --all -y

Global install (available across all projects):

npx skills add iopho-team/iopho-skills --all -g -y
| Skill | Phase | Purpose |
| --- | --- | --- |
| iopho-video-director | All | Master orchestrator |
| iopho-product-context | 0 | Project intake → context.md |
| iopho-searching-videos | 0 | Cross-platform video search |
| iopho-getting-videos | 0 | Download video/audio/subtitles |
| iopho-analyzing-videos | 0 | Reverse-engineer → .storyboard.md |
| iopho-voiceover-tts | 2 | Multi-engine TTS + assembly |
| iopho-audio-director | 1–2 | BGM + VO + SFX planning and mixing |
| iopho-recording-checklist | 2 | Screen recording shot list |
| iopho-seedance-prompts | 1–2 | Seedance 2.0 AI video generation |

Source: github.com/iopho-team/iopho-skills