SelfMade.
Briefing pipeline · v1
The ad-creative system

How we ship a graded batch of ads.

An end-to-end pipeline for a single brand. Brand setup → data brief → concept slate → copy refinement → AI generation → grading → delivery. This page maps the steps and explains what each canonical doc in /Briefing/Docs/ is for, so anyone walking in cold can find the right file in 30 seconds.

On this page

  1. "I'm new — which doc do I open?"
  2. The end-to-end pipeline (Phase A → G)
  3. The six canonical docs
  4. How the docs relate
  5. Glossary

"I'm new — which doc do I open?"

I want to onboard a brand-new client (set up their brand bible, voice agent, spec cards from scratch).
SETUP_NEW_CLIENT.md. Run once per brand. Output: a complete client folder with all the artifacts every batch reads from.
I want to ship a sales proposal for a brand we don't work with yet (cold prospect, no Meta history).
SALES_PROPOSAL_STARTER_PROMPT.md. The full proposal flow — brand scrape, 14 spec ads, hosted proposal site, sendable email.
I want to ship a recurring batch for a brand we already work with (Meta data is ingesting, brand bible exists).
EXISTING_CLIENT_STARTER_PROMPT.md. Pulls Meta + Adlib data, builds a data-driven slate, ships graded PNGs.
I want the canonical reference for HOW the slate, audit, copy refinement, and grading work — not a starter prompt, the playbook itself.
BATCH_GENERATION_PIPELINE.md. The two starter prompts wrap this. Every phase number used elsewhere is defined here.
I'm at the slate-locked-but-not-generated stage and need to know what STOP gate to fill in before burning Fal credits.
AUDIT_TEMPLATE.md. Copy into the batch's Generated Assets/{date}/AUDIT.md. §1 Visual at Phase B; §2 Voice at Phase C. Both are STOP gates.
I'm building the actual generate-ads-gpt2.mjs script — I need the prompt-assembly rules, the SAFE_ZONE block, the Tier-1 hard rules, the Fal API call structure.
GPT_IMAGE_2_PIPELINE.md. The technical reference for Phase D (generation). Has the prompt blocks every batch script reads verbatim.

The end-to-end pipeline (Phase A → G)

Every batch — for any brand, sales proposal or existing client — flows through these seven phases. The two starter prompts (SALES_PROPOSAL_STARTER_PROMPT.md and EXISTING_CLIENT_STARTER_PROMPT.md) are wrappers that orchestrate this same sequence with brand-specific Phase A inputs.

Phase A

Brand setup  /  Data brief

Sales proposal: scrape the brand site, pull design tokens, read the discovery transcript, build spec cards.
Existing client: run pull-insights.mjs against the iteration tracker — Meta time-series + Adlib gap analysis. Produces EXTEND / RETIRE / FILL recommendations.

Stop for user

→ DATA_BRIEF.md / spec cards

Phase B

Concept slate  +  visual diversity audit

Build the 14-row slate (concept · format · persona · angle · emotion · SKU per slot). Pick the format-file variant per slot. Apply the King's Hawaiian archetype quotas (≥2 platform-UGC, ≥1 macro, ≥1 outdoor, ≥1 type-only, ≥1 chaotic). Then fill in AUDIT.md §1 Visual — surface bucket / lighting / primary subject / color cast / Pinterest grid test.

Stop for user

→ SLATE.md + AUDIT.md §1
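The archetype quotas in Phase B lend themselves to a mechanical pre-check before the user STOP gate. A minimal sketch, assuming a slate array with a hypothetical `archetype` field per slot (the real slate lives in SLATE.md and its column names may differ):

```javascript
// Sketch: slate-level archetype quota check. The quota keys and the
// `archetype` field are illustrative, not canonical.
const QUOTAS = {
  "platform-ugc": 2, // ≥ 2 platform-native UGC
  "macro": 1,        // ≥ 1 macro extreme close-up
  "outdoor": 1,      // ≥ 1 outdoor/location
  "type-only": 1,    // ≥ 1 type-only/no-product
  "chaotic": 1,      // ≥ 1 chaotic tablescape
};

function checkQuotas(slate) {
  const counts = {};
  for (const ad of slate) counts[ad.archetype] = (counts[ad.archetype] || 0) + 1;
  const missing = Object.entries(QUOTAS)
    .filter(([archetype, min]) => (counts[archetype] || 0) < min)
    .map(([archetype, min]) => `${archetype}: have ${counts[archetype] || 0}, need >= ${min}`);
  return { ok: missing.length === 0, missing };
}
```

A failing result lists exactly which quota is short, which is what the AUDIT.md §1 fill-in needs anyway.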

Phase C

Copy refinement  +  voice diversity audit

Run all 14 ads' COPY through Agents 01–09 (persona fit · angle · emotion · copy excellence · format compliance · brand compliance · Kahneman heuristics · static conversion · ad reviewer). Brand voice agent loads as Agent 06's override. Every ad must clear ≥ 90 on every rubric. Then fill in AUDIT.md §2 Voice — voice register tally / source citation per line / repeated-anchor / brand-swap test.

Hard gate 1 · Stop for user

→ COPY_REFINED.md + AUDIT.md §2
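Hard gate 1 is a simple predicate: every ad, every rubric, ≥ 90. A sketch of the gate check, assuming per-ad `scores` objects keyed by agent id (illustrative field names, not the canonical COPY_REFINED.md layout):

```javascript
// Sketch of Hard gate 1: flag any ad whose COPY scores below 90 on any
// of the agent rubrics. Field names here are hypothetical.
const GATE = 90;

function failingAds(ads) {
  return ads
    .map((ad) => ({
      id: ad.id,
      // every rubric whose score is under the gate
      failedRubrics: Object.entries(ad.scores)
        .filter(([, score]) => score < GATE)
        .map(([agent]) => agent),
    }))
    .filter((r) => r.failedRubrics.length > 0);
}
```

An empty return value means the batch may proceed to Phase D; anything else goes back through refinement.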

Phase D

Generate via Fal

Run generate-ads-gpt2.mjs sequentially against openai/gpt-image-2/edit. Per call: brand spec card + visual style card + per-ad SKU reference. Prompt assembled per GPT_IMAGE_2_PIPELINE.md § 3. Saves PNGs immediately on success with 4-attempt fetch retry.

→ Generated Assets/{date}/*.png
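The save-on-success-with-retry behavior can be sketched as a generic helper. This is not the canonical `generate-ads-gpt2.mjs` (that lives in GPT_IMAGE_2_PIPELINE.md § 4); `attemptFn` stands in for the real fal.subscribe + fetch call:

```javascript
// Sketch of the 4-attempt retry wrapper around the per-ad generation/fetch
// call. `attemptFn` is a placeholder for the real network call.
async function withRetry(attemptFn, maxAttempts = 4) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await attemptFn(attempt); // save PNG immediately on success
    } catch (err) {
      lastError = err;                 // transient failure: try again
    }
  }
  throw lastError;                     // all attempts exhausted
}
```

Saving each PNG the moment its call succeeds (rather than batching saves at the end) is what protects a long sequential run from losing earlier ads to a later crash.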

Phase E

Agent 10 grading

Run Agent 10 (creative grader, 11-gate rubric) on every PNG. Catches safe-zone violations, paper-inset artifacts, dropped headlines, fabricated stats, banned-word leakage, brand-fit failures. For existing clients, Gate 10 (Brand Performance Resonance) scores each ad's format × persona × angle × emotion against the action plan. Iterate any ad < 90.

Hard gate 2 — every ad ≥ 90

→ AGENT_10_REPORT.md

Phase F

Deliver 9:16 source-of-truth

Move final PNGs into Generated Assets/{date}/Final Delivery/9x16/. Filenames per RESIZE_WORKFLOW.md convention. Save the locked generate-ads-gpt2.mjs + AGENT_10_REPORT.md alongside as the audit trail.

Stop for user before Phase G

→ Final Delivery/9x16/*.png

Phase G

Resize to 1:1 and re-grade

Run resize-folder-to-1x1.mjs over the approved 9:16 folder. Re-grade every 1:1 through Agent 10 (1:1 has different safe-zone constraints — top blocked 0–110 px, bottom blocked 970–1080 px). Manual fallback for any 1:1 < 90.

→ Final Delivery/1x1/*.png
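The 1:1 safe-zone re-grade reduces to an interval-overlap test against the blocked bands named above. A sketch, assuming a hypothetical `{ top, bottom }` pixel bounding box per text element:

```javascript
// Sketch: 1:1 safe-zone violation check. Blocked bands are the ones the
// doc states for the 1080-px-tall 1:1 canvas; the box shape is assumed.
const BLOCKED_1x1 = [
  { from: 0, to: 110 },    // top blocked 0–110 px (platform overlay)
  { from: 970, to: 1080 }, // bottom blocked 970–1080 px
];

function violates1x1SafeZone(box) {
  // A box intersects a band unless it sits entirely above or below it.
  return BLOCKED_1x1.some((zone) => box.top < zone.to && box.bottom > zone.from);
}
```

The same predicate with different band values covers the 9:16 zones, which is why the re-grade after resizing is mandatory: a box that was safe at 9:16 can land in a blocked 1:1 band.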

Hard gates. The two non-negotiable production gates are Phase C (every COPY ≥ 90 on each of the 9 agent rubrics) and Phase E (every PNG ≥ 90 on the Agent 10 11-gate rubric). If a batch ships without these, the pipeline didn't run — re-do it.
The diversity STOP gates are at Phase B (visual cluster check) and Phase C (voice register check). They live at the briefing stage, not post-render — catching clustering after Fal generation costs $30+ in regen credits.

The six canonical docs

1. Onboard a new client SETUP_NEW_CLIENT.md

When to use: A new brand has been added (or an existing brand needs full re-setup). Run this once per brand before any batch flow can run for that brand.

What it does

Walks through 12 numbered steps to onboard a new client from cold:

  1. Step 0: Ask for client name, shortcode (3-letter code), website URL, category, vertical, any uploaded materials. Don't guess these.
  2. Step 1: Scan Clients/{Brand}/ for existing assets — brand guidelines, transcripts, style guides, performance data. These are gold and override anything from a website scrape.
  3. Step 2: Read the system files (the agent rubrics, asset-type templates, prompt-assembly rules) so the rest of the steps reference them correctly.
  4. Step 3: Create the client folder structure.
  5. Step 4: Scrape the website and build the brand bible at Clients/{Brand}/{SHORTCODE}-brand-bible.md — 10 sections covering company, product, customer, positioning, voice/tone, compliance, visual direction, personas/angles, paid social process, emotional architecture.
  6. Step 5: Distill the brand bible into a creative onboarding doc at {SHORTCODE}-creative-onboarding.md — the "read this before writing a single line of copy" cheat sheet.
  7. Step 6: Scrape the product catalog → {SHORTCODE}-product-catalog.md.
  8. Step 7: Create the brand voice agent at {SHORTCODE}-brand-voice-agent.md — the client-specific override for Agent 06 (Brand Compliance). 5 weighted scoring dimensions, hard violations list, proprietary truths.
  9. Step 8: Create the client overview at {SHORTCODE}.md.
  10. Step 9: HUMAN TASK — user inspects the brand's website CSS for exact fonts and hex colors. Claude is unreliable at this. Stop and wait for user paste.
  11. Step 10: Build the brand spec card + visual style card as HTML, render to PNG. These get uploaded to Fal alongside every prompt.
  12. Step 11: Create the Nano Banana base prompt template (legacy; modern flow uses GPT Image 2).
  13. Step 12: Verify all files exist. Ask the user "Ready to generate briefs? How many rounds?"

Output (per brand)

Clients/{BRAND}/
  ├ {SHORTCODE}.md                          ← client overview (personas, angles, KPI, team)
  ├ {SHORTCODE}-brand-bible.md              ← 10-section comprehensive brand reference
  ├ {SHORTCODE}-creative-onboarding.md      ← TL;DR brand cheat sheet
  ├ {SHORTCODE}-product-catalog.md          ← every SKU with specs
  ├ {SHORTCODE}-brand-voice-agent.md        ← Agent 06 override (brand-specific)
  ├ {SHORTCODE}-brand-spec-card.png         ← typography + color palette card
  └ {SHORTCODE}-visual-style-card.png       ← photography direction + mood card
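Step 12's "verify all files exist" check can be derived mechanically from the shortcode and the folder tree above. A sketch; the directory listing is passed in so the example stays filesystem-free, and "ABC" below is a hypothetical shortcode:

```javascript
// Sketch of the Step 12 setup verification: diff the expected per-brand
// file set (derived from the shortcode) against an actual listing of
// Clients/{BRAND}/.
const SUFFIXES = [
  ".md", "-brand-bible.md", "-creative-onboarding.md", "-product-catalog.md",
  "-brand-voice-agent.md", "-brand-spec-card.png", "-visual-style-card.png",
];

function missingSetupFiles(shortcode, dirListing) {
  const expected = SUFFIXES.map((s) => `${shortcode}${s}`);
  const present = new Set(dirListing);
  return expected.filter((f) => !present.has(f));
}
```

A non-empty result is exactly the condition under which the starter prompts halt and point back to SETUP_NEW_CLIENT.md.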

Why it matters

Every downstream batch (sales-proposal or existing-client) reads from this folder. Skipping setup or running it halfway is the #1 way to produce off-brand copy — there's nothing for Phase C's Agent 06 to anchor against. Both starter prompts will halt and point here if any prerequisite is missing.

2. Sales proposal end-to-end SALES_PROPOSAL_STARTER_PROMPT.md

When to use: A brand-new prospect — no Meta history, no existing client folder. You want to deliver a complete sales-call-ready proposal: 14 spec ads + hosted proposal site + sendable email.

What it does

Wraps the Phase A → G pipeline with sales-proposal-specific intake (discovery transcript drives voice DNA) and adds Phase F (build proposal site) + Phase G (compose email). The actual ad-generation phases are the same as the existing-client flow — they share the canonical BATCH_GENERATION_PIPELINE.md playbook.

What you need before starting

Slim version

If you only want the 14 ads (no proposal site, no email), run Phases A (steps 1-6), B, C, D, E only. The full prompt has a slim-version block at the bottom of the file with this scope.

3. Existing-client recurring batch EXISTING_CLIENT_STARTER_PROMPT.md

When to use: A brand that's already set up (brand bible / voice agent / spec cards exist) and has Meta data ingesting. You want to ship a graded batch — no proposal site, no email, just the ads.

What it does

Differentiated from the sales-proposal flow by Phase A: the data drives the slate. Instead of voice from a discovery transcript, the iteration tracker provides a spend-weighted action plan that allocates EXTEND / RETIRE / FILL slots automatically.
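The EXTEND / RETIRE / FILL allocation can be sketched as a small pure function. Field names (`dimension`, `spend`, `verdict`) are illustrative; the real action plan comes from pull-insights.mjs and its shape may differ:

```javascript
// Sketch: spend-weighted slot allocation. RETIRE dimensions are excluded;
// remaining rows are ranked by spend so proven winners (EXTEND) lead,
// with Adlib-gap rows (FILL) taking the rest.
function allocateSlots(actionPlan, totalSlots) {
  const retired = new Set(
    actionPlan.filter((r) => r.verdict === "RETIRE").map((r) => r.dimension)
  );
  return actionPlan
    .filter((r) => r.verdict !== "RETIRE" && !retired.has(r.dimension))
    .sort((a, b) => b.spend - a.spend)      // proven winners first
    .slice(0, totalSlots)
    .map((r) => ({ dimension: r.dimension, bucket: r.verdict }));
}
```

The point of doing this in code rather than by hand is the same as the audit template's: the allocation is a slate-level property, so it should be computed over the whole action plan, not decided ad by ad.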

What you need before starting

Encoded production lessons

This file has accumulated several universal patterns the user codifies after each batch; see the file itself for the current list.

4. The canonical playbook BATCH_GENERATION_PIPELINE.md

When to read: The two starter prompts wrap this. Read this when you want the canonical phase-by-phase reference — what each phase is supposed to produce, why it exists, and what failure mode it prevents.

What it does

Defines Phase 0 → Phase 8 as the universal sequence, independent of which starter prompt invokes it. Each phase has a documented failure mode the phase exists to prevent (most pulled from the BlackMask 2026-04-29 batch and subsequent learnings).

Why it matters

The starter prompts evolve fast (each batch adds new universal patterns). This file is the durable canonical reference — the failure modes documented here are the seven that have actually shipped at SelfMade and the rules above are the antidote to each. When the starter prompts contradict each other, this is the source of truth.

5. The slate audit AUDIT_TEMPLATE.md

When to use: At Phase B and Phase C of every batch. Copy this template into Generated Assets/{date}/AUDIT.md. Fill in §1 at Phase B; §2 at Phase C. Both are STOP gates the user has to approve before Phase D burns Fal credits.

What problem it solves

Per-ad rubrics (Phase 4's 9-agent refinement, Phase 6's Agent 10 grader) score each ad independently. They cannot catch slate-level diversity failures:

  1. Visual cluster failure: 4+ ads read as the same visual mood at thumbnail despite using "different" format archetypes.
  2. Voice register over-indexing: the slate skews heavily to one voice register even though every ad's copy passes the 9-agent refinement.

Both are slate-level failures. This template audits the slate as a whole.

Two sections, two phases

§1 Visual (filled at Phase B): surface bucket · lighting · primary subject · color cast · Pinterest grid test.
§2 Voice (filled at Phase C): voice register tally · source citation per line · repeated-anchor · brand-swap test.

Why briefing stage, not QC

Diversity is a property of the slate, not of any individual rendered ad. Catching clustering after Fal generation costs $30+ in regen per batch. The audit lives at the briefing stage — STOP gates at Phase B and Phase C — so the cluster gets fixed in the prompt, not in the rendered output.

6. The technical generation reference GPT_IMAGE_2_PIPELINE.md

When to use: At Phase D (generation). The script template, prompt-assembly rules, SAFE_ZONE block, Tier-1 hard rules, and Fal API call structure all live here. The starter prompts reference this doc verbatim — every generate-ads-gpt2.mjs script in any batch reads its prompt blocks from this file.

What's in it

  1. § 0 When to use this — picks between this doc, nano_banana_revision_prompt.md, and BATCH_GENERATION_PIPELINE.md.
  2. § 1 Setup — files needed on disk per client (spec cards, logo, product assets), aspect-ratio constraints (Fal hard rejects reference images > 3:1 long-edge), fal.ai credentials, image size selection (9:16 / 1:1 / 4:5).
  3. § 2 Brief input — the per-ad input block: brand, product, persona, angle, tone adjectives, hard rules, format, scene language, headline / subhead / callouts / CTA / kicker, people override.
  4. § 3 Assembly rules — 6 blocks Claude assembles in this exact order:
    1. § 3.1 Preamble (brand context)
    2. § 3.2 SAFE_ZONE block (verbatim) — the hard pixel constraints + background-continuity rules + coordinate-interpretation key. Copy this verbatim into every prompt.
    3. § 3.3 Brand typography (per spec card)
    4. § 3.4 Format-specific scene template (one of 22 format archetypes — features-benefits, headline, testimonial, before-after, ugc-tiktok, ugc-ig-story, handwriting-postit, press, etc.)
    5. § 3.5 Copy block (literal text only — no labels)
    6. § 3.6 Tier 1 rules (verbatim) — 14 hard-fail rules covering safe zones, instruction leakage, single continuous image, product fidelity, realistic light, no floating objects, sharp 0px corners, no em-dashes, brand banned words, typography fidelity, no paper-inset artifacts, scene specificity, brand DNA gate, variant rotation.
  5. § 4 Generation script — the canonical generate-ads-gpt2.mjs template. fal.subscribe("openai/gpt-image-2/edit", ...). Sequential per-ad calls. 4-attempt fetch retry. Save on success.
  6. § 5 Surgical product + lock matching — only upload the products that actually appear in the scene. Don't pass all SKUs as references on every call.
  7. § 6 Custom dimensions — 1088×1920 (9:16), 1080×1080 (1:1), 1080×1350 (4:5).
  8. § 7 Reference table — per-format expected count of text elements, scene archetype, common variants.
  9. § 8 Production learnings — accumulated lessons from real batches: text rendering, product fidelity, scene realism, the safe-zones-important-caveat, King's Hawaiian production learnings, Luma UNI-1 production learnings, copy fidelity, multiple reference images, aspect ratio fidelity, retries and stability.
  10. § 9 Picking a model — GPT Image 2 vs Nano Banana 2 trade-offs.
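The § 3 "exact order" requirement is mechanical enough to enforce in the batch script. A sketch; the block contents below are placeholders, since the real SAFE_ZONE and Tier 1 text must be copied verbatim from the doc:

```javascript
// Sketch: assemble the six § 3 blocks in the documented order, failing
// loudly if any block is absent. Keys are illustrative labels for
// § 3.1–3.6, not canonical identifiers.
const BLOCK_ORDER = [
  "preamble",      // § 3.1 brand context
  "safeZone",      // § 3.2 verbatim pixel constraints
  "typography",    // § 3.3 per spec card
  "sceneTemplate", // § 3.4 format archetype
  "copyBlock",     // § 3.5 literal text only
  "tier1Rules",    // § 3.6 verbatim hard-fail rules
];

function assemblePrompt(blocks) {
  const missing = BLOCK_ORDER.filter((k) => !blocks[k]);
  if (missing.length) throw new Error(`missing blocks: ${missing.join(", ")}`);
  return BLOCK_ORDER.map((k) => blocks[k]).join("\n\n");
}
```

Throwing on a missing block (rather than silently assembling five of six) is what keeps a prompt from going to Fal without, say, the SAFE_ZONE block.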

Why it matters

The SAFE_ZONE block in § 3.2 is the difference between an ad that ships and an ad with a headline rendered into Instagram's top status-bar zone. The Tier 1 rules in § 3.6 are the difference between an ad with a real-looking can on a real surface and an ad with paper-inset artifacts that the model invents. Every one of these rules came from a specific failure that shipped — they are not theoretical.

How the docs relate

The starter prompts (sales-proposal and existing-client) are orchestration wrappers. They invoke the canonical pipeline, point at the right brand-specific files, and add use-case-specific deliverables (proposal site for sales; nothing extra for existing-client). The pipeline doc is the canonical sequence. The audit template is a STOP gate inside the pipeline. The technical reference (GPT Image 2) is consumed at Phase D.

SETUP_NEW_CLIENT.md
  └─ run once per brand → produces brand bible, voice agent, spec cards, product catalog
                      ▼
        Clients/{Brand}/  (the brand's source-of-truth folder)
                      ▼
       ┌──────────────┴──────────────┐
       │                             │
SALES_PROPOSAL_STARTER_PROMPT.md     EXISTING_CLIENT_STARTER_PROMPT.md
(cold prospect, full proposal        (existing client, recurring batch)
 package)
       │                             │
       └──────────────┬──────────────┘
                      ▼
       BATCH_GENERATION_PIPELINE.md  (Phase 0 → 8 canonical reference)
         │
         ├── Phase 3 + 3.5 + 4.5 reference → AUDIT_TEMPLATE.md
         │        (§1 Visual at Phase B, §2 Voice at Phase C)
         │
         └── Phase 5 references → GPT_IMAGE_2_PIPELINE.md
                  (prompt-assembly, SAFE_ZONE block, Tier 1 rules, script template)

Glossary

Phase A–G: The seven canonical phases of any batch. A=brand setup/data brief · B=concept slate · C=copy refinement · D=generate · E=grade · F=deliver 9:16 · G=resize to 1:1.
Phases 0–8: Same flow, indexed differently in BATCH_GENERATION_PIPELINE.md. 0=brand setup · 1=concept slate · 2=variant selection · 3=visual audit · 3.5=thumbnail one-liners · 4=copy refinement · 4.5=voice audit · 5=generate · 6=grade · 7=iterate · 8=ship.
EXTEND / FILL / RETIRE: Slot allocation buckets in the existing-client flow. EXTEND = replicate proven winners. FILL = fill Adlib gaps with untested types. RETIRE = avoid dimensions that have failed.
King's Hawaiian quotas: Mandatory archetype quotas in every 12-20 ad batch: ≥ 2 platform-native UGC · ≥ 1 macro extreme close-up · ≥ 1 outdoor/location · ≥ 1 type-only/no-product · ≥ 1 chaotic tablescape. Named after the King's Hawaiian 2026-04-29 batch where their absence produced 9 of 14 monotonous editorial ads.
Hard gate 1: Phase 4 / Phase C — every ad's COPY must score ≥ 90 on each of the 9 ad-pipeline rubrics before Fal generation.
Hard gate 2: Phase 6 / Phase E — every generated ad must score ≥ 90 on Agent 10's 11-gate rubric before delivery.
Agent 06 brand-voice override: The brand-specific voice agent at Clients/{Brand}/{SHORTCODE}-brand-voice-agent.md replaces the generic Agent 06 (Brand Compliance) for that brand. 5 weighted scoring dimensions + brand's 8-10 proprietary truths.
SAFE_ZONE block: The verbatim hard-pixel-constraint block from GPT_IMAGE_2_PIPELINE.md § 3.2. Every Phase D prompt prefixes this. Top 400 px and bottom 400 px must be text-free. Without it, GPT Image 2 routinely places text in Instagram's top-overlay zone.
Tier 1 rules: The 14 hard-fail rules from GPT_IMAGE_2_PIPELINE.md § 3.6 appended to every prompt. Each came from a real shipped failure.
Visual cluster failure: Slate ships with 4+ ads reading as the same visual mood at thumbnail despite using "different" format archetypes. Caught by AUDIT_TEMPLATE.md §1.
Voice register over-indexing: Slate skews heavily to one voice register (founder-podcast, award-credibility, etc.) even though every ad's copy passes the 9-agent refinement. Caught by AUDIT_TEMPLATE.md §2.
Iteration tracker: The internal data warehouse at insights.selfmade.co. Nightly cron syncs Meta ad-grading per client. Powers Phase A in the existing-client flow via pull-insights.mjs.
Adlib: The vertical-benchmark service at adlib.getskipper.ai. Compares a brand's ad mix against the rest of its vertical (Beauty, Fashion, F&B, Wellness, Fitness, Pet, Home, Tech) and surfaces format/persona/angle gaps. Powers the FILL bucket in the data brief.