Content Creation GitHub Copilot Skill
February 9, 2026
Implementation plan for building a GitHub Copilot skill that enables natural language content creation
The Vision
Build a skill for GitHub Copilot that enables natural language content creation for my Next.js blog. Instead of manually filling templates or typing commands, just tell Copilot "make a blog post about X" and it handles everything.
What We're Building: Skills vs Extensions vs MCP
GitHub Copilot Skills (what this doc describes):
- Custom capabilities you build for YOUR workflow
- Private to your codebase
- Run locally in VS Code
- Fast, no infrastructure needed
- Examples: validate_post_metadata, scaffold_content_file
GitHub Copilot Extensions (ignore these):
- Marketplace integrations for vendors (Docker, Stripe, etc.)
- Hosted services, require publishing/approval
- Sparse documentation, feels abandoned
- Not relevant for personal workflow automation
MCP Servers (alternative to consider later):
- Open protocol by Anthropic
- Works across multiple AI clients (Claude Desktop, Cline, etc.)
- Can run locally or remotely
- Better for portability, but more overhead
Our choice: Skills. Local, fast, in-codebase, zero infrastructure. If we need cross-client portability later, we can factor out to MCP servers. Extensions are a dead end.
Why This Matters
This applies the determinism vs semantics paradigm I've been thinking about:
- Semantic layer: Claude understands intent ("blog post about archetypes")
- Deterministic layer: npm script extracts metadata via AI API, fills template perfectly, creates file
- Approval loop: Review, regenerate, or edit before finalizing
Hugo has archetypes (static templates + CLI commands). This is archetypes on steroids - intelligent, context-aware, conversational.
MVP Scope
What Gets Built
- SKILL.md - Teaches Claude when and how to create content
- npm script (npm run new) - Handles the actual generation
- One template - Blog post frontmatter structure
The Flow
User: "create a blog post about archetypes"
↓
Copilot (reads skill): "User wants blog content, use npm run new"
↓
npm script: Calls Anthropic API with tool use for structured output
↓
AI returns: { title, description, tags }
↓
Script: Fills template, creates content/posts/slug.md
↓
Copilot: "Created content/posts/understanding-archetypes.md" (shows for approval)
Technical Details
Skill Structure
# Content Creator
When to use: User asks to create blog posts
Command: npm run new "<user-input>"
Process: Command handles AI extraction → template fill → file creation
npm Script (scripts/new-content.ts)
The script will:
- Take user input as argument
- Call Anthropic API with tool use
- Use tool schema to define structured output: { title, description, tags }
- Fill template with the extracted data
- Write to appropriate content directory
- Return file path
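The entry point is just argument handling; a minimal sketch using only Node built-ins:

// Read the user's request from the command line: npm run new "blog post about X"
const input = process.argv.slice(2).join(' ').trim()
if (!input) {
  console.error('Usage: npm run new "<describe the content you want>"')
  process.exit(1)
}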
Why Tool Use Instead of "JSON Mode"
I was surprised to learn OpenAI markets "Structured Outputs" as revolutionary, but Claude's tool use does the same job: define a schema, get valid JSON back that conforms to it. We're building our own "structured outputs"!
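A minimal sketch of that call with the Anthropic TypeScript SDK; the model string and tool name here are placeholders, not decisions from the plan:

import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic() // reads ANTHROPIC_API_KEY from the environment

const response = await client.messages.create({
  model: 'claude-sonnet-4-5', // placeholder model name
  max_tokens: 1024,
  tools: [{
    name: 'extract_post_metadata', // hypothetical tool name
    description: 'Extract blog post metadata from a content request',
    input_schema: {
      type: 'object',
      properties: {
        title: { type: 'string' },
        description: { type: 'string' },
        tags: { type: 'array', items: { type: 'string' } }
      },
      required: ['title', 'description', 'tags']
    }
  }],
  tool_choice: { type: 'tool', name: 'extract_post_metadata' }, // force the tool, i.e. force structured output
  messages: [{ role: 'user', content: 'blog post about archetypes' }]
})

// The schema-conforming JSON arrives as the tool_use block's input
const block = response.content.find(b => b.type === 'tool_use')
const metadata = block?.type === 'tool_use' ? block.input : null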
Future Expansion
- Multiple content types (projects, notes, docs)
- Multiple templates
- Image generation integration
- Life logging (the bigger vision)
What Makes This Different
Hugo archetypes: Static template + manual command
This: Natural language → intelligent extraction → perfect template → approval loop
It's the "AI as action engine" paradigm - highly reasoned data that can be acted upon in the real world.
The Meta-Realization
GitHub Copilot is built on this exact paradigm.
When I use different tools vs. just text responses in chat, I'm doing the same thing:
- Semantic layer: Interpreting intent from context clues ("create a file" vs "explain this")
- Deterministic layer: Calling different tools (create_file, read_file, replace_string_in_file) vs. just a text response
- Approval loop: Waiting for the user to approve, regenerate, or edit
Copilot is a reasoning layer that routes to deterministic tools based on semantic understanding. Skills teach when to use certain tools.
GitHub Copilot is literally the product version of this.
Content creator skill = teaching Copilot a new tool to use
Npm script = the deterministic backend Copilot calls
This isn't just "a cool trick" - it's the fundamental pattern for AI-native software. AI as action engine with a reasoning layer between human intent and deterministic systems.
Next Steps
- Write the SKILL.md file
- Build the npm script with tool use
- Test with real content creation
- Iterate on prompt and schema
- Add more content types once MVP works
Update: Refined Implementation Architecture
2026-02-09
Final Directory Structure
content-creation/                  # The whole system in one place
├── new-content.ts                 # Main script (generic, handles all types)
├── content-types.ts               # Config & schemas for each content type
├── templates/                     # All templates together
│   ├── blog-post.md
│   ├── note.md
│   ├── project.md
│   └── doc.md
└── README.md                      # How this system works

.github/skills/                    # GitHub Copilot skills directory
└── new-content/                   # Generic skill name
    └── SKILL.md                   # References ../../../content-creation/
Command & Naming
- Command: npm run content (clear and concise)
- Usage: npm run content "blog post about archetypes"
- Script: content-creation/new-content.ts
Extensible Design: Multiple Content Types
The system supports multiple content types from day one using the "multiple tools" approach:
How it works:
- Script defines a tool for each content type
- Single API call with all tools available
- AI picks the right tool based on user input
- Returns type-specific structured metadata
- Script fills appropriate template
Example tools:
tools: [
{
name: "create_blog_post",
schema: { title, slug, description, tags }
},
{
name: "create_note",
schema: { title, topic, tags }
},
{
name: "create_project",
schema: { title, description, tech, status }
}
]
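Dispatch falls out of the response: whichever tool the model calls names the content type. A hypothetical sketch, assuming the create_<type> naming convention above:

// The model's tool choice doubles as type detection
const toolUse = response.content.find(b => b.type === 'tool_use')
if (toolUse && toolUse.type === 'tool_use') {
  const type = toolUse.name.replace('create_', '').replace(/_/g, '-') // 'create_blog_post' → 'blog-post'
  const metadata = toolUse.input // type-specific structured metadata
  // ...look up contentTypes[type], fill its template, write to its outputDir
}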
Config File Structure (content-types.ts)
export const contentTypes = {
'blog-post': {
template: 'templates/blog-post.md',
outputDir: 'content/posts',
schema: { title, slug, description, tags }
},
'note': {
template: 'templates/note.md',
outputDir: 'content/notes',
schema: { title, topic, tags }
},
'project': {
template: 'templates/project.md',
outputDir: 'content/projects',
schema: { title, description, tech, status }
}
}
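With this config, the tools array can be derived rather than hand-written, so adding a type really does require no code changes. A sketch, assuming each schema field holds a real JSON Schema object rather than the shorthand shown above:

import { contentTypes } from './content-types'

// One tool per content type, generated from config
const tools = Object.entries(contentTypes).map(([type, cfg]) => ({
  name: `create_${type.replace(/-/g, '_')}`,
  description: `Create a new ${type} and extract its metadata`,
  input_schema: cfg.schema // assumes schema is a JSON Schema object
}))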
Template Format
Using .md files to match the existing content structure:
---json
{
"title": "{{title}}",
"slug": "{{slug}}",
"date": "{{date}}",
"author": ["Jay Griffin"],
"type": "post",
"description": "{{description}}",
"tags": {{tags}}
}
---
# {{title}}
[Content starts here]
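Filling the template is then one substitution pass. A minimal sketch (fillTemplate is a hypothetical helper); strings are inserted raw because the template already quotes them, while arrays like tags are JSON-encoded:

function fillTemplate(template: string, data: Record<string, unknown>): string {
  // naive: strings containing quotes would need escaping for the JSON frontmatter
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    typeof data[key] === 'string' ? (data[key] as string) : JSON.stringify(data[key]))
}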
Why This Architecture
Modular & Self-Contained:
- Everything content-related lives in one place
- Easy to understand as a cohesive system
- Could move this whole folder to another project
- Self-documenting logical unit
Extensible:
- Add new content type = add template + schema definition
- No code changes needed for new types
- Config-driven approach
Clean Tool Use Pattern:
- One API call handles type detection + metadata extraction
- AI naturally picks the right tool
- Follows "tool use = structured outputs" paradigm
Future-Proof:
- Built for expansion from day one
- Multiple content types supported
- Easy to add image generation, life logging, etc.
Files to Create
- .github/skills/new-content/SKILL.md - Generic skill for all content types
- content-creation/new-content.ts - Smart router script
- content-creation/content-types.ts - Extensible config
- content-creation/templates/blog-post.md - Blog post template
- content-creation/templates/note.md - Note template
- content-creation/README.md - System documentation
- Update package.json - add "content": "tsx content-creation/new-content.ts"
The Meta Moment
While planning this implementation, GitHub Copilot read this very document and used it to design the system described in it. The human asked: "do you want to see how meta you can be by reading a post about your new skill and then giving yourself that skill?"
Final Architecture: Separate Files + Manifest System
2026-02-09 - Refined after deep discussion
After working through implementation details, we arrived at a cleaner, more database-ready architecture that avoids string manipulation entirely.
Core Principles
- No String Manipulation - Work with pure data structures
- Metadata Separate from Content - JSON files, not frontmatter
- UUID-Based Linking - Filenames can change, IDs don't
- Manifest as Cache - Regeneratable index, not source of truth
- Database-Ready - Designed for easy migration to Postgres later
File Structure
content/posts/
├── index.json                        ← Manifest (generated, can be rebuilt)
├── understanding-archetypes.md       ← Pure markdown, no frontmatter
├── understanding-archetypes.json     ← Metadata with UUID
├── semantic-vs-deterministic.md
└── semantic-vs-deterministic.json

content-creation/
├── new-content.ts                    ← Main script
├── rebuild-manifest.ts               ← Manifest regeneration utility
├── templates/
│   ├── blog-post.md
│   └── note.md
└── README.md
Metadata File Format
Each post has a JSON file with UUID and metadata:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"slug": "understanding-archetypes",
"title": "Understanding Archetypes",
"description": "Deep dive into archetypal patterns",
"tags": ["psychology", "patterns"],
"author": ["Jay Griffin"],
"type": "post",
"date": "2026-02-09T00:00:00Z"
}
Key points:
- UUID is the permanent identifier
- Slug is for URLs/filenames (can change)
- All metadata in one structured file
- No parsing needed - direct JSON.parse()
Manifest System
The manifest is an index that maps UUIDs to file paths:
{
"posts": {
"550e8400-e29b-41d4-a716-446655440000": {
"contentFile": "understanding-archetypes.md",
"metadataFile": "understanding-archetypes.json"
}
}
}
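Reading through the manifest is a two-step lookup, for example:

import fs from 'node:fs'

// Resolve a post's files by its permanent UUID
const manifest = JSON.parse(fs.readFileSync('content/posts/index.json', 'utf8'))
const entry = manifest.posts['550e8400-e29b-41d4-a716-446655440000']
// entry.contentFile  → 'understanding-archetypes.md'
// entry.metadataFile → 'understanding-archetypes.json'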
Critical insight: Manifest is regeneratable
// npm run rebuild-manifest
import fs from 'node:fs'
import path from 'node:path'

function rebuildManifest() {
  const dir = 'content/posts'
  const manifest: { posts: Record<string, { contentFile: string; metadataFile: string }> } = { posts: {} }

  // Scan all metadata files (they have the UUID)
  const metaFiles = fs.readdirSync(dir)
    .filter(f => f.endsWith('.json') && f !== 'index.json')

  for (const file of metaFiles) {
    const meta = JSON.parse(fs.readFileSync(path.join(dir, file), 'utf8'))
    manifest.posts[meta.id] = {
      contentFile: `${meta.slug}.md`,
      metadataFile: file
    }
  }

  fs.writeFileSync(path.join(dir, 'index.json'), JSON.stringify(manifest, null, 2))
}
Benefits:
- Git conflict in manifest? Just regenerate it
- Manifest corrupted? Rebuild from files
- Can even .gitignore the manifest and generate it on build
- Source of truth is in the files themselves
The Script: Zero String Manipulation
import fs from 'node:fs'
import { randomUUID } from 'node:crypto'

// Get AI metadata (aiResponse is the tool_use input from the Anthropic call)
const metadata = {
  id: randomUUID(),
  slug: aiResponse.slug,
  title: aiResponse.title,
  description: aiResponse.description,
  tags: aiResponse.tags,
  date: new Date().toISOString(),
  author: ["Jay Griffin"],
  type: "post"
}

// Write metadata - pure JSON
fs.writeFileSync(
  `content/posts/${metadata.slug}.json`,
  JSON.stringify(metadata, null, 2)
)

// Write content - clean markdown
const content = `# ${metadata.title}\n\n[Your content here]`
fs.writeFileSync(
  `content/posts/${metadata.slug}.md`,
  content
)

// Update manifest (helper lives alongside rebuildManifest)
updateManifest(metadata.id, {
  contentFile: `${metadata.slug}.md`,
  metadataFile: `${metadata.slug}.json`
})
No parsing, no regex, no string replacement. Just data.
Database Migration Path
When ready to move to Postgres:
CREATE TABLE posts (
id UUID PRIMARY KEY, -- Use existing UUID
slug TEXT UNIQUE, -- Use existing slug
title TEXT,
description TEXT,
tags TEXT[],
content TEXT,
created_at TIMESTAMP,
updated_at TIMESTAMP
);
-- Migration is just reading JSON files
INSERT INTO posts (id, slug, title, ...)
SELECT
metadata.id,
metadata.slug,
metadata.title,
...
FROM json_files;
No data transformation needed - UUIDs are already there.
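In script form the migration is a few lines; a sketch using node-postgres (the pg client and the posts table from the DDL above are the assumptions here):

import fs from 'node:fs'
import path from 'node:path'
import { Client } from 'pg'

const dir = 'content/posts'
const client = new Client() // connection settings come from the PG* env vars
await client.connect()

// Each metadata file plus its markdown sibling becomes one row
for (const file of fs.readdirSync(dir).filter(f => f.endsWith('.json') && f !== 'index.json')) {
  const meta = JSON.parse(fs.readFileSync(path.join(dir, file), 'utf8'))
  const content = fs.readFileSync(path.join(dir, `${meta.slug}.md`), 'utf8')
  await client.query(
    'INSERT INTO posts (id, slug, title, description, tags, content, created_at) VALUES ($1, $2, $3, $4, $5, $6, $7)',
    [meta.id, meta.slug, meta.title, meta.description, meta.tags, content, meta.date]
  )
}

await client.end()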
Performance: Why JSON Files Win (For Now)
For a static Next.js blog:
JSON Files + Static Generation:
- Build time: Read all files once (~1-5ms per file)
- Runtime: Serve pre-generated HTML (~10-50ms)
- Zero database queries
- CDN-friendly, infinitely cacheable
vs. Postgres:
- Runtime: DB query (~50-200ms) per request
- Connection overhead
- Requires infrastructure
When you'd switch to Postgres:
- User-generated content (comments, likes)
- Real-time updates
- Complex relational queries
- Collaborative features
For content YOU create, JSON files are actually faster. The manifest system makes it database-ready when you need it.
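Concretely, build-time loading is synchronous file reads through the manifest. A hedged sketch of a loader that Next.js data-fetching code could call (loadAllPosts is hypothetical):

import fs from 'node:fs'
import path from 'node:path'

const dir = 'content/posts'

// Read every post once at build time: metadata + content, via the manifest
export function loadAllPosts() {
  const manifest = JSON.parse(fs.readFileSync(path.join(dir, 'index.json'), 'utf8'))
  return Object.values(manifest.posts).map((files: any) => ({
    meta: JSON.parse(fs.readFileSync(path.join(dir, files.metadataFile), 'utf8')),
    content: fs.readFileSync(path.join(dir, files.contentFile), 'utf8')
  }))
}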
The Database Question: When Should We Actually Migrate?
The key insight: We've already structured data like a database (UUIDs, normalized fields, indexes). We've paid the conceptual complexity cost. So what are files actually giving us?
What Files Provide (vs Database):
- Git-based workflow - version control, diffs, commit history for content
- Zero infrastructure - no connection strings, no migrations, no ops overhead
- Portability - content is just files, trivial to backup/move
- Simplicity - no server to manage, no authentication to configure
That's it. If these four things aren't valuable, there's no reason to maintain the file-based system.
Migrate to Database When:
- Git workflow for content isn't valuable to you
- You need complex queries (filtering, sorting across multiple fields)
- Manifest management feels like busywork
- Multiple people need to edit content concurrently
- You want dynamic features files can't support
Stay with Files When:
- Git diffs for content changes are useful
- You value committing content alongside code
- Static generation performance matters
- Zero infrastructure is a feature, not a limitation
- You're the primary/only content creator
The Decision Framework:
Ask yourself: "What does git-based content management give me that's worth maintaining the file system complexity?"
If the answer is "not much," migrate to a database. If version control and simplicity are still valuable, keep files. The architecture ensures migration is straightforward whenever you're ready - UUIDs and JSON structure work with any database.
Hybrid Option:
- Content → Database (Postgres, MongoDB, or headless CMS)
- Code/templates → Git
- Best of both worlds, but loses content-in-git benefits
Bottom line: You've designed for easy migration. The file system isn't a trap. Use it while it's useful, migrate when it's not.
Updated Tool Flow
User: "create a blog post about archetypes"
↓
GitHub Copilot (via skill): npm run content "blog post about archetypes"
↓
Script:
1. Generate UUID
2. Call Anthropic API with tool use (structured output)
3. Write metadata JSON (no parsing needed)
4. Write content MD (pure markdown)
5. Update manifest index
↓
Output:
- content/posts/understanding-archetypes.md
- content/posts/understanding-archetypes.json
- content/posts/index.json (updated)
↓
GitHub Copilot: Shows files for approval
Pure data operations. No string manipulation. Database-ready.
Meta: Automating This Very Workflow
Current workflow:
- Chat with AI throughout the day
- End session by requesting an md file summarizing thoughts
- Download to content/md/ in the repo
- Manually add frontmatter with metadata
- Commit
This could be automated too:
File watcher on content/md/ that triggers when new file is added:
// watches content/md/
// on new file: calls AI API
// extracts: title, description, tags, date from file content
// adds frontmatter to top of file
// done
Even better - make it a skill: "Copilot, I just downloaded this artifact to content/md/, add the frontmatter"
Or just have the file watcher auto-trigger so you never think about it.
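A sketch of that watcher with Node built-ins; extractMetadata stands in for the same Anthropic tool-use call the content script makes:

import fs from 'node:fs'
import path from 'node:path'

const watched = 'content/md'

fs.watch(watched, async (event, filename) => {
  // note: fs.watch can fire multiple events per file on some platforms
  if (!filename || !filename.endsWith('.md')) return
  const filePath = path.join(watched, filename)
  if (!fs.existsSync(filePath)) return // rename event for a deleted file
  const body = fs.readFileSync(filePath, 'utf8')
  if (body.startsWith('---')) return // frontmatter already present
  const meta = await extractMetadata(body) // hypothetical: same tool-use call as the content script
  fs.writeFileSync(filePath, `---json\n${JSON.stringify(meta, null, 2)}\n---\n\n${body}`)
})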