No code. No prior automation experience. A running agent in 2 weeks.
Every website with more than 12 months of published content has the same invisible problem: pages that are quietly losing rankings, impressions, and clicks — and nobody's noticed yet. Fixing those pages is the highest-ROI SEO action most teams never take systematically, because finding them manually takes hours. This workshop solves that. In two focused weeks, you'll build a Content Freshness Agent — a real, running Python script that connects to your Google Search Console, detects decaying content automatically, scores each page by fix priority, and drops a ranked action list into Google Sheets every week. No manual checking. No spreadsheet archaeology. Just a clear list of what to fix, in what order, waiting for you on Monday morning.
A live Google Sheet, updated every Monday, with your decaying content ranked by fix priority. This is what you open on Monday morning instead of manually digging through GSC.
| Page URL | Impressions (90d) | Clicks (90d) | Position | Decay Signal | Priority | Action |
|---|---|---|---|---|---|---|
| /blog/ai-seo-strategy | 4,820 | 38 | 14.2 | CTR collapse | 9 / 10 | Rewrite title + intro |
| /blog/content-audit-guide | 3,100 | 61 | 8.7 | Position drop | 8 / 10 | Update stats + examples |
| /blog/keyword-research-2023 | 2,240 | 14 | 22.1 | Stale date signal | 6 / 10 | Republish + expand |
| /blog/seo-checklist | 890 | 9 | 31.4 | Traffic decline | 3 / 10 | Consolidate or redirect |
A real Python script connected to your GSC, scoring your content, writing to Google Sheets — automatically, every week.
You'll know how to direct Claude Code to build, debug, and extend agents. A transferable skill for every automation you build next.
Every Monday: a prioritised list of content to fix, with decay signals and action recommendations — generated without you touching it.
A scheduled Python script that runs every Monday, connects to your Google Search Console, analyses performance trends for every page in your Search Console data, uses Claude to detect decay patterns and generate fix recommendations, and writes a prioritised output to Google Sheets — all without you doing anything.
Week 1 has one goal: get data flowing. By Friday, your agent is connected to your GSC, pulling real data, and producing its first (rough) output. The code doesn't need to be perfect — it needs to run.
Walkthrough of the agent architecture. Install Claude Code and set up your Python environment together. Cover the core workflow: describe → generate → run → paste error → fix. No prior coding knowledge assumed.
Step-by-step guide (with video) to creating your Google Cloud project, enabling Search Console API + Sheets API, downloading credentials.json, and placing it correctly. Claude Code guides you through any errors.
Using Claude Code prompts provided in the workshop guide, build the authentication module and the GSC data ingestion script. Goal: print your first 10 rows of real GSC data in the terminal. Paste errors back in and fix together.
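To give you a feel for what Claude Code will generate at this step, here's a minimal sketch of a GSC query module. The function and variable names are illustrative, not the exact code from the workshop guide; it assumes the `google-api-python-client` library and the OAuth credentials you set up on Day 2.

```python
# Sketch of the Day 3 GSC ingestion step (illustrative names; assumes
# google-api-python-client and a credentialled `service` object from Day 2).

def parse_gsc_rows(response: dict) -> list[dict]:
    """Flatten a Search Analytics API response into one dict per page."""
    rows = []
    for row in response.get("rows", []):
        rows.append({
            "page": row["keys"][0],          # first dimension = page URL
            "clicks": row["clicks"],
            "impressions": row["impressions"],
            "ctr": row["ctr"],
            "position": row["position"],
        })
    return rows

def fetch_gsc_data(service, site_url: str, start: str, end: str) -> list[dict]:
    """Query 90 days of per-page performance (needs live credentials to run)."""
    body = {
        "startDate": start,        # e.g. "2024-01-01"
        "endDate": end,
        "dimensions": ["page"],
        "rowLimit": 1000,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    return parse_gsc_rows(response)
```

Print the first ten entries of `fetch_gsc_data(...)` and you've hit the Day 3 goal: real GSC data in your terminal.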
Group troubleshooting session. Everyone shares their terminal output — working or not. Common errors diagnosed live. By end of session, every participant has GSC data flowing. No one left behind.
Your script connects to GSC and prints live performance data for your site. Foundation for everything in Week 2.
Week 2 turns raw data into intelligence. You'll add the decay detection logic, the Claude scoring layer, the Google Sheets output, and finally the scheduler — so the agent runs without you every week from here on.
Prompt Claude Code to write the decay detection logic (position drops, CTR collapse, impression decline) and the Claude API scoring module. The agent now flags and scores every decaying page with a priority 1–10 and a plain-English action recommendation.
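The decay checks themselves are simple comparisons between the current 90-day window and the one before it. Here's one way they might look — a sketch with illustrative thresholds; the version you build with Claude Code will be tuned to your site.

```python
# Sketch of the decay detection logic (thresholds are illustrative assumptions).

def detect_decay(current: dict, previous: dict) -> list[str]:
    """Compare this 90-day window against the prior one and flag decay signals."""
    signals = []
    # Position drop: average rank slipped more than 3 spots (higher = worse in GSC).
    if current["position"] - previous["position"] > 3:
        signals.append("Position drop")
    # CTR collapse: click-through rate fell by 40% or more.
    if previous["ctr"] > 0 and current["ctr"] < previous["ctr"] * 0.6:
        signals.append("CTR collapse")
    # Impression decline: 30%+ fewer impressions than the prior window.
    if previous["impressions"] > 0 and current["impressions"] < previous["impressions"] * 0.7:
        signals.append("Impression decline")
    return signals
```

Pages that return one or more signals are the ones the Claude scoring module then ranks 1–10 and annotates with an action recommendation.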
The default scoring prompt is a starting point — not the final word. This async session shows you how to customise the scoring criteria for your specific site and business: what decay patterns matter most, what pages to deprioritise, how to weight impact vs. effort.
Prompt Claude Code to write the Sheets output module (new tab per week, conditional colour formatting on priority scores) and the main orchestration script. Set up the cron job or GitHub Actions workflow to run every Monday at 8am automatically.
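For orientation, here's a sketch of the Sheets output step. It assumes the `gspread` library and a service-account credential from your Google Cloud project; function names and the tab-naming scheme are illustrative, and conditional colour formatting is left to the workshop build.

```python
# Sketch of the Day 8 Sheets output module (illustrative; assumes gspread
# and a service-account credential from your Google Cloud project).
from datetime import date

HEADER = ["Page URL", "Impressions (90d)", "Clicks (90d)", "Position",
          "Decay Signal", "Priority", "Action"]

def build_sheet_rows(scored_pages: list[dict]) -> list[list]:
    """Sort scored pages by priority (highest first) and shape them for Sheets."""
    ordered = sorted(scored_pages, key=lambda p: p["priority"], reverse=True)
    return [HEADER] + [[p["page"], p["impressions"], p["clicks"], p["position"],
                        p["signal"], p["priority"], p["action"]] for p in ordered]

def write_weekly_tab(spreadsheet, scored_pages):
    """Create this week's tab and write the ranked list (needs live credentials)."""
    tab = spreadsheet.add_worksheet(title=f"Week of {date.today():%Y-%m-%d}",
                                    rows=len(scored_pages) + 1, cols=len(HEADER))
    tab.update(build_sheet_rows(scored_pages))
```

The main orchestration script then just chains the pieces: fetch → detect → score → write.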
Every participant runs their agent live and shares their Google Sheet output. Group review of results and scoring quality. Introduction to what the next workshop builds on top of this foundation — and how to extend the agent yourself.
A real Python agent running every Monday: pulling GSC data, detecting decay, scoring with Claude, and writing a prioritised fix list to Google Sheets.
You describe what you want. Claude Code writes it. No blank file, no manual debugging — you direct, it builds.
Free, official, and already connected to your site. The GSC API gives clicks, impressions, CTR, and position by page.
Evaluates each decaying page, determines severity and fix type, and writes the action recommendation in plain English.
A new tab every Monday. Priority-colour-coded. Shared with your team or used solo. No dashboard setup required.
Run locally or in Google Colab (free, browser-based, no install). Colab is recommended for absolute beginners.
Mac/Linux cron for local runs. GitHub Actions for cloud-based scheduling — free tier is sufficient for weekly runs.
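For the local option, the whole schedule is a single crontab line. The script path below is an illustrative assumption — yours will be wherever your agent lives.

```shell
# Illustrative crontab entry: run the agent every Monday at 08:00 local time.
# Edit with `crontab -e`; the script path is an assumption.
0 8 * * 1 /usr/bin/python3 /home/you/freshness-agent/main.py >> /home/you/freshness-agent/agent.log 2>&1
```

The GitHub Actions equivalent uses a `schedule` trigger with the same five-field cron syntax (note: Actions cron runs on UTC).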
This workshop is the entry point. It stands alone — but it's also the foundation everything else builds on.
One agent. Two weeks. Running in your stack.
Strategy, architecture, and AI-assisted workflows.
Full infrastructure. Self-improving feedback loop.
Cohort size capped at 10 · Spots fill fast · No refunds after build session 1