Automate a weekly pipeline report with HubSpot and Slack using an agent skill
Prerequisites
- Claude Code, Cursor, or another AI coding agent that supports skills
- A HubSpot private app token stored as an environment variable (`HUBSPOT_TOKEN`)
- A Slack bot token stored as an environment variable (`SLACK_BOT_TOKEN`)
- A Slack channel ID for delivery (`SLACK_CHANNEL_ID`)
Overview
Instead of building a persistent automation, you can create an agent skill — a reusable instruction set that tells your AI coding agent how to fetch pipeline data from HubSpot, calculate metrics, and post a formatted report to Slack. This works with Claude Code, and the open Agent Skills standard means the same skill can work across compatible tools.
This approach is ideal for teams that want to iterate quickly on report format without maintaining automation infrastructure.
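The end result is a small, version-controlled directory. As a sketch, the layout you'll build over the next three steps looks like:

```
.claude/skills/pipeline-report/
├── SKILL.md
└── scripts/
    └── report.py
```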
Step 1: Create the skill directory
Set up a skill directory in your project:
```bash
mkdir -p .claude/skills/pipeline-report/scripts
```

Step 2: Write the SKILL.md file
Create `.claude/skills/pipeline-report/SKILL.md`:

```markdown
---
name: pipeline-report
description: Generates a weekly pipeline report from HubSpot and posts it to Slack. Fetches all active deals, calculates pipeline metrics, and sends a formatted Block Kit message.
disable-model-invocation: true
allowed-tools: Bash(python *)
---

Generate a weekly pipeline report by running the bundled script:

1. Run: `python $SKILL_DIR/scripts/report.py`
2. Review the output for any errors
3. Confirm the report was posted to Slack
```

Key configuration:

- `disable-model-invocation: true`: this skill has external side effects (posting to Slack), so only you can trigger it, with `/pipeline-report`
- `allowed-tools: Bash(python *)`: restricts the skill to running Python scripts, preventing unintended shell commands
Step 3: Write the report script
Create `.claude/skills/pipeline-report/scripts/report.py`:

```python
#!/usr/bin/env python3
"""
Weekly Pipeline Report: HubSpot → Slack
Fetches active deals, calculates metrics, posts to Slack.
"""
import os
import sys
from datetime import datetime, timezone

try:
    import requests
    from slack_sdk import WebClient
except ImportError:
    # Install missing dependencies into the current interpreter, then retry.
    import subprocess
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", "requests", "slack_sdk"])
    import requests
    from slack_sdk import WebClient

# --- Config ---
HUBSPOT_TOKEN = os.environ.get("HUBSPOT_TOKEN")
SLACK_TOKEN = os.environ.get("SLACK_BOT_TOKEN")
SLACK_CHANNEL = os.environ.get("SLACK_CHANNEL_ID")

if not all([HUBSPOT_TOKEN, SLACK_TOKEN, SLACK_CHANNEL]):
    print("ERROR: Missing required env vars: HUBSPOT_TOKEN, SLACK_BOT_TOKEN, SLACK_CHANNEL_ID")
    sys.exit(1)

HEADERS = {"Authorization": f"Bearer {HUBSPOT_TOKEN}", "Content-Type": "application/json"}

# --- Fetch pipeline stages ---
print("Fetching pipeline stages...")
stages_resp = requests.get("https://api.hubapi.com/crm/v3/pipelines/deals", headers=HEADERS)
stages_resp.raise_for_status()

stage_map = {}
for pipeline in stages_resp.json()["results"]:
    for stage in pipeline["stages"]:
        stage_map[stage["id"]] = stage["label"]

# --- Fetch all deals ---
print("Fetching deals...")
all_deals = []
after = 0
while True:
    resp = requests.post(
        "https://api.hubapi.com/crm/v3/objects/deals/search",
        headers=HEADERS,
        json={
            "filterGroups": [{"filters": [{"propertyName": "pipeline", "operator": "EQ", "value": "default"}]}],
            "properties": ["dealname", "amount", "dealstage", "closedate", "hs_lastmodifieddate"],
            "sorts": [{"propertyName": "amount", "direction": "DESCENDING"}],
            "limit": 100,
            "after": after,
        },
    )
    resp.raise_for_status()
    data = resp.json()
    all_deals.extend(data["results"])
    # Follow HubSpot's paging cursor until no "next" page remains.
    if "paging" in data and "next" in data["paging"]:
        after = data["paging"]["next"]["after"]
    else:
        break

print(f"Found {len(all_deals)} deals")

# --- Calculate metrics ---
total_value = 0
by_stage = {}
stale_deals = []
for deal in all_deals:
    props = deal["properties"]
    amount = float(props.get("amount") or 0)
    total_value += amount

    stage_id = props.get("dealstage", "unknown")
    stage_name = stage_map.get(stage_id, stage_id)
    by_stage[stage_name] = by_stage.get(stage_name, 0) + 1

    last_mod = datetime.fromisoformat(props["hs_lastmodifieddate"].replace("Z", "+00:00"))
    days_stale = (datetime.now(timezone.utc) - last_mod).days
    if days_stale > 14:
        stale_deals.append({"name": props["dealname"], "amount": amount, "days": days_stale})

stale_deals.sort(key=lambda d: d["days"], reverse=True)

# --- Build Slack message ---
stage_text = "\n".join(f"• {name}: {count} deals" for name, count in by_stage.items())
blocks = [
    {"type": "header", "text": {"type": "plain_text", "text": "📊 Weekly Pipeline Report"}},
    {"type": "section", "fields": [
        {"type": "mrkdwn", "text": f"*Total Pipeline*\n${total_value:,.0f}"},
        {"type": "mrkdwn", "text": f"*Active Deals*\n{len(all_deals)}"},
    ]},
    {"type": "divider"},
    {"type": "section", "text": {"type": "mrkdwn", "text": f"*Deals by Stage*\n{stage_text}"}},
]
if stale_deals:
    stale_text = "\n".join(f"• {d['name']} — ${d['amount']:,.0f} ({d['days']}d)" for d in stale_deals[:5])
    blocks.append({"type": "divider"})
    blocks.append({"type": "section", "text": {"type": "mrkdwn", "text": f"⚠️ *At Risk (14+ days stale)*\n{stale_text}"}})

blocks.append({"type": "context", "elements": [
    {"type": "mrkdwn", "text": f"Generated {datetime.now().strftime('%A, %B %d, %Y')}"}
]})

# --- Post to Slack ---
print("Posting to Slack...")
slack = WebClient(token=SLACK_TOKEN)
result = slack.chat_postMessage(channel=SLACK_CHANNEL, text="Weekly Pipeline Report", blocks=blocks, unfurl_links=False)
print(f"Posted to #{SLACK_CHANNEL}: {result['ts']}")
```

Step 4: Run the skill
Invoke it from your AI coding agent:
```bash
# Claude Code
/pipeline-report

# Or run the script directly
python .claude/skills/pipeline-report/scripts/report.py
```

The agent will execute the script, and you'll see output confirming the report was posted.
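Before wiring up real credentials, the metrics logic from Step 3 is easy to sanity-check offline. Here is a minimal sketch of the same staleness calculation the script performs, run against hand-made records shaped like HubSpot deal properties (the deal names and amounts are invented):

```python
from datetime import datetime, timedelta, timezone

# Sample records shaped like HubSpot deal properties (hypothetical data).
now = datetime.now(timezone.utc)
deals = [
    {"dealname": "Acme renewal", "amount": "12000", "hs_lastmodifieddate": (now - timedelta(days=20)).isoformat()},
    {"dealname": "Globex pilot", "amount": "4500", "hs_lastmodifieddate": (now - timedelta(days=3)).isoformat()},
]

stale = []
for props in deals:
    # Same parsing and threshold as report.py: 14+ days without modification.
    last_mod = datetime.fromisoformat(props["hs_lastmodifieddate"].replace("Z", "+00:00"))
    days = (now - last_mod).days
    if days > 14:
        stale.append((props["dealname"], days))

print(stale)  # [('Acme renewal', 20)]
```

Only the 20-day-old deal crosses the threshold; the 3-day-old one is ignored, matching what the report's "At Risk" section would show.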
Step 5: Schedule it (optional)
Option A: Claude Desktop Cowork (scheduled tasks)
If you use Claude Desktop (Cowork), you can schedule this as a recurring task:
- Open Cowork and go to the Schedule tab in the sidebar
- Click + New task
- Set the description to: "Run `/pipeline-report` to generate and post the weekly pipeline report"
- Set the frequency to weekly, on Monday mornings
- Set the working folder to your project directory
Cowork scheduled tasks only run while your computer is awake and the Claude Desktop app is open. If your machine is asleep on Monday morning, the task runs when the app next opens.
Option B: Cron + CLI
For reliable server-side scheduling, use cron or GitHub Actions to run the script directly:
```bash
# crontab -e
0 8 * * 1 cd /path/to/project && python .claude/skills/pipeline-report/scripts/report.py
```

Option C: GitHub Actions
```yaml
name: Weekly Pipeline Report

on:
  schedule:
    - cron: '0 13 * * 1'
  workflow_dispatch: {}

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install requests slack_sdk
      - run: python .claude/skills/pipeline-report/scripts/report.py
        env:
          HUBSPOT_TOKEN: ${{ secrets.HUBSPOT_TOKEN }}
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
          SLACK_CHANNEL_ID: ${{ secrets.SLACK_CHANNEL_ID }}
```

When to use this approach
- You want a report now without setting up n8n, Zapier, or Make
- You're iterating on the format — just edit the script and re-run
- You want ad-hoc variants — "run this for Q1 only" or "show me just Enterprise deals"
- You want the report logic version-controlled alongside your code
- You're already using an AI coding agent and want to extend it with automation
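To illustrate the ad-hoc-variants point: a hypothetical Q1-only run would change nothing but the search payload, adding a `closedate` range filter alongside the existing pipeline filter. The date values below are placeholders; this sketch assumes HubSpot's `BETWEEN` operator with `highValue` for the range:

```python
# Hypothetical variant: restrict the HubSpot deal search to Q1 close dates.
# Same payload shape as report.py; only the filters change.
q1_payload = {
    "filterGroups": [{
        "filters": [
            {"propertyName": "pipeline", "operator": "EQ", "value": "default"},
            {"propertyName": "closedate", "operator": "BETWEEN",
             "value": "2025-01-01", "highValue": "2025-03-31"},
        ]
    }],
    "properties": ["dealname", "amount", "dealstage", "closedate"],
    "limit": 100,
}

print(q1_payload["filterGroups"][0]["filters"][1]["operator"])  # BETWEEN
```

In practice you would just ask the agent for the variant ("run this for Q1 only") and let it edit the payload, rather than maintaining a separate script.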
When to graduate to a dedicated tool
- You need reliable scheduling without depending on your machine being awake
- You want visual debugging and execution history
- Multiple non-technical team members need to modify the workflow
- You need error handling and retry logic beyond what a script provides
Because this skill uses the open Agent Skills standard, the same SKILL.md and script work across Claude Code, Cursor, and other compatible tools. The script itself is just Python — it runs anywhere.
Need help implementing this?
We build and optimize automation systems for mid-market businesses. Let's discuss the right approach for your team.