Snapshot your HubSpot pipeline weekly in Google Sheets using n8n
Install this workflow
Download the n8n workflow JSON and import it into your n8n instance.
pipeline-snapshot.n8n.json
Prerequisites
- n8n instance — either n8n cloud or self-hosted
- HubSpot private app token with the crm.objects.deals.read and crm.schemas.deals.read scopes
- Google account with a Google Sheet prepared for snapshots
- n8n credentials set up for both HubSpot and Google Sheets (OAuth2)
Why n8n?
n8n is a strong fit for pipeline snapshots: the Code node handles all the aggregation logic in a few lines of JavaScript, and the Google Sheets node appends rows natively, so you don't have to hand-write the Sheets API calls. The visual workflow makes it easy to add stages or metrics later. Self-hosted n8n runs this for free, and even n8n cloud uses just 4 executions per month for a weekly snapshot.
How it works
- Schedule Trigger fires every Monday at 8 AM
- HTTP Request node fetches pipeline stage definitions from HubSpot
- HTTP Request node searches for all active deals with pagination
- Code node aggregates total value, deal count, and per-stage breakdowns into a single row
- Google Sheets node appends the snapshot row to your spreadsheet
Step 1: Prepare the Google Sheet
Create a new Google Sheet and set up headers in the first row:
Date | Total Value | Deal Count | Appointment Scheduled (Count) | Appointment Scheduled (Value) | Qualified (Count) | Qualified (Value) | Proposal (Count) | Proposal (Value) | Closed Won (Count) | Closed Won (Value)

Adjust the stage columns to match your actual HubSpot pipeline stages. The automation will append a new row each week with the snapshot data.
Keep the stage columns in the same order as your HubSpot pipeline. If you add or rename stages later, update the headers and the Code node mapping to match.
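The column layout can be sanity-checked in code. This sketch uses the four example stages from the header row above (swap in your real stage labels) and builds the header array in the same order the Code node will later emit values:

```javascript
// Build the header row: Date, Total Value, Deal Count, then a
// (Count, Value) pair per pipeline stage, in pipeline order.
// These stage names are the examples from the table above.
const stages = ['Appointment Scheduled', 'Qualified', 'Proposal', 'Closed Won'];

const headers = ['Date', 'Total Value', 'Deal Count'];
for (const stage of stages) {
  headers.push(`${stage} (Count)`, `${stage} (Value)`);
}

// 3 fixed columns + 2 per stage = 11 columns for a 4-stage pipeline.
console.log(headers.length); // 11
```

If the length printed here doesn't match your sheet's header count, the snapshot row will land in the wrong columns.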
Step 2: Create the workflow and schedule trigger
Open n8n and create a new workflow. Add a Schedule Trigger node:
- Trigger interval: Weeks
- Day of week: Monday
- Hour: 8
- Minute: 0
- Timezone: Set to your team's timezone (n8n cloud defaults to UTC)
Step 3: Fetch deal stages from HubSpot
Add an HTTP Request node to get stage ID-to-label mappings:
- Method: GET
- URL:
https://api.hubapi.com/crm/v3/pipelines/deals
- Authentication: Predefined Credential Type → HubSpot API
This returns all pipelines and their stages. You'll use the id and label from results[].stages[] to translate stage IDs in the deal data.
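Flattening that response into a lookup works as follows. This sketch assumes the response shape described above (results[].stages[] with id and label) and uses a hard-coded, truncated sample payload in place of the live API call:

```javascript
// Truncated sample of the /crm/v3/pipelines/deals response shape.
const response = {
  results: [
    {
      id: 'default',
      label: 'Sales Pipeline',
      stages: [
        { id: 'appointmentscheduled', label: 'Appointment Scheduled' },
        { id: 'closedwon', label: 'Closed Won' },
      ],
    },
  ],
};

// Flatten every pipeline's stages into an id → label map.
const stageMap = {};
for (const pipeline of response.results) {
  for (const stage of pipeline.stages) {
    stageMap[stage.id] = stage.label;
  }
}

console.log(stageMap.closedwon); // "Closed Won"
```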
Step 4: Search for active deals
Add a second HTTP Request node to pull all deals using the HubSpot Search API:
- Method: POST
- URL:
https://api.hubapi.com/crm/v3/objects/deals/search
- Body content type: JSON
- Body:
```json
{
  "filterGroups": [
    {
      "filters": [
        {
          "propertyName": "pipeline",
          "operator": "EQ",
          "value": "default"
        }
      ]
    }
  ],
  "properties": [
    "dealname", "amount", "dealstage", "pipeline",
    "closedate", "createdate"
  ],
  "sorts": [
    { "propertyName": "amount", "direction": "DESCENDING" }
  ],
  "limit": 100
}
```

The HubSpot Search API returns a maximum of 100 results per request. If your pipeline has more than 100 active deals, add an IF node that checks for $json.paging.next.after and loops back to the HTTP Request with the after parameter set.
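Outside n8n, that pagination pattern can be sketched as a plain fetch loop. HUBSPOT_TOKEN is a placeholder for your private app token, and the body argument stands in for the search payload shown above:

```javascript
// Sketch: page through the HubSpot Search API using the
// paging.next.after cursor. Each response returns at most 100 deals;
// keep requesting until no "paging.next" is present.
async function fetchAllDeals(body) {
  const deals = [];
  let after; // undefined on the first request
  do {
    const res = await fetch('https://api.hubapi.com/crm/v3/objects/deals/search', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}`,
        'Content-Type': 'application/json',
      },
      // On follow-up pages, add the cursor to the same search body.
      body: JSON.stringify(after ? { ...body, after } : body),
    });
    const page = await res.json();
    deals.push(...page.results);
    after = page.paging?.next?.after; // cursor for the next page, if any
  } while (after);
  return deals;
}
```

In the n8n version, the IF node plays the role of the do/while loop.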
Step 5: Aggregate metrics with a Code node
Add a Code node (set to "Run Once for All Items") to compute the snapshot row:
```javascript
const deals = $input.all().map(item => item.json);
const results = deals[0]?.results || deals;
// "Fetch Stages" must match the name you gave the Step 3 HTTP Request node.
const pipelineData = $('Fetch Stages').first().json.results;

// Build stage name lookup
const stageMap = {};
const stageOrder = [];
for (const pipeline of pipelineData) {
  for (const stage of pipeline.stages) {
    stageMap[stage.id] = stage.label;
    stageOrder.push(stage.label);
  }
}

// Calculate metrics
let totalValue = 0;
const byStage = {};
for (const deal of results) {
  const amount = parseFloat(deal.properties.amount || '0');
  totalValue += amount;
  const stageId = deal.properties.dealstage;
  const stageName = stageMap[stageId] || stageId;
  if (!byStage[stageName]) {
    byStage[stageName] = { count: 0, value: 0 };
  }
  byStage[stageName].count += 1;
  byStage[stageName].value += amount;
}

// Build the row in stage order
const today = new Date().toISOString().split('T')[0];
const row = [today, totalValue, results.length];
for (const stageName of stageOrder) {
  const data = byStage[stageName] || { count: 0, value: 0 };
  row.push(data.count, data.value);
}

return [{ json: { row, totalValue, dealCount: results.length, byStage } }];
```

The Code node must return an array of objects with a json property. Use $('Node Name').first().json to reference output from other nodes by name.
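To convince yourself the aggregation behaves as expected, the same logic can be exercised on made-up sample data outside n8n; every value here is invented test input mirroring the Code node above:

```javascript
// Mirror of the Code node's aggregation, run on sample deals.
const stageMap = { qualified: 'Qualified', closedwon: 'Closed Won' };
const results = [
  { properties: { amount: '5000', dealstage: 'qualified' } },
  { properties: { amount: '12000', dealstage: 'closedwon' } },
  { properties: { amount: null, dealstage: 'qualified' } }, // missing amount counts as 0
];

let totalValue = 0;
const byStage = {};
for (const deal of results) {
  const amount = parseFloat(deal.properties.amount || '0');
  totalValue += amount;
  const stageName = stageMap[deal.properties.dealstage] || deal.properties.dealstage;
  if (!byStage[stageName]) byStage[stageName] = { count: 0, value: 0 };
  byStage[stageName].count += 1;
  byStage[stageName].value += amount;
}

console.log(totalValue);        // 17000
console.log(byStage.Qualified); // { count: 2, value: 5000 }
```

Note the deal with a null amount still increments the stage count, which matches how the snapshot treats open deals with no value entered.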
Step 6: Append the row to Google Sheets
Add a Google Sheets node:
- Resource: Sheet Within Document
- Operation: Append Row
- Document: Select your snapshot spreadsheet
- Sheet: Select the sheet tab
- Mapping: Map each column from the row array output of the Code node
Alternatively, if you want to pass the full row array at once, use an HTTP Request node to call the Sheets API directly:
- Method: POST
- URL:
https://sheets.googleapis.com/v4/spreadsheets/YOUR_SPREADSHEET_ID/values/Sheet1!A1:append?valueInputOption=USER_ENTERED
- Authentication: Google OAuth2
- Body:
```
{
  "values": [{{ $json.row }}]
}
```

The Google Sheets node's "Append Row" operation finds the first empty row after your header and inserts the data there. If you have blank rows in the middle of your data, it may insert above them. Keep your data contiguous to avoid surprises.
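For reference, the same append call can be made from plain code. SPREADSHEET_ID and the OAuth token are placeholders; inside n8n the Google OAuth2 credential handles authentication for you:

```javascript
// Sketch: append one snapshot row via the Sheets values.append endpoint.
async function appendRow(row, token) {
  const id = process.env.SPREADSHEET_ID; // placeholder spreadsheet ID
  const url = `https://sheets.googleapis.com/v4/spreadsheets/${id}/values/Sheet1!A1:append?valueInputOption=USER_ENTERED`;
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    // The API expects a 2-D array: one inner array per appended row.
    body: JSON.stringify({ values: [row] }),
  });
  return res.json();
}
```

The 2-D values shape is why the n8n Body above wraps $json.row in an outer array.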
Step 7: Add error handling
- In each HTTP Request node, enable Settings → Retry On Fail with 2 retries and a 5-second wait between attempts
- Create a separate Error Workflow with an Error Trigger node that sends you a notification when the main workflow fails
- In the main workflow, go to Settings → Error Workflow and select your error workflow
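The notification text in the error workflow can be assembled with a small Code node. The payload field names below (workflow and execution keys) are assumptions about what the Error Trigger emits; verify them against the Error Trigger's actual output in your n8n version before relying on them:

```javascript
// Sketch: format a human-readable alert from an Error Trigger payload.
// Field names are assumptions; check them against your n8n instance.
function formatErrorAlert(payload) {
  const wf = payload.workflow?.name ?? 'unknown workflow';
  const msg = payload.execution?.error?.message ?? 'no error message';
  const url = payload.execution?.url ?? '';
  return `Pipeline snapshot failed in "${wf}": ${msg} ${url}`.trim();
}

// Invented sample payload for illustration.
const sample = {
  workflow: { name: 'Pipeline Snapshot' },
  execution: { error: { message: 'HubSpot 401' }, url: 'https://example.test/exec/1' },
};
console.log(formatErrorAlert(sample));
```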
Step 8: Test and activate
- Click Execute Workflow to run the full workflow manually
- Check the Code node output — verify the row array has the right number of columns
- Open Google Sheets and confirm a new row was appended with today's date and metrics
- Toggle the workflow to Active so the Schedule Trigger fires every Monday
Troubleshooting
Common questions
How many n8n executions does this use per month?
4 executions per month (one per Monday). Each execution runs through all nodes but counts as a single execution. On n8n Cloud Starter ($24/mo, 2,500 executions), this uses 0.16% of your quota.
What if my pipeline has more than 100 deals?
Add a Loop node after the deals search that checks for paging.next.after in the response and fetches additional pages. The Code node can handle all pages once they're collected.
Can I snapshot multiple pipelines?
Yes. Duplicate the HTTP Request node for each pipeline (changing the pipeline filter value) and merge the results in the Code node. Add separate stage columns per pipeline in your Google Sheet.
Cost and maintenance
- n8n cloud: starts at $24/mo for the Starter plan (2,500 executions/month). A weekly snapshot uses ~4 executions/month. Self-hosted n8n is free.
- Google Sheets API: free within Google's standard per-project usage quotas; one append per week is nowhere near the per-minute limits.
- Maintenance: minimal once running. Update the Code node stage mapping if you rename pipeline stages in HubSpot. Monitor the error workflow for failures.
Next steps
Once the basic snapshot is running, consider adding:
- Week-over-week deltas — read the previous row from Google Sheets in the Code node and calculate percentage changes
- Weighted pipeline — multiply each stage's value by its historical close rate for a weighted total
- Chart automation — use Google Sheets' built-in charting on the snapshot data to track trends visually
- Slack notification — add a Slack node at the end to confirm the snapshot was recorded, with a link to the Sheet
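The week-over-week idea sketches out like this: read the previous snapshot row from the sheet, then compute the percentage change against the new one. The sample rows here are invented, and column 1 is Total Value in the layout described in Step 1:

```javascript
// Sketch: percentage change between the latest two snapshot rows.
function weekOverWeek(prevRow, currRow, col = 1) {
  const prev = Number(prevRow[col]);
  const curr = Number(currRow[col]);
  if (prev === 0) return null; // avoid dividing by zero on an empty week
  return ((curr - prev) / prev) * 100;
}

// Invented sample rows: [Date, Total Value, Deal Count, ...]
const lastWeek = ['2024-01-01', 20000, 12];
const thisWeek = ['2024-01-08', 23000, 14];
console.log(weekOverWeek(lastWeek, thisWeek).toFixed(1)); // "15.0"
```

In n8n, a Google Sheets "Get Row(s)" step before the Code node can supply prevRow.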
Looking to scale your AI operations?
We build and optimize automation systems for mid-market businesses. Let's discuss the right approach for your team.