Gong - How to Get Your Conversation Data

A practical guide to getting your conversation data out of Gong - covering API access, historical backfill, incremental polling, webhook-triggered flows, and how to route structured data into your downstream systems.

What you'll learn

  • What conversation data you can extract from Gong - transcripts, metadata, speaker labels, and call context
  • How to access data via the Gong API - authentication, endpoints, and pagination
  • Three extraction patterns: historical backfill, incremental polling, and webhook-triggered
  • How to connect Gong data pipelines to Zapier, n8n, and Make
  • Advanced use cases - custom scoring, CRM enrichment, compliance, and warehouse analytics

Data

What Data You Can Extract From Gong

Gong captures more than just the recording. Every call produces a set of structured assets that can be extracted via API - the transcript itself, speaker identification, timing metadata, and contextual information about the call and its associated deal.

Common fields teams care about

  • Full transcript text
  • Speaker labels (rep vs. prospect)
  • Call owner / rep name
  • Account and opportunity name
  • Deal stage at time of call
  • Call date, time, and duration
  • Participant list and email addresses
  • Call direction (inbound / outbound)
  • Recording availability
  • Associated CRM record IDs

API Access

How to Get Transcripts via the Gong API

Gong exposes calls and transcripts through a REST API. The workflow is: authenticate with an access key, list calls by date range, then fetch the transcript for each call ID.

1. Authenticate

Gong uses Basic authentication with an access key and secret issued by your Gong admin. Encode the pair as Base64(access_key:secret) and pass it in the Authorization header on every request.

Authorization: Basic Base64(<access_key>:<secret>)
Content-Type: application/json
Your integration user needs the API scope with Read: Calls and Read: Transcripts permissions. Contact your Gong admin to provision credentials if you don't have them.
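
As a minimal Python sketch, the header can be built with the standard library; the key and secret values here are placeholders:

```python
import base64

def gong_auth_header(access_key: str, secret: str) -> dict:
    """Build the Basic auth header Gong expects: Base64(access_key:secret)."""
    token = base64.b64encode(f"{access_key}:{secret}".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }

# Placeholder credentials - substitute the pair issued by your Gong admin.
headers = gong_auth_header("MY_ACCESS_KEY", "MY_SECRET")
```

Pass these headers on every request to api.gong.io.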

2. List calls by date range

Call the POST /v2/calls/extensive endpoint with a filter object specifying your date range. Results are paginated - each response includes a records.cursor to fetch the next page.

POST https://api.gong.io/v2/calls/extensive

{
  "filter": {
    "fromDateTime": "2025-01-01T00:00:00Z",
    "toDateTime":   "2025-02-01T00:00:00Z"
  },
  "cursor": null
}

The response returns an array of call objects with id, started, duration, parties, and associated CRM data. Keep paginating until records.cursor is null.
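
The pagination loop can be sketched like this; the `fetch_page` helper stands in for your HTTP client so the paging logic stays testable on its own:

```python
def list_all_calls(fetch_page, from_dt: str, to_dt: str) -> list:
    """Paginate POST /v2/calls/extensive until no cursor is returned.

    fetch_page(body) -> parsed JSON response dict; inject your own
    HTTP call (requests, urllib, etc.) here.
    """
    calls, cursor = [], None
    while True:
        body = {
            "filter": {"fromDateTime": from_dt, "toDateTime": to_dt},
            "cursor": cursor,
        }
        page = fetch_page(body)
        calls.extend(page.get("calls", []))
        cursor = page.get("records", {}).get("cursor")
        if not cursor:  # last page carries no cursor
            return calls
```

Persist `cursor` between runs if a backfill might be interrupted.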

3. Fetch the transcript

For each call ID, request the transcript via POST /v2/calls/transcript. The response contains an array of utterances, each with a speaker ID, timestamp, and text segment.

POST https://api.gong.io/v2/calls/transcript

{
  "filter": {
    "callIds": ["7782342274025937895"]
  }
}

Each utterance in the transcript array includes speakerId, topic, and sentences[] with start/end timestamps and text. Reassemble into plain text by concatenating sentences, or preserve the structured format for per-speaker analysis.
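
A sketch of the reassembly step is below. It assumes the response wraps utterances in a top-level callTranscripts array; verify the exact shape against your own API responses before relying on it:

```python
def reassemble_transcript(transcript_json: dict, speaker_names: dict = None) -> str:
    """Flatten Gong's utterance structure into plain text, one turn per line.

    speaker_names optionally maps speakerId -> display name; unknown
    speakers fall back to their raw ID.
    """
    speaker_names = speaker_names or {}
    lines = []
    for call in transcript_json.get("callTranscripts", []):
        for utterance in call.get("transcript", []):
            speaker_id = utterance.get("speakerId", "unknown")
            speaker = speaker_names.get(speaker_id, speaker_id)
            # Each utterance carries sentences[] with start/end/text.
            text = " ".join(s["text"] for s in utterance.get("sentences", []))
            lines.append(f"{speaker}: {text}")
    return "\n".join(lines)
```

Keep the structured form alongside the flattened text if you plan per-speaker analysis later.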

4. Handle rate limits and transcript availability

Rate limits

Gong enforces per-endpoint rate limits that vary by plan. When you receive a 429 response, back off using the Retry-After header. For bulk operations, pace requests at 1–2 per second and persist your pagination cursor between runs.

Transcript timing

Transcripts are not available the instant a call ends. Gong processes recordings asynchronously - typical lag is minutes to hours depending on call length and system load. Build a buffer into your extraction timing or implement a retry with exponential backoff for recently completed calls.
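
A minimal retry-with-backoff sketch covering both cases; the `request_fn` contract (status code, Retry-After value, payload) is an illustrative abstraction over your HTTP client:

```python
import time

def with_backoff(request_fn, max_attempts: int = 5, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Retry a request with exponential backoff.

    request_fn() -> (status_code, retry_after_seconds_or_None, payload).
    On 429 we honor Retry-After when present, otherwise the delay doubles
    each attempt. Any status under 400 returns immediately.
    """
    delay = base_delay
    for _ in range(max_attempts):
        status, retry_after, payload = request_fn()
        if status < 400:
            return payload
        if status == 429:
            sleep(retry_after if retry_after is not None else delay)
            delay *= 2
            continue
        raise RuntimeError(f"request failed with status {status}")
    raise RuntimeError("gave up after max_attempts")
```

The same wrapper works for recently completed calls whose transcripts are still processing: treat an empty transcript as retryable and back off the same way.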

Patterns

Key Extraction Flows

There are three practical patterns for getting transcripts out of Gong. The right choice depends on whether you're doing a one-off migration, running ongoing extraction, or need near real-time processing.

Backfill (Historical Export)

One-off migration of past calls

1. Define your date range - typically 6–12 months of historical calls, or all available data if migrating.

2. Call POST /v2/calls/extensive with fromDateTime and toDateTime filters. Paginate through the full result set, collecting all call IDs.

3. For each call ID, fetch the transcript via POST /v2/calls/transcript. Pace requests at 1–2 per second to stay within rate limits.

4. Store each transcript with its call metadata (call ID, date, participants, deal context) in your data warehouse or object store.

5. Once the backfill completes, run your analysis pipeline against the stored data in bulk.

Tip: Persist your pagination cursor between batches. If the process is interrupted, you can resume from where you left off instead of re-scanning from the start.
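
A minimal sketch of cursor persistence between batches; the file name and JSON shape are illustrative, and a production run would more likely store this in a database:

```python
import json
import os

CURSOR_FILE = "gong_backfill_cursor.json"  # hypothetical local state file

def load_cursor(path: str = CURSOR_FILE):
    """Return the last saved cursor, or None on a fresh run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f).get("cursor")
    return None

def save_cursor(cursor: str, path: str = CURSOR_FILE) -> None:
    """Persist the cursor after each successfully processed page."""
    with open(path, "w") as f:
        json.dump({"cursor": cursor}, f)
```

Call save_cursor after each page is fully stored, so a crash never loses more than one page of work.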

Incremental Polling

Ongoing extraction on a schedule

1. Set a cron job or scheduled trigger (hourly, daily, etc.) that runs your extraction script.

2. On each run, call POST /v2/calls/extensive with fromDateTime set to your last successful poll timestamp.

3. Fetch transcripts for any new call IDs returned. Use the call ID as a deduplication key to avoid reprocessing.

4. Route each transcript and its metadata to your downstream pipeline - analysis tool, warehouse, or automation platform.

5. Update your stored cursor / timestamp to the current run time for the next poll cycle.

Tip: Account for transcript processing delay. A call that ended 10 minutes ago may not have a transcript yet. Polling with a 1–2 hour lag reduces empty fetches.
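
One way to build that lag into the poll window is to trail toDateTime behind the current time; a sketch:

```python
from datetime import datetime, timedelta, timezone

def polling_window(last_poll_iso: str, lag_hours: int = 2, now=None) -> dict:
    """Compute the fromDateTime/toDateTime filter for an incremental poll.

    toDateTime trails the current time by lag_hours so we only request
    calls whose transcripts have likely finished processing. `now` is
    injectable for testing.
    """
    now = now or datetime.now(timezone.utc)
    to_dt = now - timedelta(hours=lag_hours)
    return {
        "fromDateTime": last_poll_iso,
        "toDateTime": to_dt.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
```

Store the returned toDateTime as the next run's fromDateTime so no window is skipped or double-fetched.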

Webhook-Triggered

Near real-time on call completion

1. Register a webhook endpoint in your Gong admin settings. Gong fires events when a call is processed and the transcript becomes available.

2. When the webhook fires, parse the event payload to extract the call ID and metadata.

3. Immediately fetch the transcript via POST /v2/calls/transcript using the call ID from the event.

4. Route the transcript and metadata downstream - to your analysis pipeline, CRM updater, or automation tool.

Note: Webhook availability varies by Gong plan and tier. Not all accounts have access to webhook triggers. Check with your Gong admin or account rep for your plan's capabilities.

Automation

Send Gong Transcripts to Automation Tools

Once you can extract transcripts from Gong, the next step is routing them through Semarize for structured analysis and into your downstream systems. Below are end-to-end example flows - each showing the full pipeline from Gong trigger through Semarize evaluation to CRM, Slack, or database output.

Zapier - no-code automation

Gong → Zapier → Semarize → CRM

Detect new Gong calls, fetch the transcript, send it to Semarize for structured analysis, then write the scored output - signals, flags, and evidence - directly to your CRM.

Example Zap

1. Trigger: New Call in Gong - fires when Gong processes a new call
   App: Gong · Event: New Call Completed · Output: call_id, participants
2. Webhooks by Zapier - fetch transcript from Gong API
   Method: POST · URL: https://api.gong.io/v2/calls/transcript
   Auth: Basic (access_key:secret)
   Body: { filter: { callIds: [{{call_id}}] } }
   → Transcript returned
3. Webhooks by Zapier - POST /v1/runs (sync) to Semarize
   Method: POST · URL: https://api.semarize.com/v1/runs
   Auth: Bearer smz_live_...
   Body: { kit_code, mode: "sync", input: { transcript } }
   → Structured output returned
4. Formatter by Zapier - extract brick values from the Semarize response
   Extract: bricks.overall_score.value, bricks.risk_flag.value, bricks.pain_point.value
5. Salesforce - Update Record - write scored signals to the Opportunity
   AI Score: {{overall_score}} · Risk Flag: {{risk_flag}} · Pain Point: {{pain_point}}

Setup steps

1. Create a new Zap. Choose Gong as the trigger app and select "New Call Completed" as the event. Connect your Gong account.

2. Add a "Webhooks by Zapier" action (Custom Request) to fetch the transcript from Gong. Set the method to POST and the URL to https://api.gong.io/v2/calls/transcript, add your Basic auth header, and pass the call_id in the request body.

3. Add a second "Webhooks by Zapier" action. Set the method to POST and the URL to https://api.semarize.com/v1/runs. Add your Semarize API key as a Bearer token. In the body, set kit_code to your Kit, mode to "sync", and map the transcript text into input.transcript.

4. Add a Formatter step to extract individual brick values from the Semarize JSON response - overall_score, risk_flag, pain_point, etc.

5. Add a Salesforce (or HubSpot, Sheets, etc.) action to write the extracted scores and signals to your CRM record.

6. Test each step end-to-end, then turn on the Zap.

Watch out for: Zapier has step data size limits that can truncate very long transcripts. For calls over 60 minutes, consider storing the transcript in cloud storage and passing a reference URL instead of inline text. Use mode: "sync" so Semarize returns results inline - Zapier doesn't natively support polling loops.
Learn more about Zapier automation
n8n - self-hosted workflows

Gong → n8n → Semarize → Database

Poll Gong for new calls on a schedule, fetch transcripts, send each one to Semarize for analysis, then write the structured scores and signals to your database. n8n's native loop support handles pagination and batch processing.

Example Workflow

1. Cron - Every Hour - triggers the workflow on schedule
   Mode: Every Hour · Timezone: UTC
2. HTTP Request - List Calls: POST https://api.gong.io/v2/calls/extensive (Gong)
   Auth: Basic · Body: { filter: { fromDateTime: {{$now.minus(1, 'hour')}} } }
3. For each call ID: HTTP Request - Fetch Transcript: POST /v2/calls/transcript (Gong)
   Body: { filter: { callIds: [{{$json.id}}] } }
4. Code - Reassemble Transcript - concatenate utterances into plain text
   Join: sentences[].text by speaker
5. HTTP Request - Semarize: POST /v1/runs (sync)
   URL: https://api.semarize.com/v1/runs · Auth: Bearer smz_live_...
   Body: { kit_code, mode: "sync", input: { transcript } }
   → Scores & signals returned
6. Postgres - Insert Row - write structured output to database
   Table: call_evaluations · Columns: call_id, score, risk_flag, pain_point

Setup steps

1. Add a Cron node as the workflow trigger. Set the interval to your desired polling frequency (hourly works well for most teams).

2. Add an HTTP Request node to list new calls from Gong. Set the method to POST and the URL to https://api.gong.io/v2/calls/extensive, configure Basic auth, and set fromDateTime to one interval ago.

3. Add a Split In Batches node to iterate over the returned call IDs. Inside the loop, add an HTTP Request node to fetch each transcript via POST /v2/calls/transcript.

4. Add a Code node (JavaScript) to reassemble the utterances array into a single transcript string. Join each sentence's text, prefixed by speaker name.

5. Add another HTTP Request node to send the transcript to Semarize. Set the method to POST and the URL to https://api.semarize.com/v1/runs. Add your API key as a Bearer token. Set kit_code, mode to "sync", and map the transcript into input.transcript.

6. Add a Code node to extract the brick values from the Semarize response - overall_score, risk_flag, pain_point, evidence, confidence.

7. Add a Postgres (or MySQL / HTTP Request) node to write the structured output. Use call_id as the primary key for upserts.

8. Activate the workflow. Monitor the first few runs to verify Semarize responses are arriving and writing correctly.

Watch out for: Use call IDs as deduplication keys to prevent reprocessing. You can also use async mode with n8n's native loop - POST /v1/runs (default async), then poll GET /v1/runs/:runId with a Wait + IF loop until status is "succeeded".
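
The same async pattern works outside n8n. A Python sketch of the submit-then-poll loop follows; the post_run/get_run helpers stand in for your HTTP client, and the run "id" field and "failed" status value are assumptions - the doc only names POST /v1/runs, GET /v1/runs/:runId, and the "succeeded" status:

```python
def run_kit_async(post_run, get_run, kit_code: str, transcript: str,
                  poll_interval: float = 5, max_polls: int = 60,
                  sleep=lambda s: None):
    """Submit a Semarize run in async mode and poll until it finishes.

    post_run(body) -> created-run dict; get_run(run_id) -> current run
    state. Both are injected so this sketch stays client-agnostic.
    """
    run = post_run({"kit_code": kit_code, "input": {"transcript": transcript}})
    run_id = run["id"]  # assumed response field
    for _ in range(max_polls):
        current = get_run(run_id)
        if current["status"] == "succeeded":
            return current
        if current["status"] == "failed":  # assumed terminal status
            raise RuntimeError(f"run {run_id} failed")
        sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not finish in time")
```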
Learn more about n8n automation
Make - visual automation with branching

Gong → Make → Semarize → CRM + Slack

Fetch new Gong transcripts on a schedule, send each to Semarize for structured analysis, then use a Router to branch the scored output - alert on risk flags via Slack and write all signals to your CRM.

Example Scenario

1. Schedule - Every 30 min - triggers the scenario on interval
2. HTTP - List New Calls: POST https://api.gong.io/v2/calls/extensive (Gong)
   Auth: Basic · Body: { filter: { fromDateTime: {{formatDate(...)}} } }
3. HTTP - Fetch Transcript: POST /v2/calls/transcript (Gong, per call)
   Iterator: for each call in response · Body: { filter: { callIds: [{{item.id}}] } }
4. HTTP - Semarize: POST /v1/runs (sync)
   URL: https://api.semarize.com/v1/runs · Auth: Bearer smz_live_...
   Body: { kit_code, mode: "sync", input: { transcript } }
   → Structured output
5. Router - Branch on Risk Flag - route by Semarize output
   Branch 1: IF risk_flag.value = true · Branch 2: ALL (fallthrough)
6. Branch 1 (risk detected): Slack - Alert Channel - notify team about flagged call
   Channel: #deal-alerts · Message: Risk on {{call_id}}, score: {{score}}
7. Branch 2 (all calls): Salesforce - Update Record - write all scored signals to the Opportunity
   AI Score: {{overall_score}} · Risk Flag: {{risk_flag}} · Pain Point: {{pain_point}}

Setup steps

1. Create a new Scenario. Add a Schedule module as the trigger, set to your desired interval (15–60 minutes is typical).

2. Add an HTTP module to list new calls from Gong. Set the method to POST and the URL to https://api.gong.io/v2/calls/extensive, configure Basic auth, and filter by fromDateTime since the last run.

3. Add an Iterator module to loop through each call. For each, add an HTTP module to fetch the transcript via POST /v2/calls/transcript.

4. Add another HTTP module to send the transcript to Semarize. Set the URL to https://api.semarize.com/v1/runs, add your Bearer token, and set kit_code, mode to "sync", and input.transcript from the previous step. Parse the response as JSON.

5. Add a Router module. Define Branch 1 with a filter: bricks.risk_flag.value equals true. Leave Branch 2 as a fallthrough (no filter).

6. On Branch 1, add a Slack module to alert your team when risk is detected. Map the score, risk flag, and call ID into the message.

7. On Branch 2, add a Salesforce module to write all brick values (score, risk_flag, pain_point) to the Opportunity record.

8. Set the scenario schedule and activate. Monitor the first few runs in Make's execution log.

Watch out for: Each API call counts as an operation. A scenario processing 50 calls uses ~150 operations (list + transcript + Semarize per call). Use mode: "sync" to avoid needing a polling loop for each run.
Learn more about Make automation

What you can build

What You Can Do With Gong Data in Semarize

Custom grounding, cross-platform scoring, framework testing, and building your own tools on structured conversation signals.

Product Knowledge Accuracy Check

Knowledge-Grounded QA

What Semarize generates

feature_accuracy = 0.72
pricing_misstated = true
competitor_claim_valid = false
knowledge_gap = "enterprise_sso"

Your sales team's pitch deck changed last quarter, but how do you know reps are saying the right things? Run a knowledge-grounded kit against your product documentation. Semarize checks every claim reps make against your source-of-truth docs - feature descriptions, pricing tiers, competitive positioning. When a rep tells a prospect "we support SSO on all plans" but your docs say Enterprise-only, Semarize flags it with evidence. Product marketing reviews the weekly accuracy report and updates enablement materials where gaps appear.

Learn more about QA & Compliance
Product Accuracy Report - grounded against Product Docs v4.2
  • "SSO available on all plans" → docs say Enterprise only (incorrect)
  • "99.9% uptime SLA" → 99.9% SLA confirmed (correct)
  • "API rate limit is 1000/min" → 500/min on Pro, 2000/min on Enterprise (imprecise)
3 claims checked · 1 incorrect · 1 imprecise

Pricing & Packaging Accuracy Audit

Knowledge-Grounded Commercial Verification

What Semarize generates

pricing_tier_correct = false
discount_authority_exceeded = true
packaging_outdated = true
commercial_risk_level = "high"

Your pricing page changed last quarter, but reps are still quoting old tiers on calls. Run a knowledge-grounded kit against your current rate card and discount authority matrix on every transcript. Each call gets scored for pricing_tier_correct, discount_authority_exceeded, and packaging_outdated. Finance gets a weekly report of commercial risk exposure from pricing errors. The 8% revenue leakage from mis-quoted pricing gets caught before contracts go out.

Learn more about QA & Compliance
Unified Deal Score - Acme Corp: 68
  • Gong - discovery call, Jan 15 - 4 signals - 72
  • Zoom - technical demo, Jan 22 - 2 signals - 65
  • Teams - internal deal review, Jan 25 - 1 signal - 58
Signal coverage by source: Gong, Zoom, Teams

Methodology A/B Testing

Framework Optimization

What Semarize generates

framework_a_score = 71
framework_b_score = 64
win_correlation_a = 0.73
win_correlation_b = 0.51

Your team is debating whether MEDDICC or your custom qualification framework better predicts closed-won outcomes. Instead of arguing in a meeting, you run both as separate kits against the same 200 calls. Semarize scores every call twice - once with each framework. After correlating scores against actual outcomes in your warehouse, the data shows MEDDICC correlates roughly 40% more strongly with wins than the custom framework. The methodology debate is resolved with data, not opinions.

Learn more about Sales Coaching
Framework A/B Test - 200 calls scored
  • MEDDICC v2 (winner): avg score 71, win correlation 0.73, false positive rate 12%
  • Custom SPICED: avg score 64, win correlation 0.51, false positive rate 24%
MEDDICC v2 predicts outcomes 40% more reliably

Custom Messaging Drift Tracker

Structured Enablement Feedback Loop

Vibe-coded

What Semarize generates

approved_messaging_used = false
drift_category = "value_prop"
days_since_enablement = 14
drift_severity = 0.71

An enablement lead vibe-codes a Next.js app that runs a messaging accuracy kit grounded against the approved sales playbook. Every Gong call gets checked for whether reps use the approved value propositions, positioning statements, and competitive responses. The app plots messaging drift over time - showing exactly which playbook elements decay fastest after training sessions. Enablement stops guessing which topics need refreshing and gets a live feedback loop measured in days, not quarters.

Learn more about Sales Coaching
Deal Readiness - vibe-coded with Next.js
Acme Corp - $85k · Stage 3 - Readiness: 82%
  • Budget confirmed
  • Decision maker identified
  • Pain quantified
  • Legal review scheduled
  • Security questionnaire sent
Recommended action: schedule technical review (12 days since last contact)

Watch out for

Common Challenges & Gotchas

These are the issues that come up most often when teams start extracting transcripts from Gong at scale.

Transcript not ready immediately

Gong processes recordings asynchronously. Attempting to fetch a transcript too soon after a call ends will return empty or incomplete data. Build in a delay or retry mechanism.

Permissions and admin scopes

API access requires credentials with the right scope. If your integration user lacks transcript read permissions, requests will fail with permission errors or return partial data.

API rate limits

Exceeding rate limits results in throttled responses. Implement exponential backoff and pace bulk operations to avoid hitting ceilings, especially during backfills.

Pagination and cursors

Call listing endpoints return paginated results. Track your cursor position carefully - losing a cursor mid-backfill means re-scanning from the start or risking missed records.

Speaker label inconsistencies

Speaker identification isn't always perfect. Multiple participants, poor audio, or unregistered users can lead to misattributed utterances. Validate labels before using them for per-speaker analysis.

Large transcript payloads

Long calls produce large JSON payloads that can exceed limits in automation tools. Plan for payload chunking or external storage when working with calls over 60 minutes.

Duplicate processing protection

Without idempotency checks, re-running an extraction flow can process the same call twice. Use call IDs as deduplication keys to ensure each transcript is handled exactly once.
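
A minimal dedup sketch using call IDs as keys; in production the `processed` set would live in a database or key-value store rather than in memory:

```python
def filter_new_calls(call_ids: list, processed: set) -> list:
    """Return only call IDs not yet handled, marking them as seen.

    `processed` is mutated in place so repeat runs skip already-handled
    calls, making the extraction flow idempotent.
    """
    new = [cid for cid in call_ids if cid not in processed]
    processed.update(new)
    return new
```

Run every batch of listed calls through this filter before fetching transcripts, so re-running a flow never reprocesses the same call.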
