Get Your Data
Jiminny - How to Get Your Conversation Data
A practical guide to getting your conversation data out of Jiminny — covering REST API access, historical backfill, incremental polling, and how to route structured data into your downstream systems.
What you'll learn
- What conversation data you can extract from Jiminny — transcripts, metadata, speaker labels, coaching data, and CRM context
- How to access data via the Jiminny Customer API — API key authentication, endpoints, and pagination
- Three extraction patterns: historical backfill, incremental polling, and scheduled batch
- How to connect Jiminny data pipelines to Zapier, n8n, and Make
- Advanced use cases — coaching A/B testing, talk track analysis, handoff scoring, and custom dashboards
Data
What Data You Can Extract From Jiminny
Jiminny captures more than just the recording. Every activity produces a set of structured assets that can be extracted via the Customer API — the transcript itself, speaker identification, timing metadata, coaching frameworks, and contextual CRM data associated with the call.
API Access
How to Get Transcripts via the Jiminny API
Jiminny exposes activities and transcripts through a REST API documented at jiminny.github.io/customer-api-docs (Swagger UI). The workflow is: authenticate with an API key, list activities by date range, then fetch the transcript for each activity.
Authenticate
Jiminny uses API Key authentication. Your Jiminny admin generates the key from the admin settings panel. Pass the key in the Authorization header on every request.
Authorization: Bearer <api_key>
Content-Type: application/json
List activities by date range
Call the activities endpoint with date range filters. Results are paginated — each response includes a cursor to fetch the next page. Jiminny recommends batches of 500–1000 activities for optimal performance.
GET /v1/activities?from=2025-01-01T00:00:00Z&to=2025-02-01T00:00:00Z&limit=500
Authorization: Bearer <api_key>
The response returns an array of activity objects with id, timestamp, duration, participants, and associated CRM data. Keep paginating until all results are returned.
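The list-and-paginate loop can be sketched in Python. The base URL is a placeholder, and the `cursor` and `data` field names are inferred from this guide rather than confirmed API details; check the Swagger docs for the exact response shape:

```python
# Sketch: page through the activities endpoint until the cursor is exhausted.
# BASE_URL and the "data" / "cursor" field names are assumptions.
import json
import urllib.request

BASE_URL = "https://api.jiminny.example/v1"  # placeholder base URL


def http_get(url, api_key):
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def list_activities(from_ts, to_ts, api_key, fetch=http_get, limit=500):
    """Yield every activity in [from_ts, to_ts), following pagination cursors."""
    cursor = None
    while True:
        url = f"{BASE_URL}/activities?from={from_ts}&to={to_ts}&limit={limit}"
        if cursor:
            url += f"&cursor={cursor}"
        page = fetch(url, api_key)
        yield from page.get("data", [])
        cursor = page.get("cursor")
        if not cursor:  # no cursor means this was the last page
            break
```

Injecting the `fetch` callable keeps the pagination logic testable without real network calls.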
Fetch the transcript
For each activity ID, request the transcript from the activity detail endpoint. The response contains speaker-labelled utterances with timestamps, plus associated coaching data, action items, topics, and questions.
GET /v1/activities/<activity_id>/transcript
Authorization: Bearer <api_key>
Each utterance in the transcript includes speaker, timestamp, and text. Reassemble into plain text by concatenating utterances, or preserve the structured format for per-speaker analysis.
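Reassembly can be a few lines. The `speaker` and `text` field names follow the description above but should be verified against a real response:

```python
# Sketch: flatten a speaker-labelled utterance list into plain text.
# Field names ("speaker", "text") are assumptions based on this guide.
def transcript_to_text(utterances):
    """Join utterances into 'Speaker: text' lines, preserving order."""
    return "\n".join(f"{u['speaker']}: {u['text']}" for u in utterances)
```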
Handle rate limits and recording expiry
Rate limits
Respect the API's rate limits and use the recommended batch sizes of 500–1000 activities. When you receive a rate-limit response (typically HTTP 429), back off exponentially and retry. For bulk operations, pace requests to stay under the ceilings, especially during backfills.
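A minimal backoff wrapper might look like this. It assumes a rate-limited request surfaces as an exception; adapt the handling to however your HTTP client reports 429s:

```python
# Sketch: exponential backoff around any fetch callable.
# Delays double on each retry: 1s, 2s, 4s, ...
import time


def with_backoff(fn, *args, retries=5, base_delay=1.0, sleep=time.sleep):
    for attempt in range(retries):
        try:
            return fn(*args)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            sleep(base_delay * (2 ** attempt))
```

Passing `sleep` as a parameter makes the wrapper testable without real waiting.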
Recording link expiry
Recording links returned by the Jiminny API expire after 24 hours. If you need to retain access to audio or video files, download them within that window. Transcript text does not expire — only the media URLs are time-limited.
Patterns
Key Extraction Flows
There are three practical patterns for getting transcripts out of Jiminny. The right choice depends on whether you're doing a one-off migration, running ongoing extraction, or need scheduled batch processing.
Backfill (Historical Export)
One-off migration of past calls
Define your date range — typically 6–12 months of historical activities, or all available data if migrating
Call the activities endpoint with your date range filters. Use batch sizes of 500–1000 for optimal performance. Paginate through the full result set, collecting all activity IDs
For each activity ID, fetch the transcript via the transcript endpoint. Pace requests to stay within rate limits
Store each transcript with its activity metadata (activity ID, date, participants, CRM context, coaching data) in your data warehouse or object store
Once the backfill completes, run your analysis pipeline against the stored data in bulk
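The storage step above might look like this flat-file sketch, standing in for a warehouse or object store; the metadata fields shown are illustrative:

```python
# Sketch: persist each transcript plus its activity metadata as one JSON
# document keyed by activity ID. Swap the filesystem for S3 or a warehouse.
import json
from pathlib import Path


def store_transcript(out_dir, activity, transcript):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    record = {"activity": activity, "transcript": transcript}
    path = out / f"{activity['id']}.json"  # activity ID as the natural key
    path.write_text(json.dumps(record, indent=2))
    return path
```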
Incremental Polling
Ongoing extraction on a schedule
Set a cron job or scheduled trigger (hourly, daily, etc.) that runs your extraction script
On each run, call the activities endpoint with the from parameter set to your last successful poll timestamp
Fetch transcripts for any new activity IDs returned. Use the activity ID as a deduplication key to avoid reprocessing
Route each transcript and its metadata to your downstream pipeline — analysis tool, warehouse, or automation platform
Update your stored cursor / timestamp to the current run time for the next poll cycle
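The cursor bookkeeping in the polling steps can be as simple as a file; swap in a database row or key-value entry for production:

```python
# Sketch: persist the last successful poll timestamp between runs so each
# poll picks up exactly where the previous one stopped.
from pathlib import Path


def load_cursor(path, default):
    """Return the stored timestamp, or the default on first run."""
    p = Path(path)
    return p.read_text().strip() if p.exists() else default


def save_cursor(path, timestamp):
    Path(path).write_text(timestamp)
```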
Scheduled Batch Processing
Daily or weekly bulk extraction and analysis
Set up a scheduled job (daily end-of-day or weekly) that collects all activities from the previous period
Pull activities in batches of 500–1000 using the recommended batch sizes for the Jiminny API
Fetch transcripts, coaching data, action items, and CRM context for each activity in the batch
Route the complete dataset to your analysis pipeline — run Semarize kits in bulk, then write structured output to your warehouse or CRM
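Computing the "previous period" window for a daily job is plain date arithmetic; this sketch emits ISO-8601 strings in the form the from/to parameters shown earlier expect:

```python
# Sketch: the previous full UTC day as a (from, to) pair for a daily batch job.
from datetime import datetime, timedelta, timezone


def previous_day_window(now=None):
    now = now or datetime.now(timezone.utc)
    end = now.replace(hour=0, minute=0, second=0, microsecond=0)  # today 00:00 UTC
    start = end - timedelta(days=1)                               # yesterday 00:00 UTC
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)
```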
Automation
Send Jiminny Transcripts to Automation Tools
Once you can extract transcripts from Jiminny, the next step is routing them through Semarize for structured analysis and into your downstream systems. Below are end-to-end example flows — each showing the full pipeline from Jiminny API through Semarize evaluation to CRM, Slack, or database output.
Jiminny → Zapier → Semarize → CRM
Poll Jiminny for new activities on a schedule, fetch the transcript, send it to Semarize for structured analysis, then write the scored output — signals, flags, and evidence — directly to your CRM.
Setup steps
Create a new Zap. Choose "Schedule by Zapier" as the trigger and set it to run every hour (or your preferred interval).
Add a "Webhooks by Zapier" Action (Custom Request) to list new activities from Jiminny. Set method to GET, URL to the activities endpoint with a from parameter based on last run time, and add your API key as a Bearer token.
Add another "Webhooks by Zapier" Action to fetch the transcript for each activity. Set method to GET and pass the activity ID in the URL.
Add a third "Webhooks by Zapier" Action. Set method to POST, URL to https://api.semarize.com/v1/runs. Add your Semarize API key as a Bearer token. In the body, set kit_code to your Kit, mode to "sync", and map the transcript text into input.transcript.
Add a Formatter step to extract individual brick values from the Semarize JSON response — overall_score, risk_flag, pain_point, etc.
Add a Salesforce (or HubSpot, Sheets, etc.) Action to write the extracted scores and signals to your CRM record.
Test each step end-to-end, then turn on the Zap.
Jiminny → n8n → Semarize → Database
Poll Jiminny for new activities on a schedule, fetch transcripts, send each one to Semarize for analysis, then write the structured scores and signals to your database. n8n's native loop support handles pagination and batch processing.
Setup steps
Add a Cron node as the workflow trigger. Set the interval to your desired polling frequency (hourly works well for most teams).
Add an HTTP Request node to list new activities from Jiminny. Set method to GET, URL to the activities endpoint, configure Bearer auth with your API key, and set the from parameter to one interval ago.
Add a Split In Batches node to iterate over the returned activity IDs. Inside the loop, add an HTTP Request node to fetch each transcript via the transcript endpoint.
Add a Code node (JavaScript) to reassemble the utterances array into a single transcript string. Join each utterance's text, prefixed by speaker name.
Add another HTTP Request node to send the transcript to Semarize. Set method to POST, URL to https://api.semarize.com/v1/runs. Add your API key as a Bearer token. Set kit_code, mode to "sync", and map the transcript into input.transcript.
Add a Code node to extract the brick values from the Semarize response — overall_score, risk_flag, pain_point, evidence, confidence.
Add a Postgres (or MySQL / HTTP Request) node to write the structured output. Use activity_id as the primary key for upserts.
Activate the workflow. Monitor the first few runs to verify Semarize responses are arriving and writing correctly.
Jiminny → Make → Semarize → CRM + Slack
Fetch new Jiminny transcripts on a schedule, send each to Semarize for structured analysis, then use a Router to branch the scored output — alert on risk flags via Slack and write all signals to your CRM.
Setup steps
Create a new Scenario. Add a Schedule module as the trigger, set to your desired interval (15–60 minutes is typical).
Add an HTTP module to list new activities from Jiminny. Set method to GET, URL to the activities endpoint, configure Bearer auth, and filter by from parameter since the last run.
Add an Iterator module to loop through each activity. For each, add an HTTP module to fetch the transcript via the transcript endpoint.
Add another HTTP module to send the transcript to Semarize. Set URL to https://api.semarize.com/v1/runs, add your Bearer token, and set kit_code, mode to "sync", and input.transcript from the previous step. Parse the response as JSON.
Add a Router module. Define Branch 1 with a filter: bricks.risk_flag.value equals true. Leave Branch 2 as a fallthrough (no filter).
On Branch 1, add a Slack module to alert your team when risk is detected. Map the score, risk flag, and activity ID into the message.
On Branch 2, add a Salesforce module to write all brick values (score, risk_flag, pain_point) to the Opportunity record.
Set the scenario schedule and activate. Monitor the first few runs in Make’s execution log.
What you can build
What You Can Do With Jiminny Data in Semarize
Framework A/B testing, cross-rep talk track analysis, handoff quality scoring, and building your own revenue intelligence layer on structured conversation signals.
Coaching Framework A/B Testing
Data-Driven Methodology Selection
What Semarize generates
Your team debates whether Jiminny's built-in MEDDICC coaching framework or your custom qualification framework better predicts closed-won deals. Instead of arguing, you pull 400 call transcripts from Jiminny's API and run both frameworks as separate Semarize kits. Each call gets scored twice. After correlating scores with CRM outcomes in your warehouse, your custom framework correlates 35% more strongly with wins. But MEDDICC scores better on enterprise deals. You deploy both kits — custom for mid-market, MEDDICC for enterprise — and measure the impact quarterly with structured data, not opinions.
Learn more about Sales Coaching
Cross-Rep Talk Track Effectiveness
Data-Driven Enablement
What Semarize generates
Your 20-person sales team uses different talk tracks for the same product. Every call is recorded in Jiminny — but which talk track actually works? Pull all transcripts and run a talk track evaluation kit. Semarize identifies talk_track_variant (which pitch approach was used), prospect_engagement_response, objection_trigger_rate, and conversion_to_next_step. After scoring 600 calls, the data shows that the "pain-first" talk track has a 2.1x higher conversion than "feature-first" — but only for prospects with fewer than 500 employees. The team adopts segment-specific talk tracks backed by evidence.
Learn more about Data Science
Deal Handoff Quality Scoring
Handoff Continuity
What Semarize generates
When deals move from SDR to AE, critical context often gets lost. Run the last SDR call and first AE call through a handoff continuity kit. Semarize checks context_carried_forward (did the AE reference pain points from the SDR call?), qualification_gaps_addressed, duplicate_discovery_avoided, and prospect_experience_score. After scoring 150 handoffs, the data reveals that deals where the AE references the SDR’s pain discovery in the first 5 minutes close 45% faster. SDR-to-AE handoff templates get restructured.
Learn more about RevOps
Custom Objection Response Library Builder
Evidence-Backed Playbook Creation
What Semarize generates
A sales enablement manager vibe-codes a Supabase-backed app that runs every Jiminny transcript through an objection extraction kit. Semarize returns objection_type, rep_response_text, response_effectiveness_score, and whether the meeting advanced to next step. After 600 calls, the app has catalogued 340 real objection-response pairs with effectiveness scores. The team builds a data-backed objection playbook: the top-performing response to “budget constraint” converts 84% of the time vs. the current playbook response at 51%. Enablement replaces opinion-based playbooks with evidence-ranked responses.
Learn more about Sales Coaching
Watch out for
Common Challenges & Gotchas
These are the issues that come up most often when teams start extracting transcripts from Jiminny at scale.
Recording links expire after 24 hours
Media URLs returned by the Jiminny API are temporary. If your pipeline needs access to the audio or video, download and store the files within 24 hours. Transcript text remains accessible — only the media links expire.
API key management
Jiminny uses admin-generated API keys for authentication. If a key is rotated or revoked, all dependent integrations break. Track which systems use which key, and set up monitoring for auth failures.
Batch size considerations
The API performs best with batch requests of 500–1000 activities at a time. Requesting too many in a single call can lead to timeouts, while too few increases the number of round trips needed for a backfill.
Transcript processing delay
Jiminny processes recordings asynchronously. Attempting to fetch a transcript too soon after a call ends will return empty or incomplete data. Build in a delay or retry mechanism.
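A retry loop that waits for processing to finish might look like this. Treating an empty result as "not ready yet" is an assumption; the API may instead expose a status field, so adapt the check accordingly:

```python
# Sketch: poll for a transcript until it is non-empty, absorbing the
# asynchronous processing delay after a call ends.
import time


def fetch_when_ready(fetch, activity_id, attempts=6, delay=60, sleep=time.sleep):
    for attempt in range(attempts):
        transcript = fetch(activity_id)
        if transcript:  # non-empty result: processing is done
            return transcript
        if attempt < attempts - 1:
            sleep(delay)
    return None  # still not ready; caller decides whether to requeue
```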
Speaker label inconsistencies
Speaker identification isn't always perfect. Multiple participants, poor audio, or unregistered users can lead to misattributed utterances. Validate labels before using them for per-speaker analysis.
Pagination and cursor tracking
Activity listing endpoints return paginated results. Track your cursor position carefully — losing a cursor mid-backfill means re-scanning from the start or risking missed records.
Duplicate processing protection
Without idempotency checks, re-running an extraction flow can process the same call twice. Use activity IDs as deduplication keys to ensure each transcript is handled exactly once.
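A file-backed seen-set illustrates the idea; keying on activity ID guarantees each call is handled once, even across re-runs:

```python
# Sketch: skip activities already processed on a previous run.
# File-backed for illustration; use a database table in production.
from pathlib import Path


def process_new(activities, seen_file, handler):
    p = Path(seen_file)
    seen = set(p.read_text().split()) if p.exists() else set()
    for activity in activities:
        aid = str(activity["id"])
        if aid in seen:
            continue  # already processed on a previous run
        handler(activity)
        seen.add(aid)
    p.write_text("\n".join(sorted(seen)))
```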