Semarize

Bricks

Modular semantic
evaluation units

A Brick evaluates one clearly defined concept inside a conversation and returns structured data. It doesn't summarise - it extracts.

How bricks work

One check.
One structured output.

Each Brick uses LLM semantic reasoning against a defined rubric. It returns a typed, deterministic value - not a paragraph.

A Brick does

Evaluate one semantic concept per conversation
Apply your defined rubric or evaluation logic
Return deterministic, typed output (boolean, score, enum, string_list)
Attach evidence spans showing what text supported the result
Run consistently across calls, emails, and chats
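The properties above can be sketched as a single structured result. The field names here (`brick`, `output_type`, `value`, `evidence`) are illustrative assumptions for this sketch, not Semarize's actual schema:

```python
import json

# Hypothetical shape of one Brick's result: one typed value plus the
# evidence spans that supported it. All field names are illustrative only.
brick_result = {
    "brick": "next_step_confirmed",
    "output_type": "boolean",
    "value": True,
    "evidence": [
        {
            "start": 1042,
            "end": 1101,
            "text": "Let's lock in the security review for Thursday.",
        }
    ],
}

# Typed and deterministic: downstream code branches on the value directly
# instead of parsing a prose summary.
if brick_result["value"]:
    confirmed_span = brick_result["evidence"][0]["text"]

# JSON-serialisable for storage, querying, and automation.
payload = json.dumps(brick_result)
```

The point of the sketch is the shape, not the names: one check, one typed value, plus the spans that justify it.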

A Brick does not

Summarise conversations in prose
Produce narrative explanations
Score an entire call with one number
Require audio or video input
Need ML engineering to create

Output types

Four output types.
All machine-readable.

Every Brick returns one of four typed outputs. All are JSON-serialisable, queryable, and suitable for automation.

Boolean

True or false. Was this thing present or not?

next_step_confirmed: true
budget_discussed: false
legal_blocker_flag: true

Numeric / Score

A raw number or a 0–100 score. How much of this thing happened?

discovery_score: 0.64
objection_handling_score: 72
open_questions_asked: 6

Categorical / Enum

A classification from a defined set. What kind of thing happened?

risk_level: "high"
objection_type: "pricing"
sentiment: "cautious"

String List

An ordered array of strings. Which items were detected?

competitors_mentioned: ["Gong", "Chorus"]
stakeholders_named: ["CFO", "Legal"]
product_gaps_mentioned: ["SSO", "Rate limits"]
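Taken together, one conversation's Brick outputs form a flat, queryable record. A minimal sketch, reusing example values from above (the record layout itself is an assumption):

```python
import json

# One conversation's Brick outputs, mixing all four output types.
results = {
    "next_step_confirmed": True,                  # boolean
    "objection_handling_score": 72,               # numeric / score
    "risk_level": "high",                         # categorical / enum
    "competitors_mentioned": ["Gong", "Chorus"],  # string list
}

# Because every value is typed, queries are plain comparisons,
# not text matching against a narrative summary.
flagged = results["risk_level"] == "high"

# All four types are JSON-serialisable for downstream automation.
serialised = json.dumps(results)
```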

Examples

A library of semantic checks

Bricks can evaluate anything expressible as a semantic check. Here are examples across common use cases.

next_step_confirmed (boolean) - Confirms a clear next action and owner were agreed - e.g. true
stakeholders_identified (score 0–100) - Detects roles and decision makers mentioned - e.g. 82
pain_is_specific (score 0–100) - Checks for measurable pain, not vague interest - e.g. 64
timeline_mentioned (string_list) - Finds timing signals and urgency cues - e.g. ["Q2 2026"]
pricing_discussed (boolean + evidence) - Detects pricing talk and captures context - e.g. true
objections_raised (string_list) - Identifies objection themes mentioned - e.g. ["security", "pricing"]
competitor_mentioned (string_list) - Identifies competitor names mentioned - e.g. ["Gusto"]
budget_confirmed (boolean) - Detects explicit budget confirmation or commitment - e.g. false
decision_maker_present (boolean) - Checks whether the economic buyer was on the call - e.g. true
meddicc_decision_criteria (score 0–100) - Evaluates whether decision criteria were explored - e.g. 45
talk_ratio (numeric) - Calculates speaker balance between rep and prospect - e.g. 0.62
agenda_set (boolean) - Checks if the rep set an agenda within the first 5 minutes - e.g. false

Why bricks

Lego, not concrete.

Traditional systems bundle evaluation into one monolithic score. Bricks preserve nuance by breaking evaluation into composable, measurable primitives.

Reusable across Kits

Build a Brick once, use it in multiple Kits. next_step_confirmed works in Discovery, Forecast Risk, and Deal Hygiene Kits.

Independently versioned

Update a Brick's rubric without touching others. A/B test evaluation logic. Freeze versions for compliance.

Shareable across teams

RevOps, Enablement, and Analytics can share the same Bricks while assembling different Kits for their workflows.
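The reuse model above can be sketched as Kits referencing shared Bricks by name and pinned version. The data layout and version strings here are assumptions for illustration:

```python
# Hypothetical: a shared Brick library, each Brick independently versioned
# so one Kit can pin a rubric while another A/B tests a newer one.
bricks = {
    "next_step_confirmed": {"version": "1.2", "output_type": "boolean"},
    "budget_confirmed": {"version": "1.0", "output_type": "boolean"},
    "risk_level": {"version": "2.0", "output_type": "enum"},
}

# Kits are just named lists of Brick references; nothing is redefined.
kits = {
    "Discovery": ["next_step_confirmed", "budget_confirmed"],
    "Forecast Risk": ["next_step_confirmed", "risk_level"],
}

# The same Brick serves both Kits: build once, reuse everywhere.
shared = set(kits["Discovery"]) & set(kits["Forecast Risk"])
```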

Traditional call scoring

"Communication and Engagement: 75"
Weighted scorecard with narrative explanations
Optimised for human review
Collapses nuance into one metric

Semarize Bricks

interruption_count: 4
agenda_set_within_5_minutes: false
open_questions_asked: 6
talk_ratio: 0.62

Build your first Brick.
Get structured signals back.

No ML engineering needed. Define what you want to evaluate, and Semarize handles the rest.
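Defining a Brick could look like the following sketch: a rubric expressed as plain configuration, with the semantic reasoning delegated to the platform at evaluation time. The field names and config shape are assumptions for illustration, not Semarize's actual authoring format:

```python
# Hypothetical Brick definition: the rubric is plain text, the output is
# typed, and no model training or ML engineering is involved.
brick_definition = {
    "name": "agenda_set",
    "output_type": "boolean",
    "rubric": (
        "Return true only if the rep explicitly states an agenda "
        "within the first 5 minutes of the conversation."
    ),
    "evidence": True,  # attach the transcript spans that supported the result
}
```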