
Velaro Metrics Methodology — How Numbers Are Calculated


Why do numbers sometimes differ between reports?

You may notice that a number on the Live Dashboard does not match the same-sounding number in a historical report. This is intentional, not a bug. Different reports measure different things, and understanding the distinction helps you use each report correctly.

The short version: Live Dashboard = what is happening right now for customers waiting on a human agent. Historical reports = everything that happened over a time period, including bots, email, and tickets.

---

Live Dashboard — what is included and excluded

The Live Dashboard is designed to answer one question: how long are customers waiting for a human agent right now? Every metric is tuned to reflect that reality accurately.

Queue wait time (Longest Wait, Avg Wait)

What IS counted:

  • Conversations that are open, unassigned, not handled by a bot, and on a real-time channel (web chat, SMS, WhatsApp, Facebook, etc.)
  • Only first-time queue entries — conversations that have never been accepted by an agent before

What is NOT counted and why:

| Excluded | Reason |
|---|---|
| Email and Ticket conversations | Email is async — a customer is not sitting watching a timer. Including email would inflate "wait time" with tickets that are hours or days old by design. |
| Bot-active conversations | If a bot is handling the conversation, the customer is not waiting for a human. Counting bot time as wait time would be misleading — bots respond instantly. |
| Re-queued conversations | If a conversation was previously accepted by an agent and then re-queued (e.g., the agent transferred it back), the exact re-queue timestamp is not known. Including these would produce inaccurate averages. |
| Conversations waiting longer than 60 minutes | Orphaned or forgotten conversations that were never cleaned up would otherwise skew the maximum wait time dramatically. A 60-minute cap reflects the practical SLA window for any real-time channel. |

Which timestamp is used:

The dashboard uses QueuedAt — the moment the conversation entered the human queue — rather than StartTimestamp (when the conversation first opened). This matters when a bot handled the conversation first: the bot may have spent 3 minutes collecting information before routing to a human. That 3 minutes is bot time, not human wait time.
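The timestamp choice and the 60-minute exclusion can be sketched as a single function. This is an illustrative sketch, not the actual implementation — the field names (`queued_at`, `start_timestamp`) are assumed stand-ins for QueuedAt and StartTimestamp:

```python
from datetime import datetime, timedelta, timezone

# Dashboard rule: conversations waiting longer than this are excluded entirely.
WAIT_CAP = timedelta(minutes=60)

def live_wait(conversation, now):
    """Wait shown on the Live Dashboard for one queued conversation.

    Prefers queued_at (entry into the human queue) so that time a bot
    spent before handoff is not counted as human wait; falls back to
    the conversation open time if queued_at is missing.
    """
    queued_at = conversation.get("queued_at") or conversation["start_timestamp"]
    wait = now - queued_at
    return None if wait > WAIT_CAP else wait  # None = excluded from the dashboard

# A bot handled the first 3 minutes before routing to a human:
now = datetime(2024, 1, 1, 12, 10, tzinfo=timezone.utc)
conv = {
    "start_timestamp": datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    "queued_at": datetime(2024, 1, 1, 12, 3, tzinfo=timezone.utc),
}
live_wait(conv, now)  # 7 minutes of human wait, not 10
```

Using StartTimestamp instead would report 10 minutes — the 3-minute difference is exactly the bot handling time the dashboard is designed to exclude.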

Open Conversations

All open conversations for the site, including bot-active and async channels. This is a broader count than the queue — a conversation can be open without anyone waiting (e.g., a bot is handling it, or an agent is actively chatting).

In Queue

Only the conversations currently waiting for a human agent: open + unassigned + no active bot + synchronous channel. This is the number that matters for staffing decisions right now.

Bot Active

Conversations currently being handled exclusively by a bot — open, no human agent assigned. These are NOT in the queue because the bot is serving the customer.
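The two definitions above are subset filters over the same open-conversation set. A minimal sketch, assuming illustrative field and channel names (`is_open`, `assigned_user_id`, `bot_active`, `channel` are not confirmed identifiers):

```python
# Illustrative channel names; real channel identifiers may differ.
SYNC_CHANNELS = {"web_chat", "sms", "whatsapp", "facebook"}

def is_bot_active(c):
    """Bot Active: open, no human agent assigned, bot currently handling."""
    return c["is_open"] and c["assigned_user_id"] is None and c["bot_active"]

def is_in_queue(c):
    """In Queue: open + unassigned + no active bot + synchronous channel."""
    return (
        c["is_open"]
        and c["assigned_user_id"] is None
        and not c["bot_active"]
        and c["channel"] in SYNC_CHANNELS
    )
```

A conversation satisfies at most one of these: the `bot_active` flag that puts it in Bot Active is the same flag that keeps it out of the queue.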

Resolved Today / Missed & Abandoned Today

Counts conversations that ended today (after midnight UTC). These reset each day at midnight UTC. If your team operates across time zones, the "today" boundary may not align with your local business day — use the historical reports for timezone-aware analysis.

Avg Handle Time / Avg First Response

Calculated only from conversations resolved today — not from all open conversations. Open conversations have not ended yet, so handle time cannot be computed for them. These numbers will be 0 early in the day if no conversations have resolved yet.
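The "resolved today only" rule can be sketched as follows — field names are assumed for illustration, and handle time here follows the Service Level report definition (first agent response to conversation end):

```python
from datetime import datetime, timezone

def avg_handle_seconds(conversations, midnight_utc):
    """Average handle time over conversations resolved since midnight UTC.

    Open conversations (no resolved_at yet) are skipped: their handle
    time cannot be computed until they end.
    """
    resolved = [
        c for c in conversations
        if c.get("resolved_at") and c["resolved_at"] >= midnight_utc
    ]
    if not resolved:
        return 0  # early in the day, before anything has resolved
    total = sum(
        (c["resolved_at"] - c["first_response_at"]).total_seconds()
        for c in resolved
    )
    return total / len(resolved)

midnight = datetime(2024, 1, 1, tzinfo=timezone.utc)
convs = [
    {"first_response_at": datetime(2024, 1, 1, 9, 50, tzinfo=timezone.utc),
     "resolved_at": datetime(2024, 1, 1, 10, 0, tzinfo=timezone.utc)},
    {"first_response_at": datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc),
     "resolved_at": None},  # still open: skipped
]
avg_handle_seconds(convs, midnight)  # 600.0 — only the resolved conversation counts
```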

---

Historical Reports — what they measure

Historical reports in the Reports section cover all conversations in the selected date range, including:

  • Async channels (email, tickets)
  • Bot conversations (containment, handoff rates)
  • Re-queued and transferred conversations
  • All time zones (you select your timezone in the report filter)

First Response Time in reports = time from StartTimestamp to FirstResponseTimestamp. This includes the time a bot spent before handoff. This is intentional in historical reports — it reflects the total experience from the customer's perspective.

Wait Time in reports = same as First Response Time in the historical view. It does not use QueuedAt because historical analysis benefits from the full picture, not the filtered live-queue view.

This means: Live Dashboard wait times will almost always be lower than historical wait times for accounts with active bots, because the dashboard excludes bot handling time and the historical report includes it. Neither is wrong — they answer different questions.
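The two definitions side by side make the gap concrete. A sketch with assumed field names, where the difference between the two figures is exactly the bot handling time:

```python
from datetime import datetime, timezone

def dashboard_wait_seconds(c, now):
    # Live Dashboard: measured from entry into the human queue.
    return (now - c["queued_at"]).total_seconds()

def report_wait_seconds(c):
    # Historical report: conversation open to first agent reply,
    # including any bot handling time before handoff.
    return (c["first_response_at"] - c["start_timestamp"]).total_seconds()

# A bot spent 4 minutes before handing off; the agent replied 2 minutes later.
conv = {
    "start_timestamp": datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    "queued_at": datetime(2024, 1, 1, 12, 4, tzinfo=timezone.utc),
    "first_response_at": datetime(2024, 1, 1, 12, 6, tzinfo=timezone.utc),
}
report_wait_seconds(conv)  # 360 — the customer's full 6-minute experience
dashboard_wait_seconds(conv, conv["first_response_at"])  # 120 — human queue time only
```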

---

Why the Live Dashboard and the Reports page show different totals

| Metric | Live Dashboard | Historical Report |
|---|---|---|
| Wait time start | QueuedAt (entered human queue) | StartTimestamp (conversation opened) |
| Includes email / tickets | No | Yes |
| Includes bot-only conversations | No (bot active) | Yes |
| Re-queued conversations | Excluded | Included |
| Timezone | UTC (resets midnight UTC) | Your selected timezone |
| Cap | 60 min max per conversation | No cap |
| "Today" scope | Rolling since midnight UTC | Your date range |

---

Common questions

"The live dashboard shows 2-minute avg wait but our report shows 8-minute avg wait — which is correct?"

Both are correct. The 2-minute figure excludes bot handling time, async channels, and re-queued conversations. The 8-minute figure in the historical report includes all of those. If your bots typically spend 4-6 minutes before routing to a human, that difference is expected and healthy — it means your bots are doing their job.

"We resolved 40 conversations today but the dashboard shows 40 resolved and the report shows 55."

The dashboard counts resolutions since midnight UTC. The historical report uses your local timezone and may include conversations that resolved in what is "yesterday" for UTC but "today" for your team. Use the historical report for any formal reporting.
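The day-boundary effect is easy to see with one timestamp. For example, a resolution at 02:00 UTC lands in different days depending on the timezone used (UTC-5 here is just an illustrative offset):

```python
from datetime import datetime, timedelta, timezone

resolved_at = datetime(2024, 3, 6, 2, 0, tzinfo=timezone.utc)
local_tz = timezone(timedelta(hours=-5))  # e.g. a team five hours behind UTC

resolved_at.date()                       # March 6 — the dashboard's UTC "today"
resolved_at.astimezone(local_tz).date()  # March 5 — the report's local business day
```

The same conversation counts toward different days in the two views, which is where the 40-vs-55 style discrepancies come from.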

"Bot Active is 12 but Open Conversations is 18 — where are the other 6?"

The other 6 are assigned to human agents and actively in conversation. Open = all open conversations. Bot Active = subset with no human assigned. In Queue = subset waiting for a human. Assigned = open with a human agent.

"A team shows 0 avg wait but has unassigned conversations — is something wrong?"

Avg wait only counts first-time queue entries on synchronous channels. If the unassigned conversations are email/ticket, or if they were previously accepted and re-queued, they are excluded from the wait calculation. Check the channel type in the conversation list to confirm.

---

Per-report methodology notes

Service Level report

Queue wait time: Uses QueuedAt — the moment the conversation entered the human queue — rather than StartTimestamp. This removes bot and workflow handling time from the wait figure. For conversations created before QueuedAt was tracked, StartTimestamp is used as a fallback.

Service Level %: Reported two ways:

  • Including misses: withinSla ÷ total × 100 — standard contact-centre SL, penalises missed conversations.
  • Answered only: withinSla ÷ serviced × 100 — excludes abandoned/missed from the denominator. Use this if your SLA commitment is "of the chats we answered, X% in under Y seconds."

Neither is wrong — they answer different questions. Industry standard (telecom X of Y) typically uses the "including misses" figure.
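Both formulas can be expressed in one small function; the only difference is the denominator:

```python
def service_level(within_sla, serviced, missed, include_misses=True):
    """Service Level % two ways, per the definitions above.

    include_misses=True  -> withinSla / total    (standard, penalises misses)
    include_misses=False -> withinSla / serviced (answered only)
    """
    denom = serviced + missed if include_misses else serviced
    return 0.0 if denom == 0 else within_sla / denom * 100

# 80 answered within SLA, 90 answered in total, 10 missed/abandoned:
service_level(80, 90, 10)                        # 80.0 — including misses
service_level(80, 90, 10, include_misses=False)  # ~88.9 — answered only
```

The "including misses" figure is always the lower (or equal) of the two, since its denominator is never smaller.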

Avg handle time: Time from first agent response to conversation end — agent active time only. Does not include the wait before an agent joined.

What is excluded: Test activations (IsActivationDeleted), demo conversations (IsDemoData).

Response Time report

Uses FirstResponseTimestamp - StartTimestamp. This includes the full time from conversation open to first agent reply — including bot handling time if a bot greeted the customer first. This is intentional for historical reporting (it reflects total customer wait experience). The Live Dashboard wait time is lower because it uses QueuedAt.

Missed Chats report

Counts only conversations with status Missed or Abandoned. Excludes test and demo data. The analysis cross-references missed conversations against hourly agent status snapshots to identify root causes (no agents online, all busy, out of schedule, etc.).

Agent Performance report

Only counts conversations with a human agent assigned (AssignedUserId is not null). Bot-only conversations are excluded. Excludes test and demo data.

Sentiment report

Only includes conversations where a sentiment score has been computed and stored. Conversations without a sentiment score are silently excluded from averages — they do not count as "neutral."

Campaign / Email report

Unsubscribe count per campaign is scoped to unsubscribes that occurred after the campaign's send date — not the total lifetime unsubscribe list. This means the per-campaign unsubscribe count reflects attribution to that campaign, not a historical total.
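The attribution rule amounts to a date filter. A sketch with assumed field names (`unsubscribed_at`, and a hypothetical `email` key for the records):

```python
from datetime import datetime

def campaign_unsubscribes(unsubscribes, campaign_sent_at):
    """Unsubscribes attributed to one campaign: only those recorded after
    the campaign's send date, not the lifetime unsubscribe list."""
    return [u for u in unsubscribes if u["unsubscribed_at"] > campaign_sent_at]

sent_at = datetime(2024, 1, 10)
unsubs = [
    {"email": "a@example.com", "unsubscribed_at": datetime(2024, 1, 5)},   # before send: not attributed
    {"email": "b@example.com", "unsubscribed_at": datetime(2024, 1, 12)},  # after send: attributed
]
campaign_unsubscribes(unsubs, sent_at)  # one unsubscribe attributed to this campaign
```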

Ticket report

Dates are based on StartTimestamp (ticket creation time). Resolution time = ResolvedAt - StartTimestamp. Excludes test and demo data.

Call and IVR reports

Duration = ring time + talk time combined. Excludes test and demo data.

Schedule Adherence report

Adherence is calculated from Available status only. Time spent Busy (in chats at capacity) or Away is counted as non-adherent even if the agent is actively working. This reflects the standard definition: adherence = being available for new chats when scheduled.
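A minimal sketch of that definition, assuming status intervals are available as (status, seconds) pairs — an illustrative shape, not the stored format:

```python
def adherence_pct(status_intervals, scheduled_seconds):
    """Adherence = time in Available status while scheduled / scheduled time.

    Busy and Away intervals count as non-adherent even if the agent
    is actively working, per the standard definition.
    """
    available = sum(
        seconds for status, seconds in status_intervals if status == "Available"
    )
    return 0.0 if scheduled_seconds == 0 else available / scheduled_seconds * 100

# Scheduled for 2 hours; 90 min Available, 30 min Busy handling chats:
adherence_pct([("Available", 5400), ("Busy", 1800)], 7200)  # 75.0
```

Note the Busy half hour drags adherence down to 75% even though the agent was working the whole time — which is why adherence should be read alongside occupancy, not in isolation.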

---

What "test data" and "demo data" mean

  • IsActivationDeleted: Conversations from trial or test activations that have since been deactivated. These are real conversations but from accounts that are no longer active.
  • IsDemoData: Synthetic demonstration conversations injected during onboarding. They are not real customer interactions.

All historical reports and the Live Dashboard exclude both categories. If you see a discrepancy and suspect test data, you can verify by checking the conversation list and filtering for the relevant time range.
