
AI Training Progress

This dashboard provides insight into both your organisation's progress and your own personal progress in training the CallCoach AI.

Accessing: Click the AI Training Progress card on the Main Dashboard.

Availability: Accessible to all users (Administrators, Team Leads, and Agents).

Organisation Training Status

  • Current Level — shows your organisation's current AI training level (e.g., "Base Model", "Bespoke L1")
  • Progress to Next Milestone — a progress bar showing total feedback items towards the next training milestone
  • Milestone Tracker — visually displays key milestones and whether they've been achieved
  • Fine-Tuning Status — indicates whether a bespoke model has been trained and, if so, how many ratings it was trained on

Training Levels

Level        Ratings Required   Description
Base Model   –                  Initial CallCoach AI with general capabilities
Bespoke L1   ~100 ratings       First level of organisation-specific training
Bespoke L2   ~250 ratings       Deeper specialisation
Bespoke L3   ~500 ratings       Advanced bespoke training
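
To make the relationship between total ratings and these levels concrete, here is a minimal sketch of how the Current Level and Progress to Next Milestone figures could be derived. The thresholds in the table are approximate, and the exact calculation CallCoach uses is not documented here, so treat the values and the `training_status` function below as illustrative assumptions only.

```python
# Sketch only: assumes the approximate thresholds from the Training Levels table.
MILESTONES = [
    ("Base Model", 0),
    ("Bespoke L1", 100),
    ("Bespoke L2", 250),
    ("Bespoke L3", 500),
]

def training_status(total_ratings: int) -> dict:
    """Return the current level and progress towards the next milestone."""
    # Current level = highest milestone whose threshold has been reached.
    current_name, current_threshold = MILESTONES[0]
    next_milestone = None
    for name, threshold in MILESTONES:
        if total_ratings >= threshold:
            current_name, current_threshold = name, threshold
        elif next_milestone is None:
            next_milestone = (name, threshold)

    if next_milestone is None:
        # Final milestone reached; nothing further to progress towards.
        return {"level": current_name, "progress": 1.0, "next": None}

    next_name, next_threshold = next_milestone
    span = next_threshold - current_threshold
    progress = (total_ratings - current_threshold) / span
    return {"level": current_name, "progress": round(progress, 2), "next": next_name}

print(training_status(160))
# e.g. {'level': 'Bespoke L1', 'progress': 0.4, 'next': 'Bespoke L2'}
```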

Rating Balance & Distribution

  • Positive/Negative Ratio — shows the percentage of positive vs. negative ratings, with targets (aim for ~90% positive, ~10% negative)
  • Score Distribution — shows how positive ratings are distributed across different CallCoach report score ranges
  • Weekly Progress — number of ratings collected in the last 7 days
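
These figures are straightforward aggregates of the ratings your organisation submits. The sketch below shows one way they could be computed; the record fields, the 7-day window, and the tolerance band around the 90/10 target are assumptions for illustration rather than CallCoach's actual data model.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical rating records; field names are illustrative only.
ratings = [
    {"positive": True,  "submitted_at": now - timedelta(days=1)},
    {"positive": True,  "submitted_at": now - timedelta(days=3)},
    {"positive": False, "submitted_at": now - timedelta(days=12)},
]

positives = sum(1 for r in ratings if r["positive"])
positive_pct = 100 * positives / len(ratings)

# Compare against the ~90% positive / ~10% negative target.
on_target = 85 <= positive_pct <= 95  # the tolerance band here is an assumption
print(f"{positive_pct:.0f}% positive, {100 - positive_pct:.0f}% negative, on target: {on_target}")

# Weekly Progress: ratings submitted in the last 7 days.
weekly = sum(1 for r in ratings if r["submitted_at"] >= now - timedelta(days=7))
print(f"Ratings in the last 7 days: {weekly}")
```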

Why 90/10?

This ratio helps the AI learn effectively. Too many negative ratings without enough positive examples makes it hard for the AI to understand what "good" looks like. The target ratio provides balanced training data.

Team Feedback Distribution

A table showing how many ratings have been provided for reports from each team, including the positive/negative ratio and whether a team has an "Optimal Ratio" of feedback.

Note

It's important to ensure balanced feedback across all teams so the AI learns equally from all parts of your organisation.
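
As a rough illustration of how a row in this table could be derived, the sketch below groups hypothetical ratings by the team whose report was rated, then checks each team's share of all feedback and its positive percentage. The data shape and the definition of "Optimal Ratio" used here are assumptions; CallCoach's own criteria may differ.

```python
from collections import defaultdict

# Hypothetical ratings; each record notes which team's report was rated
# and whether the rating was positive. Field names are illustrative only.
ratings = [
    {"team": "Sales",   "positive": True},
    {"team": "Sales",   "positive": True},
    {"team": "Sales",   "positive": False},
    {"team": "Support", "positive": True},
]

by_team = defaultdict(list)
for r in ratings:
    by_team[r["team"]].append(r["positive"])

total_ratings = len(ratings)
for team, flags in sorted(by_team.items()):
    count = len(flags)
    positive_pct = 100 * sum(flags) / count
    share = 100 * count / total_ratings
    # "Optimal Ratio" is assumed here to mean roughly the 90/10 positive/negative target.
    optimal = 85 <= positive_pct <= 95
    print(f"{team}: {count} ratings ({share:.0f}% of all feedback), "
          f"{positive_pct:.0f}% positive, optimal ratio: {optimal}")
```

A team whose share of the total feedback is very small is a sign that its reports are being under-rated relative to the rest of the organisation.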

Personal Achievements

  • Earned Badges — badges unlocked by contributing feedback (e.g., for rating a certain number of reports, providing high-quality comments)
  • Badges to Unlock — badges you're working towards, with progress bars
  • Personal Rating Distribution — how your positive feedback is spread across low, medium, and high-scoring reports
  • Weekly Contributions — tracks your feedback consistency over recent weeks
  • Training Quality Insights — "Do's and Don'ts" for effective AI training

Rate Across Score Ranges

If you only give positive ratings to high-scoring reports, the AI may become biased. Rating across the spectrum helps create a fair and well-rounded AI.
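
As an illustration of how the Personal Rating Distribution could bucket your positive ratings, the sketch below uses assumed score boundaries (0-49, 50-79, 80-100); CallCoach's actual cut-offs may differ.

```python
# Hypothetical CallCoach report scores (0-100) for reports you rated positively.
positively_rated_scores = [92, 88, 45, 61, 77, 95]

# Assumed range boundaries for illustration only.
buckets = {"low (0-49)": 0, "medium (50-79)": 0, "high (80-100)": 0}
for score in positively_rated_scores:
    if score < 50:
        buckets["low (0-49)"] += 1
    elif score < 80:
        buckets["medium (50-79)"] += 1
    else:
        buckets["high (80-100)"] += 1

for bucket, count in buckets.items():
    print(f"{bucket}: {count}")
# A distribution skewed entirely towards "high" suggests you only rate high-scoring reports.
```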

Rating Targets

A configurable tracking system for monitoring rating coverage across teams:

  1. Target Mode — choose how targets are calculated:
    • Per Agent — track ratings for each agent's calls
    • Per Team — track total ratings per team
    • Per User — track how many ratings each user has provided
  2. Target Count — set the weekly target number (e.g., 2 ratings per week)
  3. Team/Role Filters — focus on specific teams or user roles
  4. Summary Tiles — quick overview of overall completion rate and targets met
  5. Weekly Completion Trends — a chart showing rating completion trends
  6. Team Breakdown Table — each team's target, actual ratings, progress bar, and status
  7. Team Drill-Down — click the details button to see agent-level rating coverage
  8. Save Configuration — click Save Config to store your preferred settings
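
To see how a weekly Target Count translates into the completion rate shown on the Summary Tiles, here is a minimal sketch of the Per User mode; the data shapes and names are assumptions for illustration.

```python
# Hypothetical weekly rating counts per dashboard user ("Per User" target mode).
ratings_this_week = {"alice": 3, "bob": 1, "carol": 0}

TARGET_PER_WEEK = 2  # e.g. the configured Target Count

met = sum(1 for count in ratings_this_week.values() if count >= TARGET_PER_WEEK)
completion_rate = 100 * met / len(ratings_this_week)

print(f"Targets met: {met}/{len(ratings_this_week)} ({completion_rate:.0f}%)")
for user, count in ratings_this_week.items():
    status = "on target" if count >= TARGET_PER_WEEK else "behind"
    print(f"  {user}: {count}/{TARGET_PER_WEEK} ratings this week ({status})")
```

Per Agent and Per Team work the same way, except the ratings are grouped by the agent whose calls were rated or by the team, rather than by the user who provided them.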

Common Questions

How are badges earned?

Badges are awarded for achieving feedback goals such as rating a specific number of reports, providing feedback consistently over several weeks, giving detailed comments, and rating reports across different score ranges.

My team isn't showing up in Team Feedback Distribution. Why?

This section tracks ratings given to reports originating from each team. If a team shows only a few ratings, it simply means that few reports from that team's calls/chats have been rated yet. Encourage users to rate reports from all teams.

What's the difference between Per Agent and Per User target modes?

"Per Agent" tracks how many of each agent's calls have been rated (regardless of who rated them). "Per User" tracks how many ratings each dashboard user has provided. Use "Per Agent" for call coverage quality assurance; use "Per User" to monitor individual contribution levels.

See Also