Safety Intelligence
February 24, 2026
By Patrick Songore, Founder, GangoAI

Why Personal Baselines Matter More Than Population Averages

If you compare every worker against a population average, you will consistently flag some people who are perfectly fine and consistently miss others who are genuinely impaired. The reason is simple: people are not average. Any system that treats them as if they are will produce unreliable results.

The Average Person Does Not Exist

Consider two workers arriving for a shift. One is naturally slow-moving, deliberate, and measured in everything they do. The other is energetic, fast-paced, and restless by nature. A system that compares both against a population average will see the first worker as potentially impaired on every single shift - because their normal behaviour looks like fatigue when measured against the mean. Meanwhile, the second worker could be significantly impaired and still appear normal because their degraded state is above the population average.

This is not a theoretical problem. It is the fundamental flaw in any monitoring system that relies on population-level benchmarks to make individual safety decisions.

What a Personal Baseline Changes

A personal baseline compares each person to themselves. It learns what is normal for that specific individual - not what is normal for the average human. When something deviates from that individual's own pattern, the system detects the change. Not because the person looks different from everyone else, but because they look different from themselves.
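The idea can be made concrete with a small sketch. The code below is illustrative only, not any vendor's actual method: it scores a new observation as a z-score against that individual's own history, so a naturally slow worker at their usual pace scores near zero, while a naturally fast worker who has slowed down scores far from zero even though they remain above a hypothetical population mean.

```python
from statistics import mean, stdev

def deviation_from_baseline(history, observation):
    """Score how far an observation sits from a person's own baseline.

    `history` is that individual's past measurements under normal
    conditions (e.g. movement speed per shift). The score is a z-score
    against their personal mean and spread, not a population average.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0
    return (observation - mu) / sigma

# A naturally slow worker: a speed of 0.60 is normal *for them*, so no
# flag, even though it sits well below a notional population mean of 1.0.
slow_worker = [0.58, 0.62, 0.60, 0.59, 0.61]
print(deviation_from_baseline(slow_worker, 0.60))   # ~0: no change

# A naturally fast worker who has slowed: 0.90 is still above the
# population mean, but far below their own baseline, so it flags.
fast_worker = [1.40, 1.38, 1.42, 1.41, 1.39]
print(deviation_from_baseline(fast_worker, 0.90))   # large negative
```

The same reading (0.90) that a population benchmark would wave through is an extreme deviation once the comparison point is the individual's own history.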

This is how experienced supervisors have always worked. A good manager who has worked with someone for years knows when something is off. They are not comparing that person to a textbook definition. They are comparing them to what they know is normal for that person. The challenge is that this intuition does not scale. A supervisor managing 40 people across rotating shifts cannot maintain that level of individual awareness for every person on every day.

A personal baseline system can.

The Fairness Problem

Population-based systems introduce a fairness issue that is rarely discussed. People with disabilities, older workers, people with certain medical conditions, and people who simply move differently from the statistical norm will be disproportionately flagged. Not because they are impaired, but because their natural baseline sits outside the population average.

This creates a system that effectively discriminates against natural human variation. A worker who uses a mobility aid, has a limp from an old injury, or simply moves more slowly than average will accumulate false flags. Over time, this erodes trust in the system and creates legitimate grievances - particularly in environments with union representation.

A personal baseline eliminates this problem entirely. It does not matter if someone naturally moves slowly, quickly, or differently. The system only flags when that person's pattern changes from their own established norm. The worker with the old knee injury is compared to their own baseline, which already accounts for how they normally move.

Building the Baseline

A common question about personal baselines is how long they take to establish. The answer depends on the system, but the principle is consistent: the system needs to observe someone across enough normal conditions to establish what their individual pattern looks like. This means seeing them on day shifts and night shifts, after a weekend, and after consecutive working days.

Once established, the baseline is not static. It adapts over time as the individual changes - whether through fitness, ageing, recovery from injury, or any other gradual shift. A system that locks a baseline at day one and never updates it will become inaccurate over time. A system that continuously refines the baseline maintains its accuracy as the person evolves.
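One plausible way to get this behaviour, sketched below purely for illustration, is an exponentially weighted moving average: each new normal-condition reading nudges the baseline slightly, so gradual shifts such as fitness, ageing, or recovery are absorbed over time, while a sudden deviation still stands out against the slowly moving estimate.

```python
class AdaptiveBaseline:
    """Personal baseline that adapts slowly as the individual changes.

    Illustrative sketch using an exponentially weighted moving average,
    not a specific product's method. A small alpha means slow, stable
    adaptation: gradual drift is absorbed, sudden change is not.
    """
    def __init__(self, initial, alpha=0.05):
        self.value = initial     # current baseline estimate
        self.alpha = alpha       # weight given to each new reading

    def update(self, reading):
        self.value += self.alpha * (reading - self.value)
        return self.value

b = AdaptiveBaseline(initial=1.0)
for reading in [1.02, 1.01, 1.03]:   # gradual drift in normal readings
    b.update(reading)
print(round(b.value, 4))             # baseline has crept up slightly
```

A baseline locked at day one would still read 1.0 years later; this one tracks the person as they change, which is what keeps the comparison honest.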

The Operational Difference

From an operational perspective, the difference between population and personal baselines shows up in two metrics that matter most: false positive rate and missed detection rate.

A high false positive rate means supervisors learn to ignore the system. If the technology cries wolf often enough, the response becomes routine dismissal rather than genuine investigation. This is worse than having no system at all, because it creates documented evidence that warnings were raised and ignored - a liability nightmare in the event of an incident.

A high missed detection rate means genuinely impaired workers pass through uncaught. The system provides false assurance - the organisation believes it has safety monitoring in place, but the monitoring is not catching the cases that matter.

Personal baselines address both. Fewer false positives because the system understands individual variation. Fewer missed detections because it recognises when an individual deviates from their own norm, even if that deviation would be invisible against a population average.
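These two metrics are simple to compute once you have system flags alongside ground truth. The sketch below is a minimal illustration of the definitions (the variable names and data are hypothetical); a real evaluation would also break the rates down by demographic, as suggested later in this piece.

```python
def rates(flags, truth):
    """Return (false_positive_rate, missed_detection_rate).

    `flags`: system decisions (True = flagged as impaired).
    `truth`: ground truth (True = actually impaired).
    FPR = flagged-but-fine / all fine; FNR = missed / all impaired.
    """
    fp = sum(f and not t for f, t in zip(flags, truth))
    fn = sum(t and not f for f, t in zip(flags, truth))
    negatives = sum(not t for t in truth)
    positives = sum(t for t in truth)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical shift data: one false alarm, one missed impairment.
flags = [True, False, False, True, False]
truth = [True, False, True,  False, False]
print(rates(flags, truth))   # 1 of 3 fine workers flagged; 1 of 2 missed
```

The argument above is that personal baselines push both numbers down at once, whereas population benchmarks tend to trade one against the other.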

The Question to Ask

If you are evaluating safety monitoring technology, the question is not "does it detect fatigue?" Almost every provider will say yes. The question is: who is it comparing each worker against? If the answer is a model trained on population data, ask what happens to the workers who fall outside the average. Ask for the false positive rate broken down by demographic. Ask whether the system can distinguish between someone who is naturally slow and someone who has become slower.

If the system compares each person to themselves, those questions answer themselves.

Individual, Not Average

Every person has their own baseline. Every flag is a deviation from what is normal for that individual. Not a guess based on what is normal for everyone else.

Supported by

Innovate UK · NVIDIA Inception Program · Tech South West