The Problem with Global Thresholds
Most solar monitoring systems flag anomalies by comparing each inverter against a fixed threshold — if output drops below X%, raise an alert. Simple, but broken.
The issue: solar output is heavily influenced by local conditions. Cloud cover, shading, soiling, temperature — these vary across a rooftop. An inverter producing 80% of its rated capacity might be perfectly fine if its neighbors are also at 80%.
The Peer-Relative Approach
Instead of comparing against a global baseline, compare each inverter against its spatial neighbors — inverters with similar orientation, tilt, and shading profile.
def peer_relative_score(inverter_id, df):
    # Median output of spatially similar inverters (same orientation,
    # tilt, and shading profile).
    peers = get_peers(inverter_id, df)
    peer_median = df[df['id'].isin(peers)]['output'].median()
    # A ratio near 1.0 means the inverter tracks its peers.
    return df[df['id'] == inverter_id]['output'].values[0] / peer_median
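The `get_peers` helper is left undefined above. Here is one minimal sketch, assuming each row carries hypothetical `orientation` and `tilt` columns, and treating inverters with the same orientation and a tilt within a small tolerance as peers (a real deployment would also fold in the shading profile):

```python
import pandas as pd

def get_peers(inverter_id, df, tilt_tol=5.0):
    """Return IDs of inverters with the same orientation and similar tilt.

    Illustrative peer-selection rule only; column names and the tilt
    tolerance are assumptions, not the production logic.
    """
    ref = df[df['id'] == inverter_id].iloc[0]
    mask = (
        (df['id'] != inverter_id)
        & (df['orientation'] == ref['orientation'])
        & ((df['tilt'] - ref['tilt']).abs() <= tilt_tol)
    )
    return df.loc[mask, 'id'].tolist()
```

Precomputing the peer sets once per site layout, rather than per call, is the obvious optimization, since orientation and tilt rarely change.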
An inverter scoring below 0.85 relative to its peers is worth investigating. Conversely, one producing 0.95 of rated capacity (comfortably clearing a global threshold) while its peers produce 1.10 sits at roughly 0.86 relative to them: silently underperforming.
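To make the contrast concrete, here is a toy comparison of the two alerting rules. All IDs, thresholds, and output values are illustrative, not real site data, and the whole snapshot is treated as a single peer group for brevity:

```python
import pandas as pd

GLOBAL_THRESHOLD = 0.90  # fraction of rated capacity
PEER_THRESHOLD = 0.85    # fraction of peer-group median

# Toy snapshot: every inverter clears the global bar,
# but 'inv_7' lags its row-mates.
df = pd.DataFrame({
    'id': ['inv_5', 'inv_6', 'inv_7', 'inv_8'],
    'output': [1.10, 1.12, 0.92, 1.09],  # fraction of rated capacity
})

# Global rule: nothing is flagged.
global_alerts = df[df['output'] < GLOBAL_THRESHOLD]['id'].tolist()

# Peer-relative rule: divide by the group median (self included,
# which is fine for groups of more than a few inverters).
peer_median = df['output'].median()
df['peer_score'] = df['output'] / peer_median
peer_alerts = df[df['peer_score'] < PEER_THRESHOLD]['id'].tolist()
```

Here `inv_7` passes the global check at 0.92 but scores about 0.84 against the group median of 1.095, so only the peer-relative rule raises an alert.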
Results at NuriFlex
After deploying this approach across our monitored sites, we reduced false positive alerts by roughly 60% and caught three real underperformance cases in the first month that the old system had missed for weeks.
The key insight: anomaly is relative, not absolute.