Monitoring democratic institutions through public records


Methodology

Overview

Democracy Monitor is an open-source system that tracks signs of executive-power centralization across U.S. government institutions. It reads publicly available government documents — federal regulations, court filings, press releases, legislative reports — and uses AI content assessment as its primary detection method, supported by three descriptive context methods, to identify when institutional norms may be shifting.

The system is designed to surface patterns worth human examination, not render definitive judgments. All assessments trace to specific documents, reproducible metrics, and published thresholds.

Detection Architecture

Democracy Monitor uses one active detection method (AI document review) that drives concern status, plus three descriptive context methods that provide narrative grounding without influencing the status determination.

AI Content Assessment (sole active detection) — Two-pass AI review using different providers (OpenAI for screening, Anthropic for detailed review) so that no single provider's judgment determines the outcome. Both passes receive up to 8,000 characters of boilerplate-stripped content. Pass 2 also receives week-level context, including peer document titles and the flag-rate trajectory. Documents are classified on a scale from routine to clearly concerning.
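The two-pass flow above can be sketched as follows. This is a hypothetical illustration, not the project's actual code: the provider clients are abstracted as plain callables, and the `strip_boilerplate` heuristic and label names are assumptions.

```python
from dataclasses import dataclass

MAX_CHARS = 8_000  # both passes see at most this much stripped content


@dataclass
class Assessment:
    label: str       # assumed scale, e.g. "routine" ... "clearly_concerning"
    rationale: str


def strip_boilerplate(text: str) -> str:
    # Placeholder heuristic: drop blank lines; the real stripper
    # presumably removes headers, footers, and repeated fragments.
    return "\n".join(ln for ln in text.splitlines() if ln.strip())


def two_pass_review(document: str, week_context: dict,
                    screen, review) -> Assessment:
    """screen/review are callables wrapping the two providers."""
    content = strip_boilerplate(document)[:MAX_CHARS]
    first = screen(content)                  # Pass 1: cheap screening
    if first.label == "routine":
        return first                         # no escalation needed
    # Pass 2 sees the same content plus week-level context
    # (peer titles, flag-rate trajectory) for corroboration.
    return review(content, week_context)
```

Escalating only non-routine documents keeps the more detailed (and costlier) Pass 2 focused on the small fraction of content that needs corroboration.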

Silence Detection (descriptive only) — Measures whether government-controlled sources have gone unusually quiet while independent-branch sources (courts, Congress) remain active. Uses an 8-week intra-administration rolling window. Provides narrative context but does not drive concern status.
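A minimal sketch of the silence check, under assumed metrics: weekly document counts per source group, the 8-week rolling baseline from the text, and an illustrative "quiet" threshold. Function names and the 25% ratio are assumptions.

```python
from statistics import mean

WINDOW = 8  # intra-administration rolling window, in weeks


def is_unusually_quiet(weekly_counts: list[int],
                       quiet_ratio: float = 0.25) -> bool:
    """True if the latest week falls well below the rolling baseline."""
    if len(weekly_counts) <= WINDOW:
        return False  # not enough history to judge
    baseline = mean(weekly_counts[-WINDOW - 1:-1])
    return baseline > 0 and weekly_counts[-1] < quiet_ratio * baseline


def silence_signal(gov_counts: list[int],
                   independent_counts: list[int]) -> bool:
    """Flag only when government-controlled sources go quiet while
    independent-branch sources (courts, Congress) remain active."""
    return (is_unusually_quiet(gov_counts)
            and not is_unusually_quiet(independent_counts))
```

Requiring the independent branches to stay active distinguishes a government source going quiet from a slow news week affecting everything at once.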

Structural Anomaly (descriptive only) — Deterministic, metadata-only analysis across six dimensions: volume, type composition, functional distribution, agency activity, publication tempo, and source convergence. Provides context for narratives.
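One way a deterministic, metadata-only check like this could work is a per-dimension z-score against the rolling baseline. The six dimension names come from the text; the scoring math and the 2.0 threshold are illustrative assumptions.

```python
from statistics import mean, pstdev

DIMENSIONS = ["volume", "type_composition", "functional_distribution",
              "agency_activity", "publication_tempo", "source_convergence"]


def z_score(history: list[float], current: float) -> float:
    """Standardized departure of the current value from its history."""
    sd = pstdev(history)
    if sd == 0:
        return 0.0
    return (current - mean(history)) / sd


def anomaly_report(metrics: dict[str, tuple[list[float], float]],
                   threshold: float = 2.0) -> dict[str, float]:
    """Return z-scores only for dimensions that depart from baseline."""
    scores = {dim: round(z_score(hist, cur), 2)
              for dim, (hist, cur) in metrics.items()}
    return {dim: z for dim, z in scores.items() if abs(z) >= threshold}
```

Because the inputs are counts and distributions derived from metadata alone, the same inputs always produce the same report, which is what makes the method deterministic and auditable.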

Thematic Drift (descriptive only) — Embedding-based analysis detecting topic shifts using an 8-week intra-administration rolling window. Provides research context.
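A hedged sketch of embedding-based drift: cosine distance between the centroid of the current week's document embeddings and the centroid of the prior 8-week window. The choice of embedding model and any smoothing are out of scope; this only shows the distance computation.

```python
import math


def centroid(vectors: list[list[float]]) -> list[float]:
    """Component-wise mean of a non-empty set of embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return 1.0 - dot / (na * nb)


def thematic_drift(window_embs: list[list[float]],
                   week_embs: list[list[float]]) -> float:
    """0.0 = no topic shift; values near 1.0 = strong shift."""
    return cosine_distance(centroid(window_embs), centroid(week_embs))
```

This also makes the volume limitation noted below concrete: with only a handful of documents in a week, the centroid is dominated by individual documents and the drift score becomes noisy.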

Concern Synthesis

AI document review is the primary active detection method driving concern status. Structural anomaly, silence detection, and thematic drift provide descriptive context.

  • Stable — AI content assessment within baseline range. No concerns detected.
  • Elevated — AI two-pass review flags anomalous content with Pass 2 corroboration.
  • Confirmed Concern — AI content assessment elevated with a high Pass 2 concern rate (>20%). Warrants close examination.
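The status ladder can be sketched as a small mapping function. The 20% Pass 2 concern-rate threshold comes from the text; treating any Pass 2 flag as the trigger for Elevated is an illustrative assumption.

```python
def concern_status(pass2_flagged: int, pass2_total: int) -> str:
    """Map Pass 2 review results onto the three published statuses."""
    if pass2_total == 0 or pass2_flagged == 0:
        return "Stable"           # nothing corroborated this period
    rate = pass2_flagged / pass2_total
    # Published threshold: >20% Pass 2 concern rate escalates.
    return "Confirmed Concern" if rate > 0.20 else "Elevated"
```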

Limitations

  • Structural anomaly detection identifies statistical departures from baselines — it cannot determine whether a departure is concerning or benign without additional context
  • AI assessment quality depends on the models used; the two-pass design mitigates single-provider bias but cannot eliminate it
  • Thematic drift requires sufficient document volume; categories with few documents per week may have noisy drift signals
  • The system monitors publicly available information only — actions taken through informal channels or not publicly documented are invisible
  • All assessments are automated indicators, not definitive judgments — they are designed to surface patterns worth human examination