This page explains how Democracy Redline selected its categories, assigned their weights, structured its monthly review process, and defined the relationship between research, editorial judgment, automation, and publication. Version 1.0 is the current public model.
Each category is scored from 0 to 10, where higher values indicate greater democratic risk. The overall index is a weighted composite rather than a simple average. The public score is published monthly and is not auto-updated by an intake script.
- **Strong:** final court rulings, statutes, executive orders, federal dockets, official data.
- **Medium:** reputable multi-outlet reporting or active litigation without final rulings.
- **Weak:** social media, unverified claims, or summaries without primary support.
Curated trackers such as Trump Action Tracker are useful as discovery layers, but they should not move the score by themselves without stronger underlying evidence. Automation can gather candidates for review, but the published score still requires human judgment.
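The evidence-tier rule above can be expressed as a small gate on intake items. This is an illustrative sketch, not the project's actual tooling: the `EvidenceTier` enum and the Medium-or-better threshold in `can_move_score` are assumptions drawn from the stated principle that weak sources and curated trackers should not move the score by themselves.

```python
from enum import Enum

class EvidenceTier(Enum):
    STRONG = 3   # final rulings, statutes, executive orders, dockets, official data
    MEDIUM = 2   # reputable multi-outlet reporting, active litigation without final rulings
    WEAK = 1     # social media, unverified claims, tracker entries without primary support

def can_move_score(tiers: list[EvidenceTier]) -> bool:
    """A candidate item may influence the monthly score only if at least one
    supporting source is Medium or Strong; Weak-only items remain discovery
    signals for human review. (Threshold is an assumed editorial rule.)"""
    return any(t in (EvidenceTier.STRONG, EvidenceTier.MEDIUM) for t in tiers)
```

In this sketch a tracker entry corroborated by a court docket would pass the gate, while the tracker entry alone would not.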
| Category | Weight |
|---|---|
| Election Integrity & Peaceful Transfer | 20% |
| Rule of Law & Court Compliance | 15% |
| Habeas Corpus & Due Process | 10% |
| Coercive State Power & Policing Norms | 10% |
| Political Targeting / Weaponization of Justice | 10% |
| Press Freedom & Information Control | 10% |
| Civil Society & Associational Freedom | 10% |
| Institutional Checks & Anti-Corruption | 10% |
| Military & Intelligence Neutrality | 5% |
The Democracy Redline rubric was built by studying the major institutions and research programs that already measure democratic strength, democratic decline, and authoritarian risk. The project did not attempt to replicate any single global index in miniature. Instead, it identified the institutional pillars that recur across the strongest comparative-democracy frameworks and translated that common logic into a smaller public-warning model that can be applied consistently in a monthly editorial process.
The governing question was straightforward: what has to keep functioning for a liberal democracy to remain a liberal democracy in practice, not just in formal constitutional language? The answer is broader than elections alone. Elections matter, but so do lawful constraint on executive power, due process, civil liberties, press freedom, civil society, anti-corruption safeguards, and the neutrality of coercive institutions. Democracies typically erode across several of those pillars at once. The rubric was therefore designed to make cumulative institutional stress visible rather than waiting for a single headline to carry the whole explanatory burden.
The resulting categories are best understood as a synthesis. They were selected because they capture the recurring democratic failure points identified across the strongest monitoring traditions while remaining narrow enough for disciplined monthly review. In short, the model compresses complexity without pretending complexity does not exist.
The weighting system was built around systemic risk, not around which topics generate the loudest or most emotional news cycle. The working question was: if this pillar is seriously cracked, captured, or eliminated, how much downstream damage can it do to the democratic system as a whole?
That is why election integrity and peaceful transfer carry the highest weight in the current public model. If a country can no longer conduct meaningful elections or honor legitimate outcomes, nearly every other constitutional protection becomes more fragile. Rule of law and court compliance also carry heavy weight because a system that no longer obeys lawful rulings is no longer meaningfully constrained. Due process, civil liberties, information freedom, anti-corruption safeguards, and institutional neutrality are weighted slightly below that not because they are less morally important, but because the rubric is trying to reflect probable downstream consequences for the entire system.
The weighting is therefore a transparent judgment about institutional importance and damage potential. It is not presented as a claim of mathematical perfection. It is a reasoned attempt to rank which democratic failures most quickly alter the operating character of the regime itself.
The project monitors a curated intake of reporting, official documents, legal developments, and institutional signals. Automation is used to collect candidates for review, not to publish score changes automatically.
Incoming items are sorted by source quality, directness of evidence, likely signal type, and likely relevance to one or more categories. Duplicate stories covering the same event should be clustered rather than treated as separate democratic shocks.
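The clustering step described above can be sketched as grouping intake items under a shared event identifier so that many stories about one event are reviewed as one signal. The `event_key` field is a hypothetical intake attribute, not a documented part of the project's pipeline.

```python
from collections import defaultdict

def cluster_items(items: list[dict]) -> dict[str, list[dict]]:
    """Group intake items by event so duplicate coverage of the same
    development is reviewed as one democratic signal, not several."""
    clusters: dict[str, list[dict]] = defaultdict(list)
    for item in items:
        # 'event_key' is an assumed field: e.g. a normalized slug assigned
        # at intake, so ten outlets covering one ruling share one key.
        clusters[item["event_key"]].append(item)
    return dict(clusters)
```

Editors would then review one cluster per event, with the member items serving as corroborating sources rather than separate shocks.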
Editors decide whether an item belongs in the report, whether it changes the month’s interpretation, and whether it reflects ordinary accumulation or movement toward a redline condition. This is the stage where the model deliberately resists full automation.
The public score, report language, and archive entry are updated only after review. The result is intended to be transparent, disciplined, and repeatable rather than instant.
This methodology is a public-facing synthesis of serious democracy-monitoring logic. It is designed to help citizens, journalists, advocates, collaborators, and potential funders understand how the project translates democratic-risk research into a recurring monthly public-warning process.
It is not a claim that there is one universally agreed scientific moment at which democracy officially dies. Comparative politics does not offer a single magic line for that. What it does offer is a substantial body of work identifying the institutional conditions under which democracies remain resilient or become vulnerable. Democracy Redline adapts that body of work into a smaller weighted framework suitable for monthly review, public explanation, and historical archiving.
The site should therefore be read as an early-warning instrument. It is disciplined by research, constrained by editorial review, and designed for public clarity. It is not a substitute for every larger academic index. It is an attempt to make the consequences of cumulative democratic erosion harder to ignore and easier to discuss responsibly.
These methodological notes are intended to answer a practical set of questions from collaborators, journalists, lawyers, democratic-reform advocates, and prospective funders: how the categories were selected, why the weights look the way they do, what role automation plays, and where editorial judgment begins and ends. The project benefits from outside scrutiny. The method is meant to be transparent enough that critics can see its assumptions and supporters can understand its guardrails.
Future revisions are possible, but they should be published explicitly as versioned updates rather than blended silently into the live score. That is part of the project’s commitment to methodological clarity and historical comparability.
A redline is a threshold event that, if credibly crossed, materially changes the level of democratic danger. The score measures accumulation over time. A redline marks a more abrupt constitutional shock.
Some developments do not merely add risk. They change the kind of system you are living in. Open court defiance, politically motivated jailing, criminalized reporting, or override of certified election results can rapidly collapse the practical value of ordinary democratic safeguards.
Each redline is labeled Not Triggered, Watch, or Triggered. “Watch” means there is meaningful evidence of movement toward the threshold. “Triggered” means the site judges that the threshold has been credibly crossed based on the strongest available evidence, even if broader legal or political debates continue afterward. A triggered redline does not automatically produce a 9.0+ score, but multiple simultaneous redline moves can justify a fast score increase even before full systemic breakdown occurs.
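The three-label scheme above can be modeled as an enum plus a simple count of simultaneous triggers. This is a sketch of the stated labeling logic only; the function name and the idea of counting triggers as "pressure" are illustrative assumptions.

```python
from enum import Enum

class RedlineStatus(Enum):
    NOT_TRIGGERED = "Not Triggered"
    WATCH = "Watch"           # meaningful evidence of movement toward the threshold
    TRIGGERED = "Triggered"   # threshold judged credibly crossed on strongest evidence

def triggered_count(statuses: list[RedlineStatus]) -> int:
    """Count currently triggered redlines. Per the methodology, one trigger
    does not force a 9.0+ score, but multiple simultaneous triggers can
    justify a fast score increase before full systemic breakdown."""
    return sum(1 for s in statuses if s is RedlineStatus.TRIGGERED)
```

The count feeds editorial judgment rather than a formula: how much any given number of triggers moves the published score remains a reviewed decision.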
The official meter is now the single signature visual. It should appear on the homepage, social graphics, country landing pages, and report covers so people can identify the score at a glance.
The locked template keeps the center score large, adds 0–10 reference numbers, marks the redline boundary at 9.0, and preserves a country tag plus provenance watermark.
To scale internationally, the template swaps in a country badge, country name, month, and updated score while keeping the same structure, so the meter remains recognizable across countries.