As governance facilitators, it is our responsibility to measure delegate activity and make recommendations on who qualifies for a payout each month.
There is one big change to the compensation calculation and one small one.
The big one is that each delegate now has one Joker per year, which they can use to offset one missed vote. This month, @404Gov and @DAOplomats each used their 2025 Joker to offset a missed vote in December.
The small change is that this month we switched from Recent Read Time (which covers the last 60 days) to Read Time (which covers the last 30 days).
The current system for calculating compensation eligibility is designed to be extremely open and avoid lock-in.
A delegate can show up, provide value to the community, contribute to the forum, and be eligible for compensation as long as they:
have more than 10k stRIF delegated to them
have voted on 90% or more of polls
have explained 90% or more of their votes in their forum thread
All delegates who meet these criteria are ranked by forum read time and likes received. The top 3 are Gold Tier delegates this month and receive 1,500 $USDRIF, and the next 3 are Silver Tier delegates and receive 1,000 $USDRIF.
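As a rough illustration, here is a minimal sketch of the eligibility check and tier assignment. The `Delegate` fields and the sort key combining read time and likes are assumptions for illustration only; the exact ranking formula is not spelled out above.

```python
from dataclasses import dataclass

# Hypothetical delegate record; field names are illustrative, not the
# dashboard's actual schema.
@dataclass
class Delegate:
    name: str
    strif_delegated: float     # stRIF delegated to this delegate
    poll_participation: float  # fraction of polls voted on (0.0-1.0)
    votes_explained: float     # fraction of votes explained in the forum thread
    read_time_minutes: float   # forum read time over the last 30 days
    likes_received: int

GOLD_PAYOUT_USDRIF = 1_500
SILVER_PAYOUT_USDRIF = 1_000

def is_eligible(d: Delegate) -> bool:
    """Apply the three eligibility criteria listed above."""
    return (
        d.strif_delegated > 10_000
        and d.poll_participation >= 0.90
        and d.votes_explained >= 0.90
    )

def assign_payouts(delegates: list[Delegate]) -> dict[str, int]:
    """Rank eligible delegates; top 3 are Gold, next 3 are Silver.

    How read time and likes combine into one rank is an assumption here
    (read time first, likes as tie-breaker).
    """
    eligible = sorted(
        (d for d in delegates if is_eligible(d)),
        key=lambda d: (d.read_time_minutes, d.likes_received),
        reverse=True,
    )
    return {
        d.name: (GOLD_PAYOUT_USDRIF if i < 3 else SILVER_PAYOUT_USDRIF)
        for i, d in enumerate(eligible[:6])
    }
```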
The system gives delegates access to compensation very quickly, as exemplified by @Ignas, who received compensation just 2 months after joining.
Below is a screenshot of this month’s dashboard; here is a link to the Google Sheet. Please reach out via DM if you feel we have not counted correctly.
We’ll push the delegate compensation payout to an on-chain ratification vote shortly. For now, the actual payout will happen manually, due to the lack of multi-send functionality in the on-chain executables of the polls.
Thanks @Raphael_Anode for facilitating this and providing the update. Also, a warm welcome to @CodeKnight; it’s great to see new delegates joining the Rootstock Collective!
As the Rootstock Collective grows, we’ve been thinking about how to improve the tracking and evaluation of delegate contributions beyond general forum metrics. To help with this, we’ve been experimenting with a Peer Recognition Score (PRS) to add a layer of “quality assessment” by weighting forum interactions.
How it works:
The goal isn’t to replace current metrics, but to provide a deeper look at high-signal engagement:
Weighted Authority: Likes from other delegates and proposal authors carry more weight than those from unverified accounts.
Thread Normalization: The system identifies the most impactful comments within a specific discussion to reward quality over volume.
Contribution Focus: The formula focuses on a delegate’s top contributions within each thread to prevent “quantity over quality” bias.
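To make the mechanics concrete, here is a minimal sketch of the three ideas above. The role weights, the 0–100 normalization, and the top-k cutoff are placeholder assumptions for illustration, not the actual PRS parameters.

```python
# Placeholder role weights; the real PRS parameters are still being calibrated.
LIKE_WEIGHTS = {
    "proposal_author": 3.0,  # likes from the proposal's author
    "delegate": 2.0,         # likes from other verified delegates
    "unverified": 0.5,       # likes from unverified accounts
}

def weighted_likes(likes_by_role: dict[str, int]) -> float:
    """Weighted Authority: score a comment's likes by who gave them."""
    return sum(LIKE_WEIGHTS.get(role, 0.0) * n
               for role, n in likes_by_role.items())

def normalize_thread(scores: dict[str, float]) -> dict[str, float]:
    """Thread Normalization: rescale each comment relative to the strongest
    comment in the same thread, so impact is judged per discussion."""
    top = max(scores.values(), default=0.0)
    if top == 0.0:
        return {c: 0.0 for c in scores}
    return {c: s / top * 100 for c, s in scores.items()}

def top_contributions(scores: list[float], k: int = 2) -> list[float]:
    """Contribution Focus: keep only a delegate's top-k comments per thread."""
    return sorted(scores, reverse=True)[:k]
```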
We’ve put together a demo and a breakdown of the logic for the community to explore:
We’d love to hear if you think this type of peer-based quality scoring could be a helpful complementary benchmark to the current Gold/Silver tier system.
Hey @Curia — thank you for sharing this and for continuing to explore meaningful ways to improve and complement the Collective’s delegate compensation system.
I understand why your data science approach to measuring delegate contributions is appealing, particularly coming from a team that specializes in governance analytics and tooling. That said, I’m not convinced that a data-science–driven approach built on statistical ranking frameworks is the most appropriate fit for a delegate rewards system, where clarity, predictability, and shared understanding are especially important.
My concern is that this system may be legible primarily to its designers, rather than to the delegates it is meant to serve.
With delegate rewards, we ultimately get what we choose to measure and incentivize. This proposal attempts to measure the impact of contributions, which is a worthwhile goal. However, if we look more closely at the role of delegates in Rootstock today, their primary responsibility is evaluating grant proposals. Rootstock is in a growth phase focused on funding builders, and delegate attention should be centered on that work.
The quality of delegate comments certainly matters—especially when comments surface risks, improve proposals, or meaningfully influence how other delegates think or vote. That said, I’m not convinced the proposed method captures that kind of impact. Instead, it infers influence through a weighted “likes” mechanism based on the role of the person liking a comment, rather than identifying whether a contribution meaningfully shaped deliberation or outcomes.
Do you have ideas for how a statistical approach could better capture which comments meaningfully influenced deliberation or proposal outcomes?
I appreciate the effort that went into developing this.
It’s a good prompt to consider how to reward community members and especially delegates.
I especially like the way that peer recognition is counted.
I would need a bit more research and adversarial thinking to fully understand the edge cases of what is incentivized here, if this were to influence payouts.
IMO, “yapping” protocols such as Kaito have been a net negative for the space, so great care has to be taken before deploying quantitative measures of participation.
Appreciate @Axia and @Raphael_Anode for the thoughtful feedback. Just to clarify the goal here: we’re trying to explore a way to capture the quality of delegate forum contributions where the parameters are agreed upon by the community, so delegates have full transparency and can always understand their own signal.
We agree that it is ideal to have a scoring system that is both legible and predictable. However, when the goal is to evaluate contribution quality, strict mechanical predictability can be misleading and much easier to game. In the context of grant-focused discussions, we believe “quality” becomes legible through peer evaluation, and we designed the PRS around this reality.
Rather than measuring activity that is easy to pre-optimize for, it formalizes peer feedback so delegates can focus on making contributions their peers and authors actually find useful. The intent is to align the system’s parameters with shared community judgment by keeping the logic socially grounded—focusing on who found a contribution valuable and in what context.
We completely agree with your point that:
However, the challenge is that measuring “impact” in this context usually requires a manual review by a facilitator, which doesn’t scale as the Collective grows. The PRS doesn’t try to “prove” a comment caused a specific outcome; instead, it recognizes that at scale, meaningful influence is best reflected in whether a contribution mattered to the people involved. We use two signals as proxies for this:
Weighted Peer Validation: When other delegates acknowledge a comment, it serves as a proxy for value - meaning it helped surface a risk, clarified a trade-off, or added a perspective they hadn’t considered. This captures the “deliberation impact” you mentioned.
Author-Acknowledged Bonus: When a proposal author (the builder) engages with a comment, it’s a strong sign the feedback was directly useful to the proposal itself.
This isn’t a perfect science, but it reflects how influence already shows up in practice through peer judgment during deliberation without requiring a manual bottleneck.
We fully agree. Systems that reward activity for its own sake have been a net negative in many communities. That’s why we structured the PRS to de-emphasize volume and spam:
The “Best of” Filter: To prevent delegates from gaming the system by posting dozens of low-value comments, the formula only pulls the top 2 (or more, TBD) highest-scoring comments for any given proposal. Even if a delegate posts 50 times in one thread, only their 2 most impactful contributions (as judged by peers) count. This effectively “caps” the benefit of participation, forcing a focus on high-signal insights rather than filler.
Quality Thresholds: We further de-emphasize volume across forum threads by only counting proposal scores that meet a “standout” quality threshold (e.g., 60/100). We then apply diminishing weights to each subsequent proposal score. This means a delegate who provides 3 exceptionally high-value insights across the month will likely outrank a delegate who provides 10 average ones, ensuring that “doing more” never outweighs “doing better.”
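As a rough sketch of how these two filters compose, assuming the 60/100 threshold, a top-2 “best of” filter, and a simple geometric decay for the diminishing weights (the real decay schedule is TBD):

```python
QUALITY_THRESHOLD = 60.0  # per-proposal "standout" cutoff on a 0-100 scale
DECAY = 0.5               # assumed geometric decay; the actual schedule is TBD
TOP_K_PER_PROPOSAL = 2    # "best of" filter per proposal

def proposal_score(comment_scores: list[float]) -> float:
    """Score a delegate on one proposal: average of their top-k comments,
    keeping the result on the same 0-100 scale as the threshold."""
    top = sorted(comment_scores, reverse=True)[:TOP_K_PER_PROPOSAL]
    return sum(top) / len(top) if top else 0.0

def monthly_prs(proposal_scores: list[float]) -> float:
    """Aggregate per-proposal scores into a monthly PRS.

    Only standout scores count, and each subsequent qualifying proposal
    contributes less, so a few exceptional insights outrank many average ones.
    """
    standout = sorted((s for s in proposal_scores if s >= QUALITY_THRESHOLD),
                      reverse=True)
    return sum(s * (DECAY ** i) for i, s in enumerate(standout))

# Example: 3 high-value insights beat 10 average ones.
focused = monthly_prs([95.0, 90.0, 85.0])  # 95 + 45 + 21.25 = 161.25
prolific = monthly_prs([62.0] * 10)        # ~123.9; the decay caps it quickly
```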
We see this as an evolving experiment and, if there is interest, we would love to walk the community through a demo of the PRS during the next community call. This will be the best space to show how forum interactions are translated into signal and to gather feedback on whether the results align with our collective intuition of value.
Indeed, these kinds of quantitative measures need safeguards, hence we’d like to collaborate on a “Shadow Period” following the demo. Running the PRS in a non-binding environment alongside the current system will allow us to identify any perverse incentives or adversarial behavior before it ever impacts actual payouts.
I really appreciate the thoughtful initiative and care in creating a better, weighted metric.
I do have a concern, though, about the incentives this might create for farming likes.
Subtle things, not completely ill-intentioned. But long term, incentives have a tendency to slowly steer behavior and form habits.
For example, if I were to reply here that I echo @Axia and @Raphael_Anode’s concerns about this subject.
Or having a tendency to be optimistic/supportive in order to get more likes from the OP.
Or a little hack: always posting in the topic when I’m voting in favor, to get an insta-like from the OP.
That is a very fair concern, @ChronoTrigger. “Soft gaming” like echoing popular opinions or pandering to authors is exactly what we want to avoid. Our goal is for the system to act as a filter that makes the current system (which already uses raw likes + read time) harder to game, rather than easier.
We’ve integrated these three safeguards to ensure the data remains high-signal (some of which are already active in the demo):
“Echo” Posts (Existing): Currently, a simple “I echo/agree” post that receives likes counts toward a delegate’s raw metrics. In PRS, comments are scored relative to the highest-quality post in that specific thread. If a comment doesn’t meet a “quality threshold” (e.g., 60%), it contributes zero to the monthly score.
Capped Signal (Existing): We only count a delegate’s top 1–2 comments per proposal. Once you’ve provided your core insight for a thread, there is no marginal benefit to “farming” more likes or posting fluff.
Pandering posts (refining): To prevent “insta-likes” from authors, we can significantly reduce the weight of an author’s like relative to a peer delegate’s validation. We can also prioritize utility signals (such as an author replying to or quoting a delegate’s feedback) over a simple like.
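As a minimal sketch of how these safeguard weights might look (the specific numbers are placeholders we are still refining):

```python
# Placeholder signal weights to curb author "insta-likes": a peer delegate's
# like counts more than an author's, and substantive engagement (a reply or
# quote from the author) counts more than either.
SIGNAL_WEIGHTS = {
    "peer_delegate_like": 2.0,
    "author_like": 0.5,            # down-weighted to deter pandering
    "author_reply_or_quote": 3.0,  # utility signal: feedback was engaged with
}

def comment_signal(events: list[str]) -> float:
    """Sum the weighted signals a single comment received."""
    return sum(SIGNAL_WEIGHTS.get(e, 0.0) for e in events)

# A comment the author merely liked scores below one that two peers
# validated, and well below one the author actually replied to.
assert comment_signal(["author_like"]) < comment_signal(
    ["peer_delegate_like", "peer_delegate_like"])
assert comment_signal(["peer_delegate_like"]) < comment_signal(
    ["author_reply_or_quote"])
```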
We don’t see this as a perfect system, but it is an improvement over current metrics like raw likes and read times, which are quite gameable. We are currently running this logic in parallel with the current system to calibrate these weights and ensure the results align with our collective intuition.
Transparency is key to making this work, so we are focused on simplifying the model to make it easier to digest. The goal is for every delegate to clearly understand that low-effort activity doesn’t translate into a higher score. When the “rules” are clear, it naturally encourages the kind of high-quality work that helps the Collective move forward. Thanks again for your feedback!