Recognized Delegate Compensation for October 2025

As governance facilitators, it is our responsibility to measure delegate activity and recommend who qualifies for a payout each month.

Initial criteria were simple:

  • voting participation and
  • rationales provided on the forum

We gathered the data in a Google Sheet and found that multiple delegates qualified with equal numbers, so we needed a tiebreaker.

For this month, we chose Total Likes Received on the forum. Likes Received is a somewhat haphazard, but readily available, metric for contributing to the discourse and the community of the Rootstock Collective.
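For illustration, here is a minimal sketch of how the tiebreak works in practice. The delegate names, thresholds, and numbers are made up, not real data from our sheet:

```python
# Hypothetical sketch of the tiebreak logic described above.
# Names, thresholds, and numbers are illustrative only.

delegates = [
    # (name, voting_participation_pct, rationales_posted, likes_received)
    ("delegate_a", 100, 8, 42),
    ("delegate_b", 100, 8, 37),
    ("delegate_c", 90, 6, 55),
]

PARTICIPATION_THRESHOLD = 100  # assumed cutoff, for illustration
RATIONALE_THRESHOLD = 8

# Step 1: keep only delegates who meet both base criteria.
qualified = [
    d for d in delegates
    if d[1] >= PARTICIPATION_THRESHOLD and d[2] >= RATIONALE_THRESHOLD
]

# Step 2: rank the remaining ties by Likes Received, highest first.
qualified.sort(key=lambda d: d[3], reverse=True)

for name, participation, rationales, likes in qualified:
    print(f"{name}: {likes} likes received")
```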

In future months we would like to use a different tiebreaker, or several in combination, and we want community feedback on that choice. Please participate in the poll at the bottom of this post and/or comment in this thread. Shoutout to @Curia for proactively pushing the envelope in this discussion.

Below is a screenshot of our dashboard, and here is a link to the Google Sheet. Please reach out via DM if you feel we have not counted correctly.

We’ll push the delegate compensation payout to an on-chain ratification vote today. For now, the actual payout will happen manually, due to the lack of multi-send functionality in the polls’ on-chain executables.

Going forward, which metric should be used as a tiebreaker for delegate compensation?

  • Likes Received (this month)
  • Time on Forum
  • Number of Delegators
  • Voting Power Delegated
  • Other, please comment

We want to keep the delegate compensation system extremely simple so it doesn’t incur a lot of administrative overhead. Any potential tiebreakers have to be quantitative and there should not be more than two total, in our opinion.

Thanks to all the amazing delegates who shape the first Bitcoin L2 DAO and the decentralized future of Bitcoin.

7 Likes

Hi, my delegate address is wrong in this report. The correct one is: 0x8297D4A331A519569bC5F785db8CcA68fE53E846

or with RSK derivation path:

0x8297d4a331a519569bc5f785db8cca68fe53e846

I will be opening a Delegate thread ASAP.

2 Likes

Appreciate @rspa_StableLab for this detailed breakdown and for kicking off this important discussion on delegate compensation.

We support the initial criteria (voting + rationales) and the data-driven approach. The need for a fair tie-breaker is clear, and “Likes Received” is a good, available metric and a useful proxy for positive community contribution.

As active delegates ourselves, we believe our primary focus is (and should be) on high-value activities that drive the Collective forward. Right now, as you know, this is overwhelmingly centered on the Grants program.

An effective compensation system should align incentives with our core responsibilities. While “Likes” are a useful measure of positive engagement, they don’t specifically capture the full value of our main job: reviewing grant proposals, providing in-depth feedback, and performing due diligence.

We believe the tie-breaker should be enhanced to more directly reflect this. To build on the current framework, we’d like to suggest a set of metrics that balances general engagement with this critical grant work:

  • Likes Received (Grant Proposals): This provides a proxy for measuring the quality and community validation of a delegate’s feedback on grants.

  • Comments (Grant Proposals): This directly measures a delegate’s primary role of providing in-depth feedback and performing due diligence.

  • Days visited: This measures consistent, day-to-day presence and awareness of forum activities within the month.

A crucial refinement for any metric using “Likes” (both general and grant-specific) would be to only count likes from forum users at Trust Level 1 or higher. This simple filter would ensure the metric reflects genuine community validation and significantly reduces any potential impact from bot or spam accounts.
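To make this concrete, here is a rough sketch of how such a filter could work against a standard Discourse instance. The endpoints (`/user_actions.json` with `filter=2` for likes received, and `/u/{username}.json` for profiles) follow the public Discourse API, but the forum URL and exact field names below are assumptions that would need to be verified against the actual forum:

```python
"""Sketch: count only likes received from users at Trust Level 1+.

Assumes a standard Discourse instance; verify the endpoints and
field names against the real forum before relying on this.
"""
import requests

FORUM = "https://gov.rootstockcollective.xyz"  # hypothetical forum URL

def trusted_likes_received(username: str, min_trust_level: int = 1) -> int:
    # filter=2 asks Discourse for "likes received" user actions.
    # Note: only the first page is fetched here; a real pipeline
    # would paginate with the offset parameter.
    resp = requests.get(
        f"{FORUM}/user_actions.json",
        params={"username": username, "filter": 2, "offset": 0},
        timeout=30,
    )
    resp.raise_for_status()
    actions = resp.json().get("user_actions", [])

    count = 0
    for action in actions:
        liker = action.get("acting_username")
        if not liker:
            continue
        # Look up the liker's trust level and skip TL0 accounts.
        profile = requests.get(f"{FORUM}/u/{liker}.json", timeout=30)
        profile.raise_for_status()
        if profile.json()["user"]["trust_level"] >= min_trust_level:
            count += 1
    return count
```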

Collaborating on the “Simplicity” Goal

This brings us to your most important point: the system must be “extremely simple” with low administrative overhead.

We completely agree. While all this data is publicly available on the forum, manually compiling, filtering, and weighting these metrics for all delegates each month is the exact “administrative overhead” the original post hoped to avoid. This is the central challenge: How do we get an accurate, role-aligned, and bot-resistant system that is also simple to administer?

We believe the solution is to have a simple automation pipeline for this. The facilitators’ time is best spent on other things, not on manual data entry and filtering.

We’ve compiled a table detailing the October performance for current recognized delegates based on our suggested metrics and some other additional metrics.

We think the community should explore adopting a simple automation pipeline for tracking tiebreaker metrics.

This pipeline would automatically track and report on the key metrics the community decides on (like the ones we suggested), as sketched below.
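As a rough illustration of what the skeleton could look like (delegate usernames are placeholders, and the metric fetcher is a stub where real Discourse API calls, like the trust-level filter above, would go):

```python
# Sketch of a monthly reporting pipeline (assumed structure; the
# metric fetcher is a placeholder for real forum API calls).
import csv
from datetime import date

DELEGATES = ["delegate_a", "delegate_b"]  # illustrative usernames

def fetch_metrics(username: str) -> dict:
    # Placeholder: a real pipeline would call the forum API here.
    return {
        "username": username,
        "grant_likes": 0,
        "grant_comments": 0,
        "days_visited": 0,
    }

def run_monthly_report(path: str) -> None:
    rows = [fetch_metrics(u) for u in DELEGATES]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    # Output could feed the facilitators' dashboard or a shared sheet.
    run_monthly_report(f"delegate-metrics-{date.today():%Y-%m}.csv")
```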

This approach achieves all goals:

  1. It’s “extremely simple” to administer: It removes the “administrative overhead” you mentioned. Facilitators would receive an automated report, or this data could feed directly into the current dashboard you’re working on via API.

  2. It’s transparent and verifiable: An automation pipeline is consistent and auditable by anyone. However, community members could just as well verify the data by manually counting it, as it’s mostly publicly available on the forum.

  3. It’s far more accurate: It directly aligns compensation with the delegate’s main role.

  4. It’s more robust: It filters out low-quality/spam engagement.

A key benefit of this automated approach is also its flexibility. As the collective activities evolve and our “main role” shifts, the community can agree to adjust which metrics are tracked by the pipeline.

Furthermore, this opens up much more design space for future iterations. Instead of just a tie-breaker, these automated metrics could be combined into a more comprehensive “Composite Score” with adjustable weights. This would allow us to move beyond simple pass/fail criteria and proportionally reward all valuable contributions, giving us a much more powerful tool to incentivize the specific behaviors the Collective wants to promote.
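As a toy example of what such a score could look like (the weights, metrics, and normalization below are purely illustrative, not a proposal the community has agreed on):

```python
# Illustrative composite score: weights and metrics are examples only.
WEIGHTS = {
    "grant_likes": 0.3,
    "grant_comments": 0.4,
    "days_visited": 0.3,
}

def composite_score(metrics: dict, peers_max: dict) -> float:
    # Normalize each metric against the best value among all delegates
    # that month, then combine with the agreed weights.
    score = 0.0
    for name, weight in WEIGHTS.items():
        best = peers_max.get(name) or 1  # guard against division by zero
        score += weight * (metrics.get(name, 0) / best)
    return score

# Example: a delegate with 10 grant likes, 5 comments, 20 days visited,
# where the month's best values were 20, 10, and 25 respectively.
print(composite_score(
    {"grant_likes": 10, "grant_comments": 5, "days_visited": 20},
    {"grant_likes": 20, "grant_comments": 10, "days_visited": 25},
))
# -> 0.3*0.5 + 0.4*0.5 + 0.3*0.8 = 0.59
```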

We are happy to collaborate with the facilitators and the community to build the MVP for this as a public good. We believe that by working together and leveraging automation, we can build a robust, fair, and simple compensation framework for all delegates.

3 Likes

Thanks to @rspa_StableLab, the StableLab team, and the Rootstock team for the effort and coordination. We will continue to contribute to healthy and active governance in the Collective.

We agree with this approach. The pipeline should be easily verifiable, in keeping with the Web3 ethos, and use metrics that are as objective as possible while preventing spammy engagement from gaming the system.

3 Likes