Recognized Delegate Compensation for October 2025

As governance facilitators, it is our responsibility to measure delegate activity and recommend who qualifies for a payout each month.

Initial criteria were simple:

  • voting participation and
  • rationales provided on the forum

We gathered the data in a Google Sheet and found that multiple delegates qualified with equal scores, so we needed a tie-breaker.

For this month, we chose Total Likes Received on the forum. Likes Received are a somewhat haphazard but readily available metric for contributing to the discourse and community of the Rootstock Collective.
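The qualification-plus-tie-breaker logic described above can be sketched as a simple multi-key sort. This is only an illustration: the delegate names, numbers, and thresholds below are hypothetical, not the actual dashboard data or rules.

```python
# Minimal sketch of the qualification + tie-breaker logic.
# All names, numbers, and thresholds are hypothetical.

delegates = [
    {"name": "alice",   "voting_pct": 100, "rationales": 4, "likes_received": 31},
    {"name": "bob",     "voting_pct": 100, "rationales": 4, "likes_received": 58},
    {"name": "charlie", "voting_pct": 60,  "rationales": 1, "likes_received": 90},
]

# Step 1: qualification criteria (assumed thresholds for illustration).
qualified = [d for d in delegates
             if d["voting_pct"] >= 80 and d["rationales"] >= 3]

# Step 2: rank qualified delegates; equal scores on the core criteria
# are broken by Total Likes Received.
ranking = sorted(qualified,
                 key=lambda d: (d["voting_pct"], d["rationales"],
                                d["likes_received"]),
                 reverse=True)

print([d["name"] for d in ranking])  # bob outranks alice via the tie-breaker
```

Note that "charlie" never reaches the tie-breaker step: the tie-breaker only orders delegates who already pass the core criteria.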

In future months we would like to use different or multiple tie-breakers, and we want community feedback on this. Please vote in the poll at the bottom of this post and/or comment in this thread. Shoutout to @Curia for proactively pushing the envelope in this discussion.

Below is a screenshot of our dashboard; here is a link to the Google Sheet. Please reach out via DM if you believe we have miscounted.

We’ll push the delegate compensation payout to an on-chain ratification vote today. For now, the actual payout will happen manually, because the polls’ on-chain executables lack multi-send functionality.

Going forward, which metric should be used as a tie-breaker for delegate compensation?

  • Likes Received (this month)
  • Time on Forum
  • Number of Delegators
  • Voting Power Delegated
  • Other, please comment

We want to keep the delegate compensation system extremely simple so it doesn’t incur a lot of administrative overhead. Any potential tiebreakers have to be quantitative and there should not be more than two total, in our opinion.

Thanks to all the amazing delegates who shape the first Bitcoin L2 DAO and the decentralized future of Bitcoin.

8 Likes

Hi, my delegate address is wrong in this report. The correct one is: 0x8297D4A331A519569bC5F785db8CcA68fE53E846

or with RSK derivation path:

0x8297d4a331a519569bc5f785db8cca68fe53e846

I will be opening a Delegate thread ASAP.

2 Likes

Appreciate @Raphael_Anode for this detailed breakdown and for kicking off this important discussion on delegate compensation.

We support the initial criteria (voting + rationales) and the data-driven approach. The need for a fair tie-breaker is clear, and “Likes Received” is a good, available metric and a useful proxy for positive community contribution.

As active delegates ourselves, we believe our primary focus is (and should be) on high-value activities that drive the Collective forward. Right now, as you know, this is overwhelmingly centered on the Grants program.

An effective compensation system should align incentives with our core responsibilities. While “Likes” are a useful measure of positive engagement, they don’t specifically capture the full value of our main job: reviewing grant proposals, providing in-depth feedback, and performing due diligence.

We believe the tie-breaker should be enhanced to more directly reflect this. To build on the current framework, we’d like to suggest a set of metrics that balances general engagement with this critical grant work:

  • Likes Received (Grant Proposals): This provides a proxy for measuring the quality and community validation of a delegate’s feedback on grants.

  • Comments (Grant Proposals): This directly measures a delegate’s primary role of providing in-depth feedback and performing due diligence.

  • Days visited: This measures consistent, day-to-day presence and awareness of forum activities within the month.

A crucial refinement for any metric using “Likes” (both general and grant-specific) would be to only count likes from forum users at Trust Level 1 or higher. This simple filter would ensure the metric reflects genuine community validation and significantly reduces any potential impact from bot or spam accounts.
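The trust-level filter suggested above is easy to express in code. The sketch below assumes hypothetical data: in practice, each liker's trust level would have to be looked up via the forum's API, since Discourse assigns every user a trust level from 0 to 4.

```python
# Sketch of the suggested filter: only count likes from forum users at
# Trust Level 1 or higher. The like records here are hypothetical; a real
# pipeline would resolve each liker's trust level via the Discourse API.

likes = [
    {"liker": "established_user", "trust_level": 2},
    {"liker": "brand_new_account", "trust_level": 0},  # filtered out as likely spam/bot
    {"liker": "regular_member",   "trust_level": 3},
]

MIN_TRUST_LEVEL = 1

counted = sum(1 for like in likes if like["trust_level"] >= MIN_TRUST_LEVEL)
print(counted)  # 2 of the 3 likes pass the filter
```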

Collaborating on the “Simplicity” Goal

This brings us to your most important point: the system must be “extremely simple” with low administrative overhead.

We completely agree. While all this data is publicly available on the forum, manually compiling, filtering, and weighting these metrics for all delegates each month is the exact “administrative overhead” the original post hoped to avoid. This is the central challenge: How do we get an accurate, role-aligned, and bot-resistant system that is also simple to administer?

We believe the solution is to have a simple automation pipeline for this. The facilitators’ time is best spent on other things, not on manual data entry and filtering.

We’ve compiled a table detailing the October performance for current recognized delegates based on our suggested metrics and some other additional metrics.

We think the community should explore adopting a simple automation pipeline for tracking tie-breaker metrics.

This pipeline would automatically track and report on the key metrics the community decides on (like the four we suggested).
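Discourse already exposes per-user activity counters (likes received, post count, days visited, and so on) through its public directory endpoint, `/directory_items.json`, which accepts `period` and `order` parameters. The sketch below parses a sample payload shaped like that endpoint's response; the payload values and delegate usernames are illustrative, and a live pipeline would fetch the JSON over HTTP instead.

```python
import json

# Sample payload shaped like Discourse's /directory_items.json response.
# In a live pipeline this would come from e.g.
#   GET <forum_url>/directory_items.json?period=monthly&order=likes_received
# Values here are illustrative, not real forum data.
sample_response = json.loads("""
{
  "directory_items": [
    {"likes_received": 12, "post_count": 5, "days_visited": 20,
     "user": {"username": "delegate_a"}},
    {"likes_received": 7, "post_count": 9, "days_visited": 14,
     "user": {"username": "delegate_b"}}
  ]
}
""")

# The metrics the community agrees to track (assumed set for illustration).
TRACKED = ("likes_received", "post_count", "days_visited")

def extract_metrics(response):
    """Flatten the directory items into a per-delegate metrics table."""
    return {
        item["user"]["username"]: {k: item[k] for k in TRACKED}
        for item in response["directory_items"]
    }

report = extract_metrics(sample_response)
print(report["delegate_a"]["likes_received"])
```

A report in this shape could be emitted monthly as a CSV, or fed into the existing dashboard via API, so facilitators never compile the numbers by hand.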

This approach achieves all goals:

  1. It’s “extremely simple” to administer: It removes the “administrative overhead” you mentioned. Facilitators would receive an automated report, or this data could feed directly into the current dashboard you’re working on via API.

  2. It’s transparent and verifiable: An automation pipeline is consistent and auditable by anyone. However, community members could just as well verify the data by manually counting it, as it’s mostly publicly available on the forum.

  3. It’s far more accurate: It directly aligns compensation with the delegate’s main role.

  4. It’s more robust: It filters out low-quality/spam engagement.

A key benefit of this automated approach is also its flexibility. As the collective activities evolve and our “main role” shifts, the community can agree to adjust which metrics are tracked by the pipeline.

Furthermore, this opens up much more design space for future iterations. Instead of just a tie-breaker, these automated metrics could be combined into a more comprehensive “Composite Score” with adjustable weights. This would allow us to move beyond simple pass/fail criteria and proportionally reward all valuable contributions, giving us a much more powerful tool to incentivize the specific behaviors the Collective wants to promote.
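One way such a Composite Score could work, sketched under assumed metric names and weights (the community would have to agree on the actual ones): normalize each metric against the cohort maximum so no single raw count dominates, then take the weighted sum.

```python
# Sketch of a weighted "Composite Score". Metric names, weights, and
# numbers are hypothetical placeholders for community-agreed values.

WEIGHTS = {"likes_received": 0.5, "grant_comments": 0.3, "days_visited": 0.2}

def composite_score(metrics, maxima):
    """Normalize each metric to [0, 1] against the cohort maximum,
    then return the weighted sum."""
    return sum(w * (metrics[m] / maxima[m]) for m, w in WEIGHTS.items())

maxima  = {"likes_received": 60, "grant_comments": 10, "days_visited": 30}
metrics = {"likes_received": 30, "grant_comments": 10, "days_visited": 15}

score = composite_score(metrics, maxima)
print(round(score, 2))  # 0.5*0.5 + 0.3*1.0 + 0.2*0.5 = 0.65
```

Because the weights live in one table, re-weighting as the Collective's priorities shift is a one-line governance decision rather than a redesign.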

We are happy to collaborate with the facilitators and the community to build the MVP for this as a public good. We believe that by working together and leveraging automation, we can build a robust, fair, and simple compensation framework for all delegates.

3 Likes

Thanks for the effort and coordination from @Raphael_Anode, the StableLab team, and the Rootstock team. We will continue to contribute to healthy and active governance in the Collective.

We agree with this approach. The pipeline should be easily verifiable in keeping with the Web3 ethos, and should use metrics that are as objective as possible while preventing spammy engagement from gaming the system.

3 Likes

Thanks for the proposal.

I agree with using voting participation and forum rationales as core metrics, these are essential for ensuring an active and healthy governance process.

However, using Likes Received and Time on Forum as tie-breakers could disadvantage newer delegates, who may not have had as much time or presence on the forum as more established participants.

Also, I’d suggest adding optional metrics, such as engagement on X or posting proposal updates and Rootstock news to help increase community awareness. These can also reflect meaningful contributions outside the forum.

One question: could you please confirm what criteria will be used for November, @rspa_StableLab?

I’m wondering if there’s still room for newer delegates like myself to qualify.

3 Likes

Hey Ignas!

Thanks for chiming in!

We’ll be using likes received this month and replies posted this month as tie-breakers this time around.

We’re also looking into deeper analytics.

Of course, total likes received favors older accounts; we did that on purpose to reward the early adopters. But as the motto goes, we’re all “still early”!

2 Likes

Hey Ignas,

I speak not in an official capacity, but as another fellow delegate.
I think the arrival of more committed and engaged delegates is healthy for the DAO, and we all have to value that.
That said, I think we also have to structure this in a way that values long-term participation and long-term committed, engaged delegates; otherwise we’ll end up attracting compensation farmers who use AI to half-ass replies and rationales.

3 Likes

Raph,
We’d suggest not using replies posted as a criterion in subsequent months, as it creates an incentive for excessive, superficial, less in-depth posting.

Likes received, on the other hand, in theory should capture and value the quality and pertinence of each post.

While I do understand the desire for simplicity and easy, immediate auditability, I think a score based on 3–5 metrics (e.g., likes, delegated RIF, time on forum) would be more precise.

2 Likes

I agree that rewarding replies creates perverse incentives. It’s basically only feasible if you have a “quality” measurement attached, which will always be subjective. We’re not keen on going there.

We’d rather change things around until we find something solid.

For November it’s likes received this month and Recent Read Time (which is 60d read time).

Read time is super hard to fake; Discourse put a lot of thought into how they measure it.

1 Like

While we agree that Read Time would be a good, hard-to-fake metric, time spent on the forum doesn’t necessarily translate into governance contributions. Relying on it alone as the definitive tie-breaker criterion would become an issue in the future.

Likes Received would also be somewhat easy to game, and possibly unintentionally inflated, e.g. when team members like their own team’s comments.

We believe we still need to consider a possible approach to have a simple automated system for the tie-breaker metrics.

(Also, nitpicking on how we discuss this going forward: we should open a separate thread to keep this discussion apart from the October 2025 comp results.)

1 Like