Introducing Recognized Delegate Compensation šŸ¦

Hey all, in the interest of keeping this discussion organized, I have moved it to a separate forum post here.

1 Like

MARCH 2026 — Results Are In.

The Recognized Delegate Compensation Program keeps printing. Another month of delegates showing up, voting with conviction, and moving Rootstock’s decentralization flywheel forward.

Here are March’s top engaged delegates:

:1st_place_medal: Gold Tier | 1,500 USDRIF
ChronoTrigger · DAOStar · DAOplomats

:2nd_place_medal: Silver Tier | 1,000 USDRIF
Axia Network · Tané · Curia

Massive respect to every delegate who stayed locked in through March. Your on-chain participation and governance contributions are not just boxes ticked — they are the bedrock of a credibly decentralized DAO!

While we say goodbye to @Avantgarde, we want to formally welcome @SEEDGov as part of the delegate team.

Next step → The payment proposal will be posted on-chain via the Rootstock Collective dApp this week for community and delegate ratification, followed by fund transfers.

Full transparency. On-chain accountability. The flywheel does not stop.

Onwards to April. :rocket:

7 Likes

Thank you for the warm welcome. I will do my best to fulfill this responsibility to the highest standard.

It was also a great pleasure to meet in person and chat during EthCC with the Rootstock Collective team (@sascha.collective @tamlerner @eleanor @Georgia), as well as with community members such as @Raphael_Anode and @jengajojo (whom I had previously met at Devconnect Argentina, great to see you again). I’m genuinely very excited to be here.

PS: Speaking in a personal capacity - Marian from SEEDGov here, lol.

4 Likes

We have seen a couple of delegate teams that are eligible for the Recognized Delegate Compensation program create additional accounts and use them to increase the number of “likes” received. Since the current system relies on two main criteria, likes received and reading time, both of which reset monthly, it leaves room for this kind of behavior, which may not be against the rules but does not really reflect good merit.

We do not believe this creates any real benefit for the program. On the contrary, it risks normalizing behavior that undermines the integrity of the incentive structure and may encourage other delegates to follow similar practices.

We believe the program should be based on “Good Merit” and encourage continuous, meaningful contribution. For example, the behavior we have seen from @Axia, @DAOstar_gov, @Tane, and others, who consistently engage by providing thoughtful comments and constructive feedback to grantees almost every day, is exactly what should be encouraged.

To address this issue, we suggest:

  • Excluding likes received from accounts within the same team for the current month and going forward, or

  • Introducing a rule that disqualifies delegates from monthly rewards if such behavior is detected

This situation also highlights a broader limitation in the current system. The existing design still contains loopholes that allow for manipulation of engagement metrics. It is worth noting that this is partly what motivated our team’s earlier work on the PRS system — an alternative framework designed to provide greater transparency into delegate contributions and make this kind of behavior harder to exploit at scale. We raise this not to reopen that discussion, but simply as context for why more robust approaches to measuring contribution were explored in the first place. We believed this could better reflect actual forum contributions and reduce this type of behavior.

3 Likes

We agree that gaming is unacceptable.

At the same time, we want to keep the metric system very simple and introduce additional metrics only if we have fully understood second order effects.

Peer-review systems fall short here because they effectively make new contributions harder (your peers will judge you), or at the very least, if you have to make them to get paid, ensure that they “stay in the lane”. This has been proven in the scientific community beyond the shadow of a doubt, and a way out is not easy to find.

So what are the alternatives:

  • Do not count likes, as they are too easy to game
  • Count likes, but with very low weight
  • Count likes and check intra-team likes and discount those (hard to do, lots of tracking required; see the sketch after this list)
  • Count likes and set the likes of each team engaging in this behaviour to 0 for the month
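To make the third and fourth options concrete, here is a minimal sketch, assuming a hypothetical export of (liker, recipient) like events and a known account-to-team mapping. Neither structure is a standard Discourse export, and building that mapping is exactly the tracking burden noted above:

```python
# Sketch: discount intra-team likes (option 3), optionally zeroing the
# monthly likes of any team caught self-liking (option 4).
# Input formats are hypothetical, not actual Discourse exports.
from collections import Counter

def monthly_like_counts(like_events, team_of, zero_out_offenders=False):
    """like_events: iterable of (liker, recipient) account-name pairs.
    team_of: dict mapping account name -> team name (absent = no team).
    """
    counts = Counter()
    offending_teams = set()

    for liker, recipient in like_events:
        same_team = (
            team_of.get(liker) is not None
            and team_of.get(liker) == team_of.get(recipient)
        )
        if same_team:
            offending_teams.add(team_of[recipient])
            continue  # option 3: the intra-team like is simply not counted
        counts[recipient] += 1

    if zero_out_offenders:
        # option 4: a team caught self-liking scores 0 likes this month
        for account, team in team_of.items():
            if team in offending_teams:
                counts[account] = 0

    return counts
```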

Likes are always easy to game. It’s super hard to prove whether an account doing lots of liking is a team member or a genuinely engaged community member.

We can now use likes as a tiebreaker at best and just count read time, and/or penalize teams that engage in like farming.

Imo “read time” is the single most useful metric in Discourse. It’s hard to game, is being actively developed further by Discourse and measures the most useful thing a delegate can do: watch the protocol! Be super high context. Spot bad actors early. Be engaged.

My personal opinion is that it’s the only metric that matters.

Some might argue that making proposals is good. But I disagree. It can be a massive net negative. Good proposals are valuable, but who is to judge that? A question that is very tricky to answer well, and we’d prefer to stay away from. Crypto is here to reduce middlemen.

Also, making a proposal, let alone a successful one, takes tons of forum time, which means it is reflected in read time.
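For concreteness, a minimal sketch of a read-time ranking, assuming Discourse’s standard /u/<username>/summary.json endpoint exposes a user_summary.time_read field in seconds (worth verifying against the forum’s Discourse version). The field is cumulative, so a monthly figure would need a start-of-month snapshot to diff against; the forum URL below is a placeholder:

```python
# Sketch: rank delegates by Discourse read time.
# Endpoint shape and field name should be verified against the live forum.
import requests

FORUM = "https://gov.example.org"  # placeholder forum URL

def read_time_seconds(username: str) -> int:
    """Fetch a user's cumulative read time (seconds) from their summary."""
    resp = requests.get(f"{FORUM}/u/{username}/summary.json", timeout=10)
    resp.raise_for_status()
    return resp.json()["user_summary"]["time_read"]

def rank_delegates(usernames):
    # One HTTP request per delegate; descending read time.
    # Ties would fall through to a secondary metric, e.g. voting power.
    return sorted(usernames, key=read_time_seconds, reverse=True)
```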

We’re deliberating what to do about this situation and will announce a solution closer to the end of the month.

In the meantime I would caution teams to refrain from farming behaviour. It will not be worth it.

6 Likes

Thank you for flagging this behavior @Curia.

Creating fake accounts to inflate engagement metrics is an integrity breach and should be penalized. Delegates engaging in this behavior knew, or should have known, that it undermines trust in both the compensation framework and the credibility of the delegate program more broadly. This is especially disappointing to see in a relatively small group of experienced, high-context delegates who should understand the importance of maintaining trust in the system.

More broadly, this also highlights the limits of engagement-based metrics. If the current framework already captures participation and activity, I would be more interested in seeing future iterations evolve toward outcome-oriented measures of delegate contribution. In particular, I think there is more value in recognizing ecosystem problem-solving, upstream governance design, and strategic contributions that improve the DAO’s decision-making capacity over time. Those are harder to game and, in my view, closer to the kind of value delegates should actually be creating.

1 Like

I am posting this as a team member, coffee-crusher, to share my own personal opinion. I fully support @Axia’s position here. While we should always strive toward the high-standard contributions that @Axia describes, I also fully agree with @Raphael_Anode that tracking read time is a critical metric for delegates and should be used exclusively going forward.

Putting in the work to actually read the forum requires significant time and effort - it’s work. It is also the most direct way to gain the deep context necessary to understand the complex grant proposals and the inner workings of the Collective. Without that foundation, we cannot provide the level of oversight the community expects.

I’m also disappointed and concerned that some delegates have created fake accounts to inflate their engagement to game the rewards program. To our team, that discounts the hard work that we and other delegates put into reviewing and engaging with grant authors to ensure that the best proposals are developed and funded.

Finally, I agree with @Axia that we may need to evolve towards outcome-oriented measurements. We should explore a stewardship incentive for delegates who deliver the kind of high-impact, strategic work that Axia highlighted. This ensures we are rewarding both the essential work of staying informed and the professional results that move the Rootstock Collective forward.

1 Like

Thank you to everyone for the thoughtful engagement on this.

We support @Raphael_Anode’s direction of read-time as the primary metric for the current cycle. It is the least gameable signal Discourse tracks natively, rewards sustained attention, and keeps admin overhead minimal. We agree that likes have proven too gameable to carry weight.

We also share the spirit of what @Axia and @DAOstar_gov raised around outcome-oriented measures and the “stewardship incentive” framing. Read-time captures engagement, but not the differentiation between delegates whose analysis actually shapes decisions and those who merely stay current. The challenge we see with outcome-based measures is the unresolved design question of who decides what counts as high-impact work. Peer review has known pathologies that @Raphael_Anode already raised, admin-assessed review adds centralization, and time-delayed outcome tracking creates lag. None of these is disqualifying, but each needs serious design work before any metric can ship.

On this direction, @SEEDGov has firsthand experience with Arbitrum’s delegate incentive programs across multiple iterations, including attempts to incorporate outcome-oriented measurement that were ultimately set aside in favor of activity-based frameworks. We would value their perspective on what worked, what did not, and which lessons transfer to a smaller, more focused program like RC.

We welcome continued exploration as the program evolves.

2 Likes

Hi @Tane, thanks for the ping and for the opportunity to share our experience on this topic.

At SEEDGov, we’ve had the opportunity to act as Program Managers for delegate programs in two different governance ecosystems, Arbitrum and Velora (formerly ParaSwap), where we also served as Governance Facilitators. While both DAOs differ significantly in size and complexity, we approached them with a similar framework, adapted to each context.

Initially, delegate programs were designed in a fairly straightforward way. To determine eligibility and rewards, we tracked metrics such as voting participation in the month (relative to total proposals), voting rationales, voting over the past 90 days, and delegate feedback. The latter was the only subjective component, assessed through a structured framework where we evaluated the quality and impact of each delegate’s forum participation, prioritizing substance over quantity. These factors combined into a monthly score that determined rankings and compensation.
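As a rough illustration, a blended score along these lines might be computed as follows; the weights and the 0–1 normalization are hypothetical assumptions for the sketch, not the actual formula used in those programs:

```python
# Hypothetical sketch of a blended monthly delegate score.
# Weights and 0-1 normalization are illustrative assumptions only.
def monthly_score(votes_cast: int, total_proposals: int,
                  rationales: float, participation_90d: float,
                  feedback_quality: float) -> float:
    """rationales, participation_90d, feedback_quality are pre-normalized
    to 0-1 (e.g. a subjective 0-5 rubric score divided by 5)."""
    weights = {  # illustrative only
        "month_voting": 0.35,
        "rationales": 0.25,
        "voting_90d": 0.20,
        "feedback": 0.20,
    }
    month_voting = votes_cast / total_proposals if total_proposals else 0.0
    return (weights["month_voting"] * month_voting
            + weights["rationales"] * rationales
            + weights["voting_90d"] * participation_90d
            + weights["feedback"] * feedback_quality)

# Example: voted 8 of 10 proposals, strong rationales, solid 90-day
# record, feedback rated 4/5 by the Program Manager.
print(monthly_score(8, 10, 0.9, 0.8, 4 / 5))
```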

Since this system blended objective and subjective criteria, the main challenge was ensuring transparency, clearly tracking activity, justifying evaluations, and remaining open to feedback and corrections. Overall, we believe it worked well, as delegates generally trusted the process and were satisfied with our management.

Over time, we identified an opportunity to better differentiate between delegates who fulfilled basic responsibilities (votes and justifications) and those who made higher-impact contributions. This led us to design a revised model (which we implemented in Velora, though not in Arbitrum), splitting the program into two components:

  • Base delegate activity, focused on objective metrics such as voting and justification.
  • Extraordinary contributions, including value-adding proposals, meaningful feedback that improves the original proposal text or leads to its withdrawal if potentially harmful elements are identified, content creation and diffusion, partnerships or integrations, participation in panels or events, and other activities beyond standard governance duties.

This second component was evaluated qualitatively, based on predefined criteria such as impact, quality, and effort. Delegates who only performed baseline activities received lower compensation, while those making additional contributions could significantly increase their rewards.

We implemented this model experimentally in Velora, and it performed very well. Delegates, motivated to improve their compensation, actively engaged in higher-value contributions, creating a more dynamic and productive environment for the protocol. The biggest challenge was to establish a framework that would ensure delegates do not engage in minor, inconsequential activities just to farm points, but are instead incentivized to make contributions that have real value and impact.

Encouraged by these results, we began working on further formalizing and structuring this approach with a more comprehensive framework. Unfortunately, Velora later faced a financial downturn, which forced the suspension of all non-essential programs, preventing us from continuing its development.

Even so, we are very satisfied with both the approach and the outcomes. While voting and justification are critical for governance security, we strongly believe delegates can contribute far beyond that. With the right incentives, they can evolve into active contributors, generating tangible value and supporting the overall growth of the protocol or network.

Of course, we are always open and available to share our experience in more detail and brainstorm ideas tailored to Rootstock’s specific needs in this area.

5 Likes

Thank you @SEEDGov for posting your team’s experience. Regarding the Extraordinary contributions at Velora, how did you measure impact? Was it more of a subjective analysis? Personally I like the idea of baseline contributions plus extra contributions as the “carrot” to entice higher-value contributions, but I would be interested in how you measured and validated these Extraordinary contributions at Velora. Were there any issues with delegates who may not have been recognized for their extraordinary contribution, and was there an appeal process? Personally, I can see that getting very complex and leading to subjective opinions on contributions, but I would be interested to hear how the team dealt with these issues. Tks for posting!

1 Like

Hi @DAOstar_gov !!

You’ve identified the core challenge: while objective metrics are easy to track and measure, introducing subjective components significantly increases complexity and requires safeguards to ensure the system remains solid:

  • First, establishing a clear framework that defines in advance what qualifies as an extraordinary contribution eligible for additional compensation. This required close collaboration with the Velora team to understand the protocol’s needs and how the DAO could effectively contribute. Based on this, we created a well-defined list of desirable delegate activities.
  • Second, defining a consistent evaluation methodology. In Velora, we used a 0–5 scoring system based on the impact of each contribution, its relevance (whether the contribution aligned with the protocol’s needs), and its effectiveness (whether the contribution achieved the intended objective and produced some tangible impact), among other criteria; see the sketch after this list. We also developed guidelines to define what “impact” meant. For example, for content creation we evaluated reach, originality (avoiding repeated topics), level of personal input (not purely AI-generated), etc. For partnerships, we looked at metrics such as TVL or generated volume. For governance contributions, we assessed whether proposals addressed meaningful issues or whether feedback led to significant changes, such as improving a proposal or even withdrawing or flagging a malicious one.
  • Third, ensuring full transparency. All tracking was maintained in a public spreadsheet shared in the forum, where we not only listed each delegate’s contributions but also included detailed analysis, assigned scores, and clear justifications aligned with the framework.
  • Fourth, remaining open to feedback. Being accessible for questions, comments, and criticism is essential. In Arbitrum, we initially implemented an appeals process, but it had unintended consequences: delegates focused more on disputing scores than contributing, and we as Program Managers spent more time handling appeals than adding value. As a result, we removed it in a later iteration. In Velora, we chose not to include an appeals system from the outset, instead prioritizing stronger justification of evaluations, while keeping direct communication channels open for any concerns.
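As a rough illustration of the 0–5 rubric described in the second point above, the structure might look like this; the criteria names and the simple averaging are assumptions for the sketch, not the exact Velora methodology:

```python
# Illustrative 0-5 rubric for scoring an extraordinary contribution.
# Criteria and the plain average are assumptions, not Velora's exact method.
from dataclasses import dataclass

@dataclass
class ContributionScore:
    impact: int         # 0-5: tangible effect produced for the protocol
    relevance: int      # 0-5: alignment with the protocol's stated needs
    effectiveness: int  # 0-5: did it achieve its intended objective?

    def total(self) -> float:
        parts = (self.impact, self.relevance, self.effectiveness)
        assert all(0 <= p <= 5 for p in parts), "each score must be 0-5"
        return sum(parts) / len(parts)

# Example: a partnership judged high-impact and relevant,
# but only partially effective.
print(ContributionScore(impact=5, relevance=4, effectiveness=3).total())
```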

When subjective elements are involved, the challenge is not only to be fair and impartial, but also to be perceived as such. This is why we invested significant effort in clearly explaining and justifying every evaluation based on a predefined and transparent framework.

It’s also important to note that this system relies heavily on trust and professionalism. The Program Manager plays a critical role and must be efficient, consistent, open, and transparent to build confidence that evaluations, while subjective, are as objective as possible within the defined guidelines. In our case, we did not encounter conflicts with delegates, which we consider a strong validation of the approach and our work.

If you’d like to explore this further, we’re happy to share the program framework with you. It’s worth noting that this was a 6-month pilot, and based on the experience, we were already working on a more robust version aimed at further reducing subjectivity through clearer and more predictable evaluation criteria.


PS: I’m happy to dive deeper into any aspect if you have questions or would like more details about this program. It’s one I’m personally proud of; I was directly involved in its design, management, and the subsequent iteration for a next phase that, due to Velora’s financial situation, we ultimately couldn’t implement (Marian speaking here, haha).


Perhaps some of the delegates who participated in this program could share their perspective from the delegates’ point of view.
cc. @Curia @Ignas @Tane @DAOplomats

2 Likes

@SEEDGov I appreciate the intent here, but I am not supportive of this direction.

From my experience as a delegate and contributor across multiple DAOs, including Arbitrum where SEEDGov served as program manager, delegate compensation works best when it is simple, legible, and built around clear incentives and low-friction participation. Once a framework relies on a third party to assess contribution quality, it introduces bureaucracy and subjectivity without clear evidence that it improves governance outcomes.

In Arbitrum, this also created unhealthy dynamics around point-scoring, appeals, and disputes over evaluation, which added a lot of overhead, program costs, and negative sentiment without obviously improving governance quality.

Delegate reward programs work better when they minimize process burden for both the program manager and participants. If a requirement is mostly functioning as bureaucracy rather than meaningfully improving the program’s goals, it is usually better to simplify it or convert it into a lighter incentive. This is aligned with the system we currently have in The Collective, which is facilitated by @Raphael_Anode and @Kaf_Anode.

I was not involved in Velora specifically, but there is now enough signal across the governance space to suggest that putting an intermediary in the position of judging delegate quality is usually not the strongest design choice. Rootstock would be better served by a simpler and less subjective approach.

Tagging Carmen from @DAOstar_gov who recently raised an important point that now is a good time for delegates to prioritize the strategic alignment call that we prepared for a few weeks ago.

2 Likes

I have to align with the concerns raised about introducing subjective, third-party evaluations into the compensation framework. Programs work best when they are efficient and straightforward for everyone involved. The delegates are already proving their vigilance; just looking at the recent voting records and the active daily discussions in the Telegram group shows we have a highly engaged group that is keeping a close eye on the treasury. Rather than adding complex evaluation criteria, we should add simple, high-visibility engagement metrics. Setting up a standardized monthly delegate sync (similar to what works very well in Arbitrum, 1inch, or Uniswap) and tracking attendance on regular ecosystem X Spaces would do far more for our visibility.

X spaces are a good idea… will jam on this a bit…

The Arbitrum delegate program was one of the most expensive to manage that I know of.
We now have a simple, effective and very high-level delegate program here.

We looked at like manipulation in detail and honestly it was pretty easy to spot and not that heavy.

I think the discussion around top level alignment is really important, because this defines what matters. Everything else is then downstream from there.

4 Likes

Just to clarify, we are not proposing to implement in Rootstock the model we designed for Arbitrum and Velora. In response to @Tane’s comment and @DAOstar_gov’s question, we were simply sharing our experience in those governance systems. These frameworks were specifically designed for the unique needs and characteristics of each ecosystem and evolved through extensive testing and iteration alongside their respective Foundations. Every case is unique, with its own dynamics over time and particularities, and it’s not a good idea to simply copy and paste frameworks from other experiences.

Regarding attending community calls: it may be necessary, but we do not believe it should be a primary driver of compensation, as attendance alone does not generate any meaningful impact for Rootstock. Specifically on Uniswap, as mentioned above, there was a particularly notable case of delegates who joined the call simply because it was a requirement for receiving contributions, but did not participate in any way; there was a suspicion that some of them joined the call and left the screen running in the background. Unfortunately, when implementing ideas like this, it is necessary to first identify potential vulnerabilities and how gameable they might be, and take steps to prevent them.

100% agree with this.

2 Likes

Thanks @SEEDGov for sharing the experience.

Speaking as Pud. Curia has participated in both Arbitrum and Velora. While I don’t have direct experience myself, one of our team members, England, was responsible for engaging in both DAOs, so this reflects his experience as well as our team’s view.

From what he shared, my main hesitation with this model is that the program is still quite dependent on the Program Manager. Even with structure in place, evaluation ultimately relies on manual judgment, which opens the door to bias and inconsistency. Curia has faced misjudgments before and had to self-declare to correct them. That’s not a failing of any individual PM, but structurally, leaning this heavily on manual assessment is hard to scale.

Outcome-based evaluation sounds compelling in theory, especially in a DAO context. In practice though, the pattern across iterations (including in the programs you’ve managed) has been a gradual drift back toward activity-based frameworks. I find that more convincing and practical as a starting point: more legible, easier to automate, and less surface area for dispute.

For those reasons, I’m not convinced this is the right fit for Rootstock at this stage. I’d personally prefer to see the program move toward something more transparent and automated over time, with less dependence on individual evaluation.

Hey all,

I’m mostly following all the comments silently, but attentively.

I agree we should look for leaner and more automated metrics, even though they are known not to be trivial (case in point: our experience with like counts, and past and present discussions about other metrics).

I would really avoid evaluation-based metrics, as these would demand significant additional dedication from each delegate to evaluate, contest, and defend, all for an indirect contribution to the ecosystem. One could argue it produces better governance, but it is inherently inefficient and, as such, should be avoided.

I want to reinforce what others have said: we should direct our focus back towards having strategy alignment sessions together with Rootstock Labs. With or without a compensation incentive, this alignment would give delegates better capacity to help the ecosystem, foster partnerships, leverage professional experience and connections, position accordingly at events, and vote better on still-uncertain initiatives such as investing in events. I’ll gladly do all of that even if there’s no direct compensation; if there is, great.

1 Like

I fully agree with what has been discussed so far. I believe the first step is to move forward with the proposed dialogue between the Lab and the DAO to clearly identify and define strategic objectives, align visions and expectations, and then develop concrete actions based on that.

Right now, I see many delegates who are motivated and eager to contribute more, but lacking clear direction on where to focus their efforts, which ends up creating a degree of paralysis while waiting for those key questions to be resolved.

I also joined the Velora delegate program. From what I saw, each model has its own strengths; I don’t think one approach is fully better than the other.

For Rootstock, yes, I lean toward delegate rewards working best when the system is simple, clear, and easy to track.

I understand that using voting power as a tiebreaker when reading time is equal has already been decided. At the same time, there can still be room for small bonus points for extra contributions that normal metrics may miss. As I commented before, posts on governance education, or promoting Rootstock initiatives on X, can bring value to the ecosystem and help grow awareness.

But any extra criteria should match Rootstock Labs’ goals and what the ecosystem needs.

One more thing: every program also needs testing and iteration to get better over time. Looking forward to seeing our program evolve.