We appreciate @rspa_StableLab for this detailed breakdown and for kicking off this important discussion on delegate compensation.
We support the initial criteria (voting + rationales) and the data-driven approach. The need for a fair tie-breaker is clear, and “Likes Received” is a good, available metric and a useful proxy for positive community contribution.
As active delegates ourselves, we believe our primary focus is (and should be) on high-value activities that drive the Collective forward. Right now, as you know, this is overwhelmingly centered on the Grants program.
An effective compensation system should align incentives with our core responsibilities. While “Likes” are a useful measure of positive engagement, they don’t fully capture the value of our main job: reviewing grant proposals, providing in-depth feedback, and performing due diligence.
We believe the tie-breaker should be enhanced to more directly reflect this. To build on the current framework, we’d like to suggest a set of metrics that balances general engagement with this critical grant work:
- Likes received (Grant Proposals): This provides a proxy for measuring the quality and community validation of a delegate’s feedback on grants.
- Comments (Grant Proposals): This directly measures a delegate’s primary role of providing in-depth feedback and performing due diligence.
- Days visited: This measures consistent, day-to-day presence and awareness of forum activities within the month.
A crucial refinement for any metric using “Likes” (both general and grant-specific) would be to only count likes from forum users at Trust Level 1 or higher. This simple filter would ensure the metric reflects genuine community validation and significantly reduces any potential impact from bot or spam accounts.
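To make the filter concrete, here is a minimal sketch of how it could work against Discourse’s public JSON endpoints. We’re assuming the standard `/post_action_users.json` route (where `post_action_type_id=2` is a like) and `/u/{username}.json` for trust levels; the forum URL is a placeholder and exact response fields may vary by forum configuration, so treat this as illustrative rather than a finished tool:

```python
import requests

FORUM = "https://gov.example.org"  # placeholder forum URL

def liker_usernames(post_id: int) -> list[str]:
    """Return usernames of everyone who liked a given post.

    Assumes the standard Discourse endpoint; post_action_type_id=2 is 'like'.
    """
    r = requests.get(
        f"{FORUM}/post_action_users.json",
        params={"id": post_id, "post_action_type_id": 2},
        timeout=30,
    )
    r.raise_for_status()
    return [u["username"] for u in r.json().get("post_action_users", [])]

def trust_level(username: str) -> int:
    """Look up a user's trust level from their public profile."""
    r = requests.get(f"{FORUM}/u/{username}.json", timeout=30)
    r.raise_for_status()
    return r.json()["user"]["trust_level"]

def qualified_like_count(post_id: int, min_trust_level: int = 1) -> int:
    """Count likes on a post, ignoring likers below the trust-level floor."""
    return sum(
        1 for name in liker_usernames(post_id)
        if trust_level(name) >= min_trust_level
    )
```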
Collaborating on the “Simplicity” Goal
This brings us to your most important point: the system must be “extremely simple” with low administrative overhead.
We completely agree. While all this data is publicly available on the forum, manually compiling, filtering, and weighting these metrics for all delegates each month is the exact “administrative overhead” the original post hoped to avoid. This is the central challenge: How do we get an accurate, role-aligned, and bot-resistant system that is also simple to administer?
We believe the solution is to have a simple automation pipeline for this. The facilitators’ time is best spent on other things, not on manual data entry and filtering.
We’ve compiled a table detailing October performance for the current recognized delegates, based on our suggested metrics plus a few additional ones.
We think the community should explore adopting a simple automation pipeline for tracking tie-breaker metrics.
This pipeline would automatically track and report on the key metrics the community decides on (like the four we suggested).
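As a rough illustration of how light such a pipeline could be, here is a sketch that pulls each delegate’s monthly forum stats from Discourse’s public user directory (`/directory_items.json?period=monthly`) and writes a simple CSV report. The forum URL, delegate usernames, and output fields are placeholders for illustration; the grant-specific likes and comments would need an additional per-category pass not shown here:

```python
import csv
import requests

FORUM = "https://gov.example.org"            # placeholder forum URL
DELEGATES = ["delegate_a", "delegate_b"]     # hypothetical delegate usernames

def monthly_directory() -> dict[str, dict]:
    """Fetch this month's per-user stats from the public user directory.

    Assumes the standard Discourse directory endpoint; pages until empty.
    """
    stats, page = {}, 0
    while True:
        r = requests.get(
            f"{FORUM}/directory_items.json",
            params={"period": "monthly", "page": page},
            timeout=30,
        )
        r.raise_for_status()
        items = r.json().get("directory_items", [])
        if not items:
            return stats
        for item in items:
            stats[item["user"]["username"]] = item
        page += 1

def write_report(path: str = "delegate_metrics.csv") -> None:
    """Write a simple monthly tie-breaker report for the delegate list."""
    directory = monthly_directory()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["delegate", "likes_received", "posts", "days_visited"])
        for name in DELEGATES:
            item = directory.get(name, {})
            writer.writerow([
                name,
                item.get("likes_received", 0),
                item.get("post_count", 0),
                item.get("days_visited", 0),
            ])

if __name__ == "__main__":
    write_report()
```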
This approach achieves all goals:
- It’s “extremely simple” to administer: It removes the “administrative overhead” you mentioned. Facilitators would receive an automated report, or this data could feed directly into the current dashboard you’re working on via API.
- It’s transparent and verifiable: An automation pipeline is consistent and auditable by anyone, and community members could just as well verify the data by counting it manually, as it is publicly available on the forum.
- It’s far more accurate: It directly aligns compensation with the delegate’s main role.
- It’s more robust: It filters out low-quality and spam engagement.
A key benefit of this automated approach is also its flexibility. As the collective activities evolve and our “main role” shifts, the community can agree to adjust which metrics are tracked by the pipeline.
Furthermore, this opens up much more design space for future iterations. Instead of just a tie-breaker, these automated metrics could be combined into a more comprehensive “Composite Score” with adjustable weights for each metric. This would allow us to move beyond simple pass/fail criteria and proportionally reward all valuable contributions, giving us a much more powerful tool to incentivize the specific behaviors the Collective wants to promote.
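To show how small the step from tie-breaker to composite score is, here is a hedged sketch of one possible scheme: each metric is normalized against the cohort maximum and combined with community-chosen weights. The metric names, weights, and example numbers are placeholders, not a proposal:

```python
# Hypothetical example: metric names, weights, and values are placeholders.
WEIGHTS = {
    "grant_likes_received": 0.4,
    "grant_comments": 0.3,
    "likes_received": 0.2,
    "days_visited": 0.1,
}

def composite_scores(delegates: dict[str, dict[str, float]]) -> dict[str, float]:
    """Normalize each metric to the cohort max, then take the weighted sum."""
    maxima = {
        metric: max(d.get(metric, 0) for d in delegates.values()) or 1
        for metric in WEIGHTS
    }
    return {
        name: round(
            sum(
                WEIGHTS[metric] * d.get(metric, 0) / maxima[metric]
                for metric in WEIGHTS
            ),
            3,
        )
        for name, d in delegates.items()
    }

# Example usage with made-up numbers:
print(composite_scores({
    "delegate_a": {"grant_likes_received": 12, "grant_comments": 8,
                   "likes_received": 30, "days_visited": 20},
    "delegate_b": {"grant_likes_received": 5, "grant_comments": 10,
                   "likes_received": 45, "days_visited": 28},
}))
```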
We are happy to collaborate with the facilitators and the community to build the MVP for this as a public good. We believe that by working together and leveraging automation, we can build a robust, fair, and simple compensation framework for all delegates.