This is an essay. It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article, nor is it one of Wikipedia's policies or guidelines, as it has not been thoroughly vetted by the community. Some essays represent widespread norms; others only represent minority viewpoints.
This page in a nutshell: RfA inflation is the inflation of standards at RfA, which is a large problem on Wikipedia. The solution is to judge candidates not on statistical metrics, but on their level of trustworthiness.
RfA inflation is the belief that the standards applied to RfA candidates rise as time goes on. It is a result of a statistical approach to candidate analysis, with the statistics demanded gradually rising over time. The most obvious danger is an overall decline in successful RfAs, leading to a decline in active administrators. Problems will result if there are too few active admins to do the tasks that require the admin toolset.
RfA inflation can take many different forms, but it often involves statistical analysis of RfA candidates, where the demand for ever-increasing positive statistics (often in the form of editcountitis) becomes a factor in RfA success.
Here is a list of a few of the forms it can take.
The reasons for RfA inflation are probably as complex and varied as those for price inflation, but a few theories can be put forward.
All of this contributes to a loss of potential admins. For example, as can be seen here, the number of unsuccessful RfAs is going down: there were 155 unsuccessful RfAs in 2010, but only 38 in 2015. The decline is not limited to unsuccessful RfAs: 75 admins were appointed in successful RfAs in 2010, but only 21 in 2015. The overall rate of RfAs is falling, and this is largely because of the higher standards for admins discussed above.
The main problem that may result is having too few active admins for the admin workload. While the unbundling of tools has helped to some extent, some tasks can only be done with the admin toolset. While at present there does not seem to be a particular crisis, this could happen in the future, especially if RfA inflation continues unabated.
The solution is to judge candidates not on statistical metrics, but on their level of trustworthiness and need for the tools. While statistics help to determine overall levels of participation, they do little to help with assessing either of these. A candidate may have low participation in comparison to many editors, but edit count is a poor way to assess trustworthiness: a reasonable track record can be established from a few thousand edits.
Also, try not to see adminship as a trophy, status symbol, or reward for good contributions. Instead, it should be seen as a toolset necessary for certain activities on Wikipedia. The only two important questions are: will the editor use the tools correctly, and are they likely to use them? If both are answered in the affirmative, then questions of whether they "deserve" the tools become moot. De-emphasize the statistics, and direct your arguments to these points.
Another good argument is that of being a net positive. Admin candidates will always have made mistakes in the past, but these should be weighed against their positive contributions. Demanding ever higher levels of perfection may run counter to the well-being of the project.
Perhaps one of the most inflated requirements is prolific content creation, based on the belief that content creation teaches editors how to interact with other editors. But there are arguments against this: other kinds of work, such as anti-vandalism and dispute resolution, also teach this skill and should not be discounted.