Benefits of having "trusted / high quality" user recognition
Starter thread / A couple of conversation starters
Philippe opened the taskforce asking: "What have we agreed on in terms of quality? Where is the community in terms of the quality discussion? What do we NOT agree on? What have we not discussed about quality, as a community? What sort of information would be useful in terms of helping us think this through?"
Discussion initially moved to the Five Pillars, newcomer guidance, and the nature of reliable sourcing, before branching out into other threads. These points are summarized in their own threads.
Thoughts to date
- Starting point
FT2 followed up an earlier post which stated that "creating an official usergroup of trusted (or good quality, or senior) content editor might be the single biggest step towards helping here that we can make":
Suppose a community process existed for recognizing editors who are trusted in their content work. This means (e.g.) they consistently work well on content, edit neutrally, don't edit war, collaborate instead of filibuster, make generally good edits, cite well, improve content, debate issues instead of attacking personalities, and have good general ability. No "extra rights", but...
- They have an investment in extra standing (reputation-wise), which is valuable and a source of reward. It will tend to be guarded and incentivize people.
- We can patrol articles and edit wars more easily by highlighting these users in the article history (valuable information).
- Not all editors will want or seek adminship; this is a parallel way to recognize those who edit content. If it can be made something "everyone should aim for", we might have many thousands of editors flagged this way, a strong impetus for a quality-based community.
- In an entrenched area or difficult edit war, Arbcom or the community can now say "any trusted content editor may edit the article. Others = talk page only". With many trusted content editors there is no real POV or "too narrow editors" issue. Anyone who wants to edit the mainspace and can get community agreement that they edit content well may join in (others only on the talk page). Instant stability, good decisions, consensus, and quality on problem and edit-war articles! No harm done, no bias added, and articles are still edited by a wide pool that anyone can join.
- (added later) While wiki isn't a "profession", this provides a way users can stretch their skills and a means of self-evaluation and development as editors. A "recognized wiki-editor" qualification would also be good for ethos.
If we only have 2 - 4 recommendations, this might be one with profound scope to help in many ways - stability, quality, entrenched edit wars, experienced editor enjoyment, and incentive to gain good content editing skills and edit well.
- General discussion
Randomran felt it was a good idea if it were de-politicized and factional gaming could be avoided. Multiple FAs and a good track record of civility and consensus-building would be good criteria.
Piotrus felt it made some good points but expressed concern that a user who was reliable in one area might have trouble in another, or that users editing in divisive areas would be unable to gain such standing due to mud-slinging by other editors and cliques. He suggested that once a large number of trusted/senior editors were appointed, the task could be left to these users (who would presumably be less prone to mud-slinging): "In other words - I can trust the quality editors to make quality decisions, but I am increasingly disappointed with flaming and mistrust-sowing comments from "the peanut gallery" in various discussions I see".
FT2 responded: 1/ it would also need an effective and hard-to-game removal process; 2/ "appropriate recognition and self management where they have COI or other problems/strong views...[the] idea implies good editorship on articles the user cares about, too"; 3/ pure self selection can encourage divisiveness, where we're aiming for mass involvement and good standards. A two-step process might resolve the issue.
Bhneihouse liked the "overall feel" and wanted to further refine it.
- Non-anonymous experts
Piotrus noted that giving non-anonymous experts extra standing could be another (or complementary) approach. FT2 noted that experts don't always make desirable editors, and have the same learning curve as anyone else on how to edit appropriately:
- "This is about recognizing people who know how to edit and are accepted as good at doing so. Nurture those, and [everyone benefits] including for experts":
- "I'd tell the expert that (like joining any new project) they need to learn how to edit Wikipedia, which will be different from how they edit their own papers. But they can edit freely, and (since they are bright and used to scientific collaboration) they'll surely be recognized as a trusted content editor for our purposes nice and quickly, if they take a few minutes to understand how we work here. In fact I'd make that part of the "New user wizard" ("Do you have formal credentials in any field you plan to edit?" + guidance)."
Bhneihouse suggested a list of known credentials may be useful for screening.
- Sue Gardner's post
Sue Gardner (WMF) stated that this would be a useful and good idea, both for new editors (to identify trusted editors) and for not-new editors. She expressed a strong preference that such recognition should be assessed automatically rather than manually, to address scaling and "popularity contests" and to save time.
FT2 felt automation could help and could cut down the cases needing review, but that the benefits flowed from the idea that "some users can generally be trusted to do right, in article and article discussions, of any kind. Those are the users we want in this pool, because once identified, they provide a large population of experienced content writers not needing much guidance or checking, and capable of being given heavy disputes to put into good editorial order... The benefits here flow from their acknowledged trust to do right (broadly speaking) on any content matter, up to and including self-management of bias, interaction style [etc]". He felt this could not be assessed without human involvement.
Bhneihouse agreed about non-automation, and agreed with Piotrus and Sue on "popularity contests". She noted that "Perhaps a pivotal point in quality control is how Wikipedia "approves" and trusts editors? Perhaps another pivotal point is the actual "structure" of this process?"
Sue Gardner commented in response that this would be
- "a marker/label for people who are particularly trusted to have good judgment. Probably these would be people who've been around for a while, and understand the policies well, who are reasonable and thoughtful. I think that's a great idea. I think new editors would really appreciate being able to tell at-a-glance if an editor they didn't know was someone they should trust and listen to..."
She also noted the distinction would greatly help newcomers, who could understand the editing better and seek reputable advice, as well as helping those who deal with editorial behaviour and disputes. It would extend the usual network of trust, which usually doesn't scale well. She did not like the term "trusted editor" though (implies others are not trusted) and suggested "senior editor".
She also worried that the system for gaining the standard would be gameable or (per Piotrus) lock out editors in controversial areas. She felt the decision should be made by "thoughtful... experienced people" and editing criteria, not by simple voting, and perhaps a "trusted team" to identify such users. She concluded it was a good idea she would like to see work.
- How would such users be selected?
Piotrus suggested a user should show quality content (1 FA / 5 GAs / 50 DYKs) and be trusted only in areas where he has a reputation (WikiProject-based), which would be hard to game or disrupt. FT2 stated that writing certain content would not necessarily correlate with a trustworthy editorial approach generally, and would not show neutrality or good conduct on "pet subjects" or appropriate talk-page approaches. It could at best be evidence.
Randomran agreed some processes are "vulnerable to whim and personal opinion" and saw FA as one of the few that is hard to game as an indication that a user understands quality writing, adding later that "The human component can be there as a screen, as a veto, but people should really be judged by accomplishments that the widest number of Wikipedians cannot deny".
FT2 noted that the pool of good content writers contains a "fair proportion of users who couldn't meet the kind of role we're talking about" and suggested two possible approaches:
- Specific criteria (the user presents a portfolio meeting set criteria; objections must meet defined and evidenced criteria too). It's prescriptive and still somewhat gameable, but much harder to game than open voting. (details)
- A two-stage, highly automated process whereby users need only 50% community approval (hard to game, mass participation) plus 75% "trusted user" approval (high standards, veto, allows existing senior editors to see any community views and concerns), both parts held via SecurePoll for efficiency and to prevent "popularity contests". (details, also discussed below)
Woodwalker was "not against formalizing the status of quality users" but was concerned that expertise in one area might not equate to expertise in another. FT2 highlighted that a generally well-reputed and skilled editor could be more of an asset in the contexts under discussion, given issues of foibles and bias, collaboration and mass-editing skills, and the non-expert's ability to help others (including experts) make the best of their input. "If we're assessing what kind of editor can be broadly trusted to work on all kinds of difficult articles unsupervised... in a proper way... [then these qualities] will get you that person... a PhD won't". (link). Woodwalker agreed but suggested not calling them experts if they were not, and preferred a criteria-based approach. Piotrus suggested looking at their activity record and only considering concerns from the last 6 months (the criteria had included a 9-month cutoff).
FT2 noted that a panel would have the same issues of gaming and politics, and that we should trust the wider community instead; that is easier since all issues (including most alliances and gaming) would be visible and public. Rather than trying to be perfect on selection, create a method that is 95% valid but "slightly able to be gamed", along with a "clear and standardized removal process" and "some kind of scrutineers panel [for] cases claimed to be grossly affected by bias and canvassing, or where the results don't reflect appropriately on the user". Woodwalker agreed that the community, not a panel, was appropriate, and Piotrus agreed that addressing "popularity contests" was very important indeed. Philippe endorsed designing for 90% ("good enough"), and Bhneihouse stated the two-stage process sounded good and that process now needed adding to handle the exceptions.
- Further discussion on the two-stage proposal
FT2 described the latter as "a hybrid of enwiki Mediation Committee's nomination method (filter[s] good quality users and operates historically with no drama whatsoever) and a modification of the SecurePoll tool already in place... a bit more involved than 100% automation, but it is simple (once set up) and keeps almost all the benefits of automation, all the benefits of user involvement, and very little of the drawbacks of either, when merged." Its design goals were stated as "automation, low gameability, simplicity of experience to users, very low scope for politicking/dramatizing/popularity contests, and low time needed by participants".
- Some discussion took place on its talk page:
Randomran stated he had concerns that it could be gamed, and that a threshold is needed to weed out bad applicants and prevent filibustering.
FT2 noted that this would not help a clique, because all the community can do is 1/ stop someone getting 50% (still an easy level for a decent editor in such circumstances) and 2/ raise concerns; filibustering doesn't work because it isn't a "debate". Community concerns will be publicly visible after the 1st stage and inform the second, but the 2nd stage is not easily influenced by partisan cliques because its constituency is editors already considered to be of high quality and good judgement.
Piotrus agreed this "sounds plausible and indeed should not be very easy to game".
There was some discussion of how to bootstrap the process (the users who would operate the 2nd stage at the start); Randomran suggested using writers of 2+ FAs. FT2 stated it was a one-off issue and needed the highest quality of content editors to get a good start; he suggested using the subset of FA/GA writers who have also passed RFA (the latter attesting to other areas of trust, awareness and judgement). Since RFA often requires content work, this is a substantial pool of FA/GA writers and "probably enough" to start it off.
Randomran agreed that 2+ FA writers who had also passed RFA would be a strong pool, but (playing devil's advocate) was concerned that it might be seen as a cabal ("I'm not sure this is a bad thing in practice. But in principle, a lot of people just hate cabals"); that it might still just reflect popularity; and that it might still exclude editors in controversial areas. He felt it could perhaps be strengthened against these issues.
A number of threads talk about disputes, POV wars, retaining good editors in the face of bad editorship, etc. I said in a post that creating an official usergroup of "trusted" (or "good quality") content editors might be the single biggest step towards helping here that we can make, given we have only 2 - 4 recommendations total and want the ones that will do the most. Here's my "thought experiment" on why:
- Suppose there was a process similar to RFA (but easier!) for recognizing a user as "trusted" in their content work and article talk interactions only. By this I mean: they consistently work well on content, they edit neutrally, don't edit war, collaborate rather than filibuster, they make generally good edits, they cite, they improve content, debate issues instead of attacking personalities, and have a good general ability at content work. A user asks the community if they are trusted this way, like RFA or "rollback" or any other trust level the community can grant. No extra rights in editing, but...
- 1. We now have users with a specific standing. People who worked for that, will guard their editor standing and not want to lose it by bad editing. A vested interest in high standard content work. Incentive!
- 2. We can patrol articles and understand edit wars much more easily, because in article history we can highlight edits by "high quality" or trusted users, and bots can spot patterns involving trusted and other users. Of course not all "trusted users" are good and not all others are "bad", but it's valuable information on a dispute if you can see it in the history.
- 3. Not all editors are equal in approach (in their editing quality). Not all want or would get adminship. So we have something people can aim for as an ordinary editor. Enwiki has hundreds of thousands or millions of editors, but only 1700 admins. I'd like to see this so popular, so much "something a new editor is guided towards" by newcomer help and the other things discussed, that most editors who stay around a while will ask for this trust. Maybe 20k such users. Think what this does! (It also retains editors at Wikipedia - formal recognition/status is an incentive too.)
- 4. Finally, edit wars and entrenched problem areas. In a difficult edit war, Arbcom or the community can now say "any trusted content editor may edit the article. Others = talk page only". With many trusted content editors there is no real POV or "too narrow editors" issue. Anyone who wants to edit the mainspace and can get community agreement that they edit content well may join in (others only on the talk page). Instant stability, good decisions, consensus, and quality on problem and edit-war articles! No harm done, no bias added, and articles are still edited by a wide pool that anyone can join.
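The history-highlighting in point 2 is the most mechanical part of this proposal, so here is a minimal sketch of what a patrolling tool might do with it. Everything here is illustrative: the usernames, the revision format, and the trusted-user set are invented for the example and do not reflect any real MediaWiki API.

```python
# Hypothetical set of recognized "trusted content editors"; in a real
# deployment this would come from the wiki's usergroup table.
TRUSTED = {"AliceExample", "BobExample"}

# Illustrative slice of an article history (oldest first).
history = [
    {"user": "AliceExample", "comment": "copyedit"},
    {"user": "DriveByIP",    "comment": "remove sourced paragraph"},
    {"user": "BobExample",   "comment": "restore sourced paragraph"},
]

def annotate(history, trusted):
    """Mark each revision so a patroller can see at a glance which
    edits came from recognized content editors."""
    return [{**rev, "trusted": rev["user"] in trusted} for rev in history]

for rev in annotate(history, TRUSTED):
    marker = "[trusted]" if rev["trusted"] else "[      ]"
    print(marker, rev["user"], "-", rev["comment"])
```

A bot looking for the dispute patterns FT2 mentions would then scan the annotated list for runs of reverts where the flags alternate.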
So if we only have 2 - 4 recommendations, this might be one with profound scope to help in many ways - stability, quality, entrenched edit wars, experienced editor enjoyment, and incentive to gain good content editing skills and edit well!
I think it's a good idea if you can de-politicize it. Be mindful of factions trying to bolster their own point of view and slam the points of view of others. The more we can quantify, the better. For example, if someone has three FAs under their belt, then they clearly understand the processes and standards of quality that Wikipedia is looking for. That would get us halfway there, at least. The other half of being "trusted" on quality would be making sure they have a good track record of civility (not a jerk) and consensus building (not a partisan hack), which is harder to measure.
What you are proposing I see as an extension of autoreviewing. I like your argument in point 4). Such an improvement would help, but by itself I am afraid it will not change much. One technicality I see is that a user can be trusted in one field but have problems editing another. Another is what a colleague of mine noted some time ago: "no person interested in history is "adminnable" in Wikipedia." What he means is related to my mini-essay on "sticking mud". I am afraid that in the current system, quite a few editors who create good content would fail to become recognized as such in a community vote (just as they would fail at RfAdm), as their disruptive content opponents would tag-team together, scream murder, sling mud and create enough disruption and mistrust at their request that they would fail to gather sufficient support to pass. That said, I have a potential solution: perhaps after an initial few months of open voting, voting should be limited only to other "trusted content creators" (the same should likely be done for voting for admins). In other words - I can trust the quality editors to make quality decisions, but I am increasingly disappointed with flaming and mistrust-sowing comments from "the peanut gallery" in various discussions I see.
I'll post some of my more specific thoughts soon. --Piotrus 20:21, 26 November 2009 (UTC)
I like both these points (Randomran, Piotrus).
- The concept would need an effective removal process that's hard to game and not easily abused.
- Note some people will be prepared to build up a 3-month track record to "get into" a topic this way. But if removal is fair and easy, this isn't a problem - it's like IP block exemption that way: it takes time and effort to get, and it's easy to lose if abused. It doesn't need to be perfect, and the odd gamer or exception isn't a problem either. It's enough if it cuts the problem right down; those users who do evade are then easy to deal with, because they are very few and because most other users on the topic are good, balanced users who'll handle it quite properly, not edit war or dramatize. So it's "self-repairing".
- To address Piotrus' technicality, "trust" in this sense includes appropriate recognition and self management where they have COI or other problems/strong views. Anyone can be nice and neutral on an article they don't care about. This idea implies good (reliable, trustworthy) editorship on articles the user does care about too.
I'm fine with self-selection after a while. But it encourages divergence between "trusted content editors" and "all editors". Maybe look at enwiki Mediation Committee for a better way - section for mediators to comment, section for anyone else to comment, and set criteria for acceptance/veto.
So for example it might need a user to fill in a template of evidence on their editing, and get >= 50% at community feedback from editors with > 100 mainspace edits, plus >= 75% from at least 10 trusted content editors. We want to encourage mass involvement and good standards, so keep it based on pre-defined data and agreed percentages and acceptance/veto criteria. I'll work on this a bit.
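Since the whole point of this variant is that acceptance is decided by pre-defined data rather than debate, the check itself is trivially automatable. A minimal sketch: the percentages and the 10-editor minimum are the figures proposed above, while the function name and signature are mine.

```python
def passes_two_stage(community_support, community_total,
                     trusted_support, trusted_total):
    """Sketch of the proposed two-stage acceptance check:
    >= 50% support from qualified community feedback, plus
    >= 75% support from at least 10 trusted content editors."""
    if community_total == 0 or trusted_total < 10:
        return False  # not enough trusted editors weighed in
    community_ok = community_support / community_total >= 0.50
    trusted_ok = trusted_support / trusted_total >= 0.75
    return community_ok and trusted_ok

print(passes_two_stage(60, 100, 9, 12))   # 60% and 75%: passes
print(passes_two_stage(60, 100, 8, 12))   # trusted support too low
```

Nothing in the check depends on who said what, which is what makes it hard to filibuster: opponents can lower the percentages but cannot stall the process.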
How about becoming a trusted editor automatically after having written 1 FA / 5 GAs / 50 DYKs? No need for community voting; just show quality content that was recognized by others (FA reviewers / GA reviewers / DYK reviewers). In addition, the trusted editor would be trusted only for content areas recognized by appropriate WikiProject tags in content he has created. The approval procedure could be held on the given WikiProject's talk pages (and perhaps centralised via transclusion to a more general forum), and the voting would be open to members of that WikiProject as well as all other trusted editors. As such, the approval would be discussed by experts (specific and general), with little chance of the status being disrupted by wikipolitics or trolls. --Piotrus 22:04, 26 November 2009 (UTC)
No. Writing X many of this, that or the other doesn't necessarily correlate with trust in editorial approach generally. It doesn't evidence appropriate neutrality on pet subjects, nor talk-page discussion approaches. Nor is the reverse true; not writing these doesn't in any way deny trust. I'd take high-level content as evidence, but other things count under "trust". I'll try to write up a brief idea on this in a bit.
FT2 is right that some of these processes are a little too vulnerable to whims and personal opinion. The reason FAs are great is that they're the only status that's given by consensus. I've actually seen apparent GAs try for FA and get slammed hard, with people arguing they should be demoted.
Not to say that FA should be the only way to tell if an editor understands quality. But going with other measures could make it possible for a faction to pump up the credentials of their own narrow-minded editors.
I like the overall feel of this. I think we can work out details of what percentages are appropriate. Can someone tag this as something we want to further refine?
Non-anonymous experts are of course (very) desirable. But that doesn't mean they always make desirable editors. There are non-anonymous experts who can't "get" the idea of how we edit, can't collaborate, can't handle NPOV, have their own (non-policy) view on sources, write fine on their pet topics and edit war on others, .....
I wouldn't recognize expertise in this idea. Not because I'm anti-expert (I'm not), but because even in scientific publishing, expertise does not necessarily mean balance, neutrality, non-fringe views, appropriate conduct towards those they don't agree with, and the like. An expert needs to be able to show these basic editing skills like anyone else, because Wikipedia is not a publisher of original research and so on. This is about recognizing people who know how to edit and are accepted as good at doing so. Nurture those, and the project benefits for all -- including for experts.
I'd tell the expert that (like joining any new project) they need to learn how to edit Wikipedia, which will be different from how they edit their own papers. But they can edit freely, and (since they are bright and used to scientific collaboration) they'll surely be recognized as a trusted content editor for our purposes nice and quickly, if they take a few minutes to understand how we work here. In fact I'd make that part of the "New user wizard" ("Do you have formal credentials in any field you plan to edit?" + guidance).
Note: there are also "less formal" credentials that work; e.g. while Master Gardener is a formal credential (one that I have), many people don't know or understand the certification. Can we start culling a list of potential certifications somewhere for admins to use in screening?
Grrrr: LT has eaten my post twice, and apparently I didn't learn the first time, to compose in OO :-(
So: suffice to say, very quickly .... based on my own experiences as a new editor, I can tell you I would have found trust-labelling-of-editors very very helpful. So yes, I can imagine it being useful for non-new-editors, and it would also be useful for new people. It's a good idea.
I would however make a plea for trustworthiness being automatically derived out of on-wiki actions, rather than being manually assessed on a case-by-case basis. 1) It scales better, 2) it eliminates popularity contests, and 3) it prevents people needing to waste a bunch of time debating particularly controversial / edge cases. I know there are lots of challenges inherent in an automated approach, but I think the benefits would outweigh the drawbacks.
I want to think carefully on this. You wouldn't add a post like this lightly, so I'm going to think on it.
The first thought is roughly, keep it simple, and using automation to cut down the number of cases that need more than cursory human review (80/20 rule).
However, the benefits of something like this flow from the idea that some users can generally be trusted to do right, in article and article discussions, of any kind. Those are the users we want in this pool, because once identified, they provide a large population of experienced content writers not needing much guidance or checking, and capable of being given heavy disputes to put into good editorial order.
I don't see a way to judge it on any automated basis, though. The benefits here flow from their acknowledged trust to do right (broadly speaking) on any content matter, up to and including self-management of bias, interaction style, and the like.
That should be what we're guiding people towards. That's actually what we want and need -- and it's not hard to do either. It'll become a norm that people'll want to get it. Make it valuable, and people will value it. But I don't think it's open to easy automation. It would probably be easier to design a system with a simpler human element instead. I agree the intensity, drama, and diversion of communal resources seen at RFA isn't where something like this or on this scale would end up. Probably very different.
I'll think about it harder, but that's my initial thoughts why I've positioned it as I have.
I agree about not automating this process. I also agree about popularity contests, thanks Piotrus. So the admins surveying these users themselves have to be pretty much beyond reproach. Perhaps a pivotal point in quality control is how Wikipedia "approves" and trusts editors? Perhaps another pivotal point is the actual "structure" of this process? I apologize if what I am saying is obvious and derived from all of your statements -- I am trying to get us to bullet points that become more cohesive as we work.
Hi FT2.
I think I may have overstated the strength of my position in my earlier post; I was having some LT problems, which made me, in the end, probably overly succinct :-)
So let me make a couple of longer comments now:
It sounds to me like what you want to create is a marker/label for people who are particularly trusted to have good judgment. Probably these would be people who've been around for a while, and understand the policies well, who are reasonable and thoughtful.
I think that's a great idea. I think new editors would really appreciate being able to tell at-a-glance if an editor they didn't know was someone they should trust and listen to. I think also that one of the big points of pain for new people is that when their edits are reverted, they automatically assume "the Wikimedia community" has rejected their edit -- they don't understand that it's the act of an individual, and not necessarily a good or wise decision. If some editors were labelled as particularly trusted then A) new people might not be so quick to assume that everyone speaks with consensus authority, and B) they might actually be motivated to seek out advice and counsel from the ones who are specifically labelled as known to have good judgment.
I can also see how labelling-of-particularly-trusted-editors would be helpful for other experienced editors -- for the ArbCom, for people who do OTRS work, and so on. I don't have that kind of experience myself, but I can imagine how this would be useful for the people who do.
(((Basically, we currently all have our own informal mechanisms for assessing people's reputations, and learning who to trust. I trust people I know personally, like Philippe. And I trust people who my trusted people trust -- like, I first began trusting you, because Jimmy told me I could :-) But that's limited: it doesn't scale very well, and it takes a long time for those networks to develop, which makes things especially hard for new people.)))
So I think your idea makes a lot of sense: it would enable trust to scale better than it currently does.
So, after thinking it through some more, I have two comments/suggestions for you:
1) I worry about the word "trusted." If some editors are labelled as trustworthy, by implication other editors will be seen, or will feel as though they're seen, as not trustworthy. Which I think would make lots of people feel bad, and could make the "trusted" people targets for envy and anger.
In my old world (journalism) we used the designation "senior editor" for exactly this kind of person, and I'd recommend you think about using it for this. A senior editor in a newsroom generally does the same work as other editors, but the designation is an acknowledgement, and a signal to others, that they are especially seasoned and credible and wise. It's a label that new people would understand. And it wouldn't alienate other people -- they can still be good, constructive, useful editors, and they can aspire to earn senior status without feeling diminished by not having it.
2) I really worry about the system for attaining this status being gameable. I don't think you could afford to have an open voting component, because there will always be a few trolls and cranks who are super-motivated to game the system, and practically anybody can rustle up a few dozen friends to help them do that. I worry that if there's a voting component, it'd be a magnet for posturing and rabble-rousing and drama, which would end up wasting tons of good people's time.
I also think voting might make it impossible for anyone who edits controversial topics to gain this status. That would be bad, because as I understand it, some of our best and wisest editors focus on trying to bring neutrality to difficult topics. In a voting-based system they would get voted down, I think, by power blocs of editors with strong POVs.
So I don't think voting would work. You want these decisions to be made thoughtfully, by experienced people. So I would suggest this instead: I think you could have an (automated, low) bar that people need to reach before they're considered for this status. Like, maybe one year of editing experience, and a minimum of 200 edits. That would screen out people who simply haven't been around long enough, or edited enough, to have developed a good understanding of the policies. And then I think you would need a trusted team of people, who would investigate people's edit histories on a more qualitative basis, and seek out people who are particularly thoughtful and wise and constructive.
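The automated low bar described here is the one piece that really can be computed from on-wiki data alone. A minimal sketch, using the illustrative figures above (one year of editing, 200 edits); the function and its signature are assumptions of mine, not any real MediaWiki facility.

```python
from datetime import datetime, timedelta

def meets_automated_bar(first_edit, edit_count, now):
    """Screen out candidates who haven't been around long enough or
    edited enough; qualitative human review happens only after this."""
    long_enough = now - first_edit >= timedelta(days=365)
    return long_enough and edit_count >= 200

now = datetime(2009, 11, 26)
print(meets_automated_bar(datetime(2008, 6, 1), 350, now))  # passes the bar
print(meets_automated_bar(datetime(2009, 9, 1), 350, now))  # too new
```

The point of keeping the bar this low is that it only filters; it never confers the status itself, so it can't be gamed into a shortcut.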
Who could make those decisions? I don't know the internal workings of the projects well enough to say -- but I would guess the ArbCom could, or the ArbCom could nominate a group to do it, or the community could nominate a group that the ArbCom could then approve. Or maybe the ArbCom could nominate the first dozen senior editors, who could then set up a system for expanding their own ranks. It seems to me that the kind of people who create Featured Articles would be a good starting point -- but like I said, I don't know the internal workings well enough to really say.
Sorry this is so long, but I hope it's helpful. It's a good idea: I would really like to see it work :-)
Agreed on the controversial topics issue. I've seen reasonable people caught in the middle, and ending up with nothing but harassment (from both sides) to show for it. The more we can rely on other measures the better. The human component can be there as a screen, as a veto, but people should really be judged by accomplishments that the widest number of Wikipedians cannot deny.
Framing is important, so I support the "senior editor" name.
ArbCom doesn't have time to deal with that stuff; instead, how about we make all editors who have written 2+ FAs "senior"?
I think that's a good place to start. Simple, objective, hard to game, but achievable within 6 months to a year if you want it.
Urgh. No.
Good content writers may or may not be good at interacting and working with other users, may not be good in other topic areas, and so on. The FA-writer pool has its fair proportion of users who couldn't meet the kind of role we're talking about, though.
I'd suggest if anything this:
Users will be assessed on trust in their editorship. They must submit a portfolio of significant experiences and skills covering:
- Basic and peer reviewed article writing - typically at least 10 non-stub articles and 1 GA/FA.
- Specific editor skills - responded to at least 120 noticeboard issues spread across the major noticeboards, including FRINGE, RS, NPOV, BLP/N, COI/N, EDITWAR/WQA, content RFC/3O, and xFD (including at least one "rescued" AFD), plus basic template skills.
- Peer review skills - typically at least 5 GA and 2 FA reviews
- Collaboration skills - significant active involvement in a Wikiproject for at least 2 months, or equivalent.
- Editor dispute skills - addressing 10 or more disputes with a mix of amicable and hostile/improper editorship.
- Your own showcase - at least 3 items (not otherwise used here) that showcase your interests and abilities in any wiki area. These could be unusual or interesting content or editorial matters, media work, admin or patrolling, or any other area - your choice!
Users wishing to object must show diffs that clearly evidence any of the following in the last 9 months:
- Two or more instances of clear poor judgement (not just legitimate disagreement) related to NPOV, OR, CITE, V, RS, or COPYRIGHT.
- Two or more clear instances of personal attacks, attacking the person rather than the evidence, threats, or a thread in which the user filibustered, "gamed", edit warred, or obstructed consensus in an unreasonable manner.
- A history of poor xFD or other content process contribution
- A pattern of undue, serious incivility covering at least 5 instances.
- Evidence that the portfolio grossly misrepresents their content editorship.
- Any blocks or other formal warnings or sanctions by an administrator.
- Gross bad faith, breach of trust, deception, or any access removal related to poor conduct (including but not limited to puppetry, faked content or citations, concealed POV warring, conspiracy to disrupt the wiki, and the like) - without time restriction.
Any diffs should be self evident, with minimal context or explanation, and clearly show the behavior concerned.
Claims (portfolio or concern) not evidenced as above are disqualified except in exceptional circumstances.
That allows users a rough criterion for "evidence that should be publicly shown".
It's not even that demanding - 10 basic articles, one decent peer-reviewed article, half a dozen peer reviews, a couple of days work on noticeboards (to show specific areas), and some dispute resolution and collaboration.
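The portfolio thresholds in the draft above could likewise be screened mechanically before any human review. A rough sketch of that idea, where every field name and minimum is taken from the draft criteria but the data structure itself is hypothetical:

```python
# Minimums drawn from the draft portfolio criteria above.
PORTFOLIO_MINIMUMS = {
    "non_stub_articles": 10,      # basic article writing
    "ga_or_fa_articles": 1,       # peer-reviewed article writing
    "noticeboard_responses": 120, # specific editor skills
    "ga_reviews": 5,              # peer review skills
    "fa_reviews": 2,
    "disputes_addressed": 10,     # editor dispute skills
    "showcase_items": 3,          # the editor's own showcase
}

def portfolio_shortfalls(portfolio: dict) -> list[str]:
    """List every criterion the submitted portfolio fails to meet.

    An empty result means the portfolio clears the numeric bar;
    qualitative review (and any objections) would still follow.
    """
    return [name for name, minimum in PORTFOLIO_MINIMUMS.items()
            if portfolio.get(name, 0) < minimum]
```

As with any numeric screen, this only checks quantity; whether the cited work actually evidences good judgement is exactly the part that needs human eyes.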
I don't like being prescriptive; I think it's gameable. But it's still better than 1-2 FAs. It'd probably work.
I'm not against formalizing the status of quality users (the German wiki has already experimented with something like that in the form of 'quality revisions', revisions that can be flagged by experts only). However, a user who is an expert in Indian cooking isn't an expert in quantum physics, for example. The most it would give us is a very general indication of whether we can assume good faith in a certain contributor.
The essence of wiki work is that certain approaches are key. Approaches (or their lack) aren't the same as expertise (or its lack).
If given the choice between an expert who cannot show good editorial approaches and a good editor with the right approaches who lacks specific topic expertise, then for this project, choose the latter, not the expert. Why? Many reasons:
- The expert (in such a scenario) may have his own foibles and bias, or unwillingness to hear others, or non-neutral stance
- The expert doesn't know how to collaborate, or work with others in a mass-edited project. He may be a liability, driving others off, absorbing immense time, and harming the communal fabric. We get one perfect article (only we don't know if it's biased, because every community dialog about it descends into argument and name-calling) and not a lot more.
- The non-expert with good approaches will listen to others, consider the views, research them and check the details. They may not know, but they know how to examine others' work and check facts. They foster others to work with them and as a community the work gets done to a high standard even so.
Part of Wikipedia is that although we want high quality, we aren't a cutting edge academic source. We'd like to have some of that, but it's not (as I understand it) our actual core goal.
If we're assessing what kind of editor can be broadly trusted to work on all kinds of difficult articles unsupervised and do so in a proper way (as this thread considers), then the qualities I've outlined will get you that person, and fairly high-quality (though not cutting-edge) writing. A PhD won't.
$0.02 :)
I agree, but let's not call them 'experts' when in fact they're rather 'good editors'. I like the idea to create some kind of special user status whenever an editor reaches the requirements you mentioned (06:09, 27 November 2009). What do you think about quality revisions though? It's also a way of showing the reader how the best editors rate the quality of an article.
I don't necessarily see it as better, but I'm not opposed to making this more complex (still, I like KISS...). Regarding objections, I would strongly suggest taking an editor's activity into consideration. In other words: editors who are very active and have edited for a long, long time (and so would logically be likely to be good or trusted editors) are also more likely to have enemies, or at least more "exception-to-the-rule" dirt that can be brought up against them (lookie, warning from 2006, PA from 2007, ArbCom from 2008... etc.). As such, I'd suggest that, assuming the applying editor has been reasonably active in the past 6 months, examples (diffs) of poor judgment should be no older than that period. --Piotrus 05:22, 30 November 2009 (UTC)
- My own view is that most "professions" have some kind of "continuing professional development" post initial qualification. While wiki isn't a "qualification", we could well ask "what are we offering to users to stretch their skills and as a means of self evaluation and development as editors". Something like this, a "recognized wiki-editor qualification", would be good for the ethos that way too.
- Piotrus - the "reasons to object" were crafted as requiring both specific types of bad activity or judgment to be shown via evidence alone, within a time limit of the last 9 months, as drafted. With luck that solves your concern?
Long thread, quick replies:
- Per Piotrus: Never underestimate the importance of choosing terminology well. "Senior editor" works just fine for me. We can debate terminology if this goes ahead, but the basic point's good.
- A panel's fine, but you hit the old problem then: this is a panel that indirectly controls who's designated as a "senior editor". So that becomes a focus of allegations, games, and so on, as Arbcom can be. We know where that path goes, and if avoidable, let's avoid it.
- This area's easier than Arbcom because editorial behaviors are almost all public record (even alliances emailed in private become obvious on wiki a lot of the time), so bad conduct's visible. The community was founded on open decision making, and for all other senior roles, it works just fine. Admins, arbs, both done by the open community. Let's see if we can avoid losing or diminishing that. It's part of the "trust" model to trust the wider community where we can (with suitable precautions).
- Instead of a panel, or trying to be super-ideal on selection, we can have a "pretty good" selection, if we also have ways to effectively catch the exceptions. Don't let the minority fringe case distort what's fine and simple process for the majority of cases. So we might back a 95% valid (but slightly able to be gamed if determined) nomination system, by also having:
- A clear and standardized removal process
- Perhaps some kind of scrutineers panel who can review cases claimed to be grossly affected by bias and canvassing, or where the results don't reflect appropriately on the user.
- Arbcom's definitely the wrong ones for this.
Agree with FT2's no. 3; I think a panel isn't necessary. The arbcom isn't supposed to be involved in matters of content anyway (at least, the Dutch one isn't; I'm not totally sure about other arbcoms). Let's trust that, after a careful analysis of the portfolio and hearing the opinion of at least, say, three other 'senior editors', the community is able to choose a new 'senior'. Having the community choose them is more in line with the spirit of Wikimedia projects than having an elite panel.
I've had a go (per Piotrus/Sue Gardner) at designing a hybrid approach. Rough concept is here, and please comment on the talk page if it's too off-topic for this thread.
It doesn't have to be "perfect", but it should be hard to game and fairly good for identifying good quality content editors, simple, and low overhead on individuals and community.
The key aims are automation, low gameability, simplicity of experience for users, very low scope for politicking/dramatizing/popularity contests, and low time needed by participants. I feel very strongly that automation alone (metrics for "trusted users") isn't viable, despite Sue's valid point. What we can easily do with existing tools is streamline it so far that it's almost as efficient, and substantively keeps all the benefits of both.
This one's a concept (rough only, I'm afraid) - a hybrid of the enwiki Mediation Committee's nomination method (which demands filtering of good-quality users and has historically operated with no drama whatsoever) and a modification of the SecurePoll tool already in place.
That's the direction I'm thinking. It's a bit more involved than 100% automation, but it is simple (once set up) and keeps almost all the benefits of automation, all the benefits of user involvement, and very little of the drawbacks of either, when merged.
One thing I strongly encourage people to keep in mind: any system that is set up will be gameable. All of them. When I was doing corporate training, we had a rule that you "train to the norm, not to the exception." The idea was, of course, that you write a process or a training scheme that will work MOST of the time. Someone's always going to be an exception. Someone's always going to game the system, but if we can make it work 90% of the time, that's good enough.
The perfect is the enemy of the good.
Covered. I think I said almost the same above - you design it to be 90-95% good, which means it's slightly gameable. But you counter that by making sure removal is also to the point, and that some kind of scrutineers exist for "surprising" results where there is widespread suspicion of gaming or an undue conclusion.