Summary:Talk:Task force/Wikipedia Quality/Quality of Articles and Quality of Sources

Starting point

MissionInn.Jim suggested that users could rate articles (with more weight given to recent ratings), and that a Wikibibliography of sources (also rateable) might be valuable. Articles could then be rated partly on the quality of the sources they use. Similarly, editors could be rated based on the quality of their edits.
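The proposal does not spell out how recency weighting would work. As a purely illustrative sketch (the exponential half-life scheme, the 90-day parameter, and the function name are assumptions, not part of the proposal), a recency-weighted article rating could look like this:

```python
import time

def weighted_article_rating(ratings, half_life_days=90):
    """Average user ratings, giving more weight to recent ones.

    `ratings` is a list of (score, unix_timestamp) pairs. The
    exponential half-life is only one possible weighting choice.
    """
    now = time.time()
    weighted_sum = 0.0
    weight_total = 0.0
    for score, ts in ratings:
        age_days = (now - ts) / 86400
        weight = 0.5 ** (age_days / half_life_days)  # weight halves every half-life
        weighted_sum += score * weight
        weight_total += weight
    return weighted_sum / weight_total if weight_total else None
```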

General discussion

FT2 agreed with rating of articles by users, but disagreed with rating editors because of its incendiary potential and the pressure to "game" it (make oneself and friends look good or, worse, make opponents look bad). He felt a sourcing index might have issues too: sources contain both good and bad material, making generalization hard; citations can easily be gamed; and overall it would be a lot of work for questionable tangible benefit.

(Slrubenstein states he "agrees completely with FT2 here", though it is not entirely clear which point this refers to; it may relate to the distinctions on "trusted/senior editors" noted below.)

MissionInn.Jim suggested that the issues with editor rating could perhaps be mitigated, and that a catalog of rated sources "could be a challenge, but I think it is worth exploring".

Article rating, editor rating, and experts

Slrubenstein notes that our consumers are also our producers, meaning we are not simply "providing a service". Users who visit to read an article may not know enough to assess its quality. His main point is:

"One index of Wikipedia's poor standing is the number of university professors who discourage students from using Wikipedia as a source. I think one reason why Wikipedia has quality problems is because too few of our editors are experts (e.g. university professors) on the relevant topics. When more professors are editors, more professors will judge article content highly and encourage their students to use it".

Slrubenstein noted that while more professors (and other experts) are contributing, the number of non-expert/non-academic editors is rising much faster. We need (he feels) to get more experts on board to balance the community and improve our ability to rate articles to a high standard.

Piotrus stated that rating of articles by users/editors is good, but that care is needed over what happens to ratings after a page is edited, especially after "major edits".

BarryN (Bridgespan) stated that rating content was "a really productive area" and agreed there were good ways to crowdsource suitable quality information. One approach would be to gather simpler feedback and have a team correlate that feedback with expert assessments; this would probably soon allow a "simple content rating tool" to be set up, partly compensating for the lower proportion of experts.

He also thought that such a tool might provide a basis for rating users, derived in some manner from the quality changes produced across their many edits: "As the ratings are generated for each article, they could become part of a portfolio that provides for recognition of the contributor's work. [T]his would have positive synergies with the community health work as it would reward positive contributions more clearly".

Randomran cautioned that "making it too subjective will just make it gamey. You'll see different political ideologies, different religions, using it as a way to express disapproval over the perceived 'bias'. ... that's if they haven't already gotten there first to give the article a ten, and use it as an excuse to prevent the article from improving. ('My "Criticism of Barack Obama" [article] was rated a ten, so you have no right to start changing it')." He feels rating would be "a bad idea if there aren't some checks and balances". Woodwalker felt that asking for feedback on distinct aspects of an article would allow POV-driven ratings to be separated from other signals.

Brya also agreed that any kind of rating system would be "gamey". "[T]he emphasis should be on reader feedback (readers outnumber editors by a huge margin), not on ratings by users, but... A very likely scenario is that articles that get good ratings will attract attention from editors, with deleterious consequences."

Editor rating and trusted/senior editors

MissionInn.Jim asked FT2 how his disagreement with editor rating was consistent with the idea (elsewhere) of recognizing trusted/senior editors ("How would you arrive at trusted / high quality users if they were not rated in some manner?").

FT2 clarified that rating editors automatically or via a formal schema would be a target for gaming. A "trusted/senior user" system, by contrast, would be a single level of standing that is either granted or not:

"Users aren't being 'rated' [in that proposal]. It's a means of recognition of trust. A user who "sometimes" edit wars but "mostly" edits well, or "usually" adds cites but "a few times" has acted improperly in content work per consensus, doesn't get a "slightly lower" rating. They get no "trusted content editor" standing at all".

Sources, cites, and trust

Woodwalker states that "The poor state of the verifiability principle is probably the main reason why Wikipedia isn't seen as a trustworthy source. The problem is not the quantity of sources, but the quality of sources and [their] balance".