Quality of Articles and Quality of Sources

Building in a "rate this article" feature is essential. Flagged revisions has something like this, I think, but it could also be a separate tool. Disagree with rating editors. Although in an ideal world it would help to do this, in the real world it's incendiary and adds pressure to "game" the system - either to boost oneself and friends or (worryingly) to discredit "opponents". More harm than good.

A recognized sourcing index might be an interesting idea, but I disagree. The reasons are interesting though.

  • Outside the scientific literature, many sources will contain both good-quality and poor-quality content. Generalization is hard.
  • A huge part of quality depends on the bias and writing of the article. A common tactic in edit wars is to stuff articles full of a dozen cites to "prove a point". It would be dangerous to then rate articles purely on the repute of the sources they cite. Cites can be gamed like nothing else.
  • A lot of work for questionable hard benefit to content.

In a way it's conceptually nice but in practice probably a non-starter.

FT2 (Talk | email) 18:44, 26 November 2009

Some of the issues you raise regarding rating editors could be mitigated. If each user were allowed to rate any other user only once, it would be more difficult to boost or discredit other users without creating many accounts. An editor would not receive a rating until they have x ratings from unique users. The scoring could also be dropped completely, leaving only the ability to indicate that you believe someone is a good editor; the only way to give someone a negative would be to not rate the person, or to withdraw your rating, if that were an option.
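To make the mechanics concrete, here is a minimal sketch of that scheme - positive-only endorsements, one per rater, withdrawable, with no count shown until a minimum number of unique endorsers is reached. It is purely illustrative (Python); the class, names, and threshold value are assumptions, not an existing MediaWiki feature.

    # Minimal sketch of the endorsement scheme described above (hypothetical names).
    # Rules modelled:
    #   - a user may endorse any other user at most once
    #   - endorsements are positive-only and can be withdrawn
    #   - no count is shown until a minimum number of unique endorsers is reached

    MIN_UNIQUE_ENDORSERS = 5  # the "x" threshold; this value is an assumption

    class EndorsementBook:
        def __init__(self):
            # editor -> set of usernames who currently endorse them
            self._endorsers = {}

        def endorse(self, rater: str, editor: str) -> None:
            if rater == editor:
                raise ValueError("users cannot endorse themselves")
            # a set guarantees each rater counts at most once per editor
            self._endorsers.setdefault(editor, set()).add(rater)

        def withdraw(self, rater: str, editor: str) -> None:
            # withdrawing is the only way to take back a positive rating
            self._endorsers.get(editor, set()).discard(rater)

        def endorsement_count(self, editor: str):
            """Return the count only once the unique-endorser threshold is met."""
            count = len(self._endorsers.get(editor, set()))
            return count if count >= MIN_UNIQUE_ENDORSERS else None

    # Usage example
    book = EndorsementBook()
    for rater in ["Alice", "Bob", "Carol", "Dave", "Eve"]:
        book.endorse(rater, "SomeEditor")
    book.endorse("Alice", "SomeEditor")          # duplicate: has no extra effect
    print(book.endorsement_count("SomeEditor"))  # 5 -> threshold met, count is shown

Because ratings are tracked as a set of unique endorsers rather than a running score, duplicate votes and drive-by downrating simply have nothing to attach to; the only lever a critic has is to withhold or withdraw an endorsement.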

I can see where building a catalogue of rated sources could be a challenge, but I think it is worth exploring.

MissionInn.Jim 19:54, 26 November 2009
 

FT2 - It would seem to me that your discussion about the benefits of having "trusted / high quality" user recognition is an argument in favor of rating users. How would you arrive at trusted / high quality users if they were not rated in some manner?

MissionInn.Jim 20:02, 26 November 2009

I wouldn't have a "bare numbers" rating system, like "how many users like/don't like this person", and I would be wary of trying to deduce the quality of someone's work automatically from their editing. I would expect a formal rating system to be a target for gaming.

The approach in the other thread is different. It assumes one "level" that is granted or not, rather than a "rating system", and relies on review and discussion of the user's editing conduct, not automation (possibly with weight given to other trusted users, per Piotrus).

In that approach (crucially) users aren't being "rated". It's a means of recognizing trust. A user who "sometimes" edit wars but "mostly" edits well, or "usually" adds cites but "a few times" has acted improperly in content work per consensus, doesn't get a "slightly lower" rating. They get no "trusted content editor" standing at all - not until the community considers their content work and interactions on content to be consistently appropriate and consistently of a reasonable/good standard. Which is what we actually want to see.

FT2 (Talk | email) 21:21, 26 November 2009

I agree completely with FT2 here. But I have to add, I am uncomfortable with and resist a "customer satisfaction" approach. I like the basic model: Wikipedia is the encyclopedia anyone can edit at any time, which means our consumers are also our producers. The issue here is not as simple as our providing a service to consumers. The problem is this: people who come to Wikipedia because they do not know anything about Hegel simply cannot assess the quality of the Hegel article. They CAN assess how readable it is, and a comment on the talk page like "I do not understand the third paragraph because ...." should always be welcomed and valued. So I have no problem with saying any reader can give us feedback on how readable an article is. But the only way to know whether the article on Hegel is really good or not is for an expert on Hegel to say it is.

I am not calling for some board of experts to rate articles.

This is my main point: one index of Wikipedia's poor standing is the number of university professors who discourage students from using Wikipedia as a source. I think one reason Wikipedia has quality problems is that too few of our editors are experts (e.g. university professors) on the relevant topics. When more professors are editors, more professors will judge article content highly and encourage their students to use it.

So I see the real problem as the recruitment of experts as editors. University professors are used to writing things without getting paid; some will not edit Wikipedia because they hate the fact that their work will be edited by others - I wouldn't even want such people contributing to Wikipedia. Many more simply are not used to writing something collectively. But I think most academics do not contribute to Wikipedia because they are too busy and receive no recognition from their employer for contributing to Wikipedia. I do not see any solution to this.

But the fact is, more and more university professors are contributing to Wikipedia. As Wikipedia has grown, so has the number of academics contributing. But I would bet that the number of users has expanded exponentially while the number of expert editors has expanded only arithmetically.

I think we need to find ways to recruit more.

By the way, I use academics as the example, but I mean of course any kind of expert.

Slrubenstein 14:26, 8 December 2009
Edited by another user.
Last edit: 12:33, 9 December 2009

The poor state of the verifiability principle is probably the main reason why Wikipedia isn't seen as a trustworthy source. The problem is not the quantity of sources, but the quality of sources and the balance between them. Woodwalker 12:33, 9 December 2009 (UTC)

213.213.172.254 12:28, 9 December 2009