Defining quality
I'll try to summarize this thread so far:
- We've found that quality isn't easy to measure with simple metrics; it may even be impossible unless we have some form of feedback from the reader.
- We've come across some nice ideas about how this feedback can be obtained. It is also important to know who gives the feedback (expert/interested person/school kid?).
Besides that:
- Piotrus suggested WikiProjects can play a part and that we need more of them.
- Bhneihouse suggested Wikipedia can only become a quality brand when there is a consistent basic level of quality across Wikimedia projects (I assume this is not just about Wikipedia).
- FT2 thinks our recommendations should take a realistic form that has a chance of being accepted by the communities (given the indecisive nature of discussions in the larger communities).
I suggest this feedback mechanism could become our second practical recommendation (after 1: creating more manuals/wizards).
...FT2 thinks our recommendations should take the form of realistic suggestions likely to have the biggest positive effect on quality as they take hold, allowing for where the communities are today.
Regarding 1): I disagree that measuring quality is impossible; rather, there are several different metrics for doing so, and what may be impossible is selecting the "best" one.
It depends. I think having clear, subjective feedback could really help, but this exact type of feedback hasn't been used in statistics or project models yet. In that situation there is no way to measure quality factors like "article completeness", "balance", "structure", etc. The only way the statistics can currently measure quality is by checking whether an edit got reverted and comparing it with the editor's other contributions. This tells us very little, since a revert can (as far as I can see) have several valid direct reasons, be neutral with respect to quality, or even have a direct negative effect on quality in several ways.
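To show how little that signal carries, here is a minimal sketch of the identity-revert heuristic such statistics typically rely on (an assumption on my part about how they are computed, not a description of any existing tool); the revision list is a toy stand-in for a real page history:

```python
import hashlib

def find_reverted_edits(revision_texts):
    """Flag edits undone by an identity revert: if a revision's text
    exactly matches an earlier revision's, the edits in between are
    treated as reverted. A crude signal only; it says nothing about
    *why* the revert happened or whether it improved quality."""
    last_seen = {}     # text hash -> index of the latest revision with that text
    reverted = set()   # indices of revisions judged to have been reverted
    for i, text in enumerate(revision_texts):
        h = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if h in last_seen:
            # every revision between the match and this one was undone
            reverted.update(range(last_seen[h] + 1, i))
        last_seen[h] = i
    return reverted

# Toy history: the third revision restores the first, so edit 1 counts as reverted.
history = ["stub", "stub + unsourced claim", "stub"]
print(find_reverted_edits(history))  # -> {1}
```

Note that the heuristic cannot distinguish a reverted vandal from a reverted good-faith improvement, which is exactly the problem described above.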
Per Woodwalker. Metrics we could develop include:
- Crudely measuring things like user tags, word-to-citation ratios, stability, and the like (see the sketch below).
- Reporting user feedback.
- Setting standards for articles (newly created | baseline quality | good | featured) and measuring the time taken and progression rates between these stages.
None of these are bad in themselves, but I'm not aware of any way to calculate useful metrics for genuine quality beyond things like these.
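To make the first bullet concrete, here is a minimal sketch of how such crude measures could be computed from raw wikitext. It assumes standard wikitext conventions, and the maintenance-tag list is illustrative, not exhaustive:

```python
import re

def crude_metrics(wikitext):
    """Compute two crude proxies from raw wikitext: a word-to-citation
    ratio and a count of maintenance tags. Both are proxies for effort
    and sourcing, not measures of genuine quality."""
    words = len(re.findall(r"\b\w+\b", wikitext))
    cites = len(re.findall(r"<ref[ >/]", wikitext))  # inline <ref> tags
    tags = len(re.findall(r"\{\{\s*(citation needed|cleanup|unreferenced)",
                          wikitext, re.IGNORECASE))  # a few common tags
    return {
        "words": words,
        "words_per_cite": words / cites if cites else None,
        "maintenance_tags": tags,
    }

sample = "Berlin is a city.<ref>Smith 2009</ref> It is old.{{citation needed}}"
print(crude_metrics(sample))
# -> {'words': 13, 'words_per_cite': 13.0, 'maintenance_tags': 1}
```

A well-cited article will have a low words-per-cite number and few maintenance tags, but of course an article can score well on both and still be incomplete or unbalanced.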
I just ran a reader survey on five articles that I wrote or had a hand in editing. I found the results to be extremely enlightening and helpful in my work as an editor, and also as an important input to policy disputes in the project I am working on. You can see the results of the survey here.
I performed the survey by creating the survey form at www.surveymonkey.com and attaching a link at the top of each article (see, for example, this revision of one of the articles).
As an editor, I would love a tool that I could use to develop a survey with article-specific questions, attach it to the end of an article, and analyze the responses.
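Until such a tool exists, the analysis half is easy to prototype. A sketch, assuming responses are exported as a CSV with hypothetical "question" and "rating" columns (a stand-in format, not what any particular survey service actually produces):

```python
import csv
from collections import defaultdict
from statistics import mean

def summarize_survey(path):
    """Average the 1-5 ratings per question from an exported response
    file. The column names ('question', 'rating') are a hypothetical
    export format chosen for this sketch."""
    ratings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["question"]].append(int(row["rating"]))
    return {q: {"mean": round(mean(r), 2), "n": len(r)}
            for q, r in ratings.items()}

# Usage (hypothetical file):
# summarize_survey("article_survey.csv")
# -> {'Is the article complete?': {'mean': 3.8, 'n': 42}, ...}
```

The hard part is not the arithmetic but the integration: attaching the form to an article, writing article-specific questions, and getting readers to answer.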
In numerous other forums, I have pointed out that Wikipedia is an editor-centric, rather than reader-centric, institution. All the mechanisms and rules of behavior are designed to foster cooperation within a community of editors. In this dynamic, the reader is most often shunted aside, to the extent that, when I proposed my survey, there were editors who clearly didn't want to know what their readers were thinking.
This is something that has to change if Wikipedia is to move forward, and that change will occur only when features of the editing environment support it. That is why I think a tool like this would be invaluable, not only to me but to the entire Wikipedia world. --Ravpapa 15:09, 8 March 2010 (UTC)