Defining quality

I think there are two main kinds of metrics that would be useful to editors and to improving content: one crude and probably machine-manageable, the other more sophisticated.

On the crude front, we can attach scoring logic to the key cleanup tags:

(Article tagged NPOV = X. One inline cite missing = Y. Several inline cites missing = Z. Word-to-reference ratio over 20 = P. Word-to-reference ratio over 100 = Q. Edit warring in the past week = R. Then combine these into a single rating.)
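To make this concrete, here is a minimal sketch of such a scorer, assuming we can already count cleanup tags and references for an article. The field names, penalty weights (standing in for X, Y, Z, P, Q, R) and the combining rule are illustrative placeholders, not an existing MediaWiki API:

    def crude_score(article):
        """article: dict with 'npov_tagged' (bool), 'citation_needed' (int),
        'words' (int), 'references' (int), 'reverts_past_week' (int).
        Returns (score, reasons); a higher score means more problems."""
        score, reasons = 0, []

        if article["npov_tagged"]:
            score += 3                              # X
            reasons.append("tagged NPOV")

        if article["citation_needed"] >= 3:
            score += 4                              # Z: several cites missing
            reasons.append("several inline cites missing")
        elif article["citation_needed"] >= 1:
            score += 2                              # Y: one cite missing
            reasons.append("inline cite missing")

        ratio = article["words"] / max(article["references"], 1)
        if ratio > 100:
            score += 4                              # Q
            reasons.append("over 100 words per reference")
        elif ratio > 20:
            score += 2                              # P
            reasons.append("over 20 words per reference")

        if article["reverts_past_week"] > 0:
            score += 3                              # R
            reasons.append("edit warring in the past week")

        return score, reasons

The exact weights matter less than surfacing the reasons list to editors alongside the score, so they can see why an article rates low.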

It wouldn't capture "high quality", but it would capture basic issues that are a concern and could flag them to the author and the community. If we take care of the worst articles, the average will improve over time. Nobody is more motivated to work on an article than those who have already edited it, so they may be interested in a simple "score" plus an explanation of why it's low.

More sophisticated metrics are harder. I'd be looking for projects to define a few quality standards for articles (newly created - baseline acceptable - GA - FA), and then measure quality in terms of the average time taken for new and existing articles to reach the next level, conversion rates ("what proportion get there, and how long it takes"), user feedback, and stability.
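As a rough sketch of the conversion-rate idea, assuming we had per-article assessment histories as (date, level) pairs (the level ordering and history format here are assumptions for illustration, not an existing data feed):

    from datetime import date
    from statistics import median

    LEVELS = ["new", "baseline", "GA", "FA"]

    def promotions(history):
        """Yield (from_level, to_level, days) for each upward step
        between consecutive assessments in one article's history,
        given as [(date, level), ...]."""
        history = sorted(history)
        for (d1, lv1), (d2, lv2) in zip(history, history[1:]):
            if LEVELS.index(lv2) > LEVELS.index(lv1):
                yield lv1, lv2, (d2 - d1).days

    def conversion_stats(histories, from_level, to_level):
        """Proportion of articles ever assessed at from_level that
        later reach to_level, and the median days taken by those
        that do."""
        eligible = [h for h in histories if any(lv == from_level for _, lv in h)]
        times = [days for h in eligible
                 for f, t, days in promotions(h)
                 if f == from_level and t == to_level]
        if not eligible:
            return 0.0, None
        return len(times) / len(eligible), (median(times) if times else None)

    # e.g. one of two baseline articles promoted to GA, in 75 days:
    histories = [[(date(2009, 1, 1), "baseline"), (date(2009, 3, 17), "GA")],
                 [(date(2009, 2, 1), "baseline")]]
    print(conversion_stats(histories, "baseline", "GA"))  # (0.5, 75)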

That said, the "low fruit" is appealing -- metrics relating to substandard articles that don't meet an agreed baseline for quality, or measuring how long they take to reach it -- because there are lots of them, they make a big impression, their issues are easy to identify and quantify, and they are easy to fix. Maybe for now we should recommend focusing on that.

FT2 (Talk | email) 16:46, 26 November 2009