Proposal talk:Algorithm to assign an estimated credibility to articles

From Strategic Planning

Stamping is good

Stamping is good if it is voluntary and not tied to any special process. A doctor could be given a stamp administration facility enabling him/her to stamp an article in order to signal factual correctness. It's a kind of informal and chaotic "peer review", although there is no process to keep the article that way except the honesty, civility and negotiability of the editors. Rursus 06:53, 19 August 2009 (UTC)

The fact-checking regex AI is science fiction, and tying credibility to editors is incompatible with Wikipedia's democratic/anarchic editing model (WikiTrust is cool stuff, but I don't think it can be used to automatically assess the credibility of facts, as "ownership" of information is easily lost or changed during page reorganization, copyedits, edit warring etc.), so tying credibility to reviewers should be the way to go. With FlaggedRevs some of the required technical architecture is already in place. --Tgr 16:35, 20 August 2009 (UTC)

Just a note: I think Wikipedia is neither democratic nor anarchic, and I think random stamping doesn't interfere with the current consensus-building model. Otherwise I agree with your concerns about the current Wikipedia model. Rursus 20:54, 20 August 2009 (UTC)
My opinion is that an AI may give a "credibility indicator", which could be a good thing, based on aspects such as specialist review and references, but it could only be a quantitative indicator and it couldn't be enough. Another option is to use the same process as in translations, with steps: beginning of article, complete article, referenced article, reviewed article.
And above all, I think that the goal of Wikipedia, and what the Web teaches us generally, is to find the information we want efficiently and credibly. That means it should be the human brain that decides whether an article is credible or not.
For Wikimedia, what is important is to tell everybody what makes information good and what shows it is not. For me, what matters is the references: anybody interested in a particular aspect of the article can read the source. As for tying articles to qualified authors, it should be easier to see who wrote an article or a part of it. We should have a tool that allows anybody to see who last wrote or modified this or that part of the article, for example a "history" for each paragraph. And people's qualifications could appear on their profiles. It would still be on the contributor's responsibility; anyone could lie about themselves.
But, I repeat, the important thing is to help people find out whether information is credible or not.
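The per-paragraph "history" tool suggested above could be prototyped on top of the ordinary revision history alone. A minimal sketch, assuming a hypothetical revision format; the function name and data layout are my own illustration, not an existing MediaWiki API:

```python
def last_editor_per_paragraph(revisions):
    """Attribute each paragraph of the latest revision to an editor.

    revisions: chronological list of (editor, [paragraph, ...]) tuples,
    oldest first. A paragraph is attributed to the editor of the oldest
    revision in which its current text already appears verbatim.
    """
    attribution = {}
    for para in revisions[-1][1]:
        author = revisions[-1][0]
        # Walk backwards through history while the paragraph is unchanged;
        # the first revision missing it marks where it was (re)written.
        for editor, paras in reversed(revisions):
            if para in paras:
                author = editor
            else:
                break
        attribution[para] = author
    return attribution

revs = [
    ("Alice", ["Intro text.", "Old body."]),
    ("Bob",   ["Intro text.", "New body."]),  # Bob rewrote the body
]
print(last_editor_per_paragraph(revs))
# {'Intro text.': 'Alice', 'New body.': 'Bob'}
```

A real implementation would need fuzzy matching, since paragraphs get copyedited without being rewritten; that is essentially what WikiTrust does at the word level.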
The best way to improve credibility is to explain the system of checks and balances. Peer review is helpful if the peers are credible. For readers to know this, they will have to understand the peer review process. Some things can be done to explain it, but part of the work will have to be done by readers familiarizing themselves with the process. Zacherystaylor 15:33, 27 August 2009 (UTC)


This proposal sounds self-defeating. Any kind of comment other than enthusiastic support is likely to be taken as criticism ("being negative") and as a reason for war. At present there is no assured way to put in any kind of evaluation without it being wiped (quickly) by the proponents of the relevant project.

For this to work there would have to be a way to make comments that are protected from being edited by somebody else (such as at Amazon or IMDB). - Brya 16:02, 27 August 2009 (UTC)


Some proposals will have massive impact on end-users, including non-editors. Some will have minimal impact. What will be the impact of this proposal on our end-users? -- Philippe 00:07, 3 September 2009 (UTC)


I think that this could be a good proposal, server costs notwithstanding. But I think its stated purpose is exactly wrong. When it's really important to get the right information, you can't rely on knowledgeable volunteers, let alone machine heuristics; you need to talk to a real expert. This proposal shines in letting people judge the credibility of less important subjects at a glance.

In short,

  • Really important information: Talk to an expert (expensive and time-consuming).
  • Important information: Read the article, then verify by checking references (time-consuming).
  • Less important information: Read the article, possibly checking CredibilityBot (quick).
  • Unimportant information: Read/skim the article.

CRGreathouse 02:13, 22 September 2009 (UTC)

Original Research

A source "stamping" text on a page as correct, whether or not they are "credible", seems equivalent to original research: there is no way to verify that it is correct other than to trust the author's knowledge. There may well be sources in their field that a biologist is drawing knowledge from. In that case, the biologist should cite those sources, not themselves.

Pages should be taken to be credible because their content can be backed up with evidence. Whether someone is an authority or not is disconnected from their accuracy (for example, experts in biology can hold different viewpoints on a biological subject). Wikipedia is in the business of evaluating information, not individuals. Let's keep it that way. --Lyc. cooperi 08:32, 1 October 2009 (UTC)

Rating references through voting

One could allow editors to vote on author and/or literature credibility. This way the credibility of an article could be computed in part from the credibility of the authors and publications it references (see: Proposal:BibTeX database and Bibliography namespace). --Fasten 14:10, 29 October 2009 (UTC)

But I'll give this proposal a "low" priority. --Fasten 14:16, 29 October 2009 (UTC)
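The aggregation described above could be computed very simply. A sketch, in which the vote scale (0-10), the unweighted means, and both function names are my own assumptions, not part of the proposal:

```python
def reference_credibility(votes):
    """votes: mapping reference -> list of editor vote scores (0-10, assumed scale).
    Returns the mean vote per reference; unvoted references are skipped."""
    return {ref: sum(v) / len(v) for ref, v in votes.items() if v}

def article_credibility(ref_scores):
    """One possible article-level indicator: the unweighted mean of the
    credibility scores of the references the article cites."""
    if not ref_scores:
        return 0.0
    return sum(ref_scores.values()) / len(ref_scores)

votes = {"Smith 2008 (journal)": [8, 9], "Anonymous blog post": [2, 4, 3]}
scores = reference_credibility(votes)
print(scores)                        # {'Smith 2008 (journal)': 8.5, 'Anonymous blog post': 3.0}
print(article_credibility(scores))   # 5.75
```

A production version would likely weight references by how much article text relies on them, but even this flat mean illustrates how per-reference votes roll up into one article indicator.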

Credibility and credentials

The proposal makes the same mistake as virtually all others trying to establish a "credibility system": It equates credibility with credentials, i.e. academic degrees.

The problem is that many scientific theories, especially those on the "bleeding edge", are highly controversial, not least within the scientific community itself. Take swine flu, for example: there are about as many opinions on how it will turn out as there are scientists dealing with the issue. Giving scientists a "credibility bonus" just because they are scientists will allow POV-pushing PhDs to overrule faithful Wikipedia researchers with their "credibility", resulting in a loss of quality.

When is an article about a medical subject accurate? Take the swine flu example again. Of course, such an article would have to incorporate information about the swine flu vaccination. In Germany, the vaccination is administered along with a chemical amplifier designed to increase its potency. Does it actually do that, though? About half of the doctors would tell you that it does, while the other half would claim the opposite. So which is it?

This shows the fundamental problem: The relevant Wikipedia article should report on the controversy surrounding the amplifier, rather than contain one of the two points of view. Every M.D., however, is likely to hold one POV or the other. So who's more likely to write a good article on the swine flu vaccination, a medical doctor or an experienced Wikipedia editor who is a layman? Paradoxically, it appears that the latter should be preferred.

The only thing that can lend true credibility to an editor is his editing history. "Has he cited his sources well?" "Has he ever tried to push a POV?" Those are the questions we need to ask. By contrast, the question "What academic degree does he hold?" is highly irrelevant. Credentials have nothing to do with credibility. Compare en:Jan Hendrik Schön. -- JovanCormac 14:56, 23 November 2009 (UTC)