
Researchers from UCSD and Stanford Visit the Wikimedia Foundation

Last week, researchers from Cal-IT (University of California, San Diego) and the Persuasive Technology Lab (Stanford University) visited the Wikimedia Foundation. Both groups have been doing research in areas that may benefit Wikimedia projects.

For example, Lev Manovich from Cal-IT has been working on visualizing very large datasets (see "Shaping Time"). BJ Fogg of the Persuasive Technology Lab, several of whose colleagues were in attendance, taught a course at Stanford on the psychology and metrics of Facebook applications (see paper). We invited the researchers to meet some of our staff, discuss their research, and talk about work currently being done at the Foundation. The goal was to determine whether there are any topics of mutual interest.

The researchers met with people from the Community Department, Global Development, the Public Policy Initiative, and Technology. We gave them an overview of our Strategic Priorities, focusing mainly on Participation, Quality, and Reach. We talked at length about current trends in editorship across the Wikimedia projects, how to measure contribution, the role of bots, the Public Policy Initiative (participation, mentorship, article improvement), the challenges of working with MediaWiki data, and a range of other topics.

The Cal-IT and Persuasive Tech researchers seemed very interested in exploring research ideas that work with Wikimedia data. I’ve asked them to pull their thoughts together in the form of a proposed project list. I’d like for us to do this as openly as possible, so we’ll post the list on wiki once it’s available and ask for feedback from the community. Hopefully we will have an initial list within a few weeks. Stay tuned!

Howief 19:15, 20 December 2010

Great to hear from you.

KrebMarkt 19:58, 20 December 2010
 

One thing I would like to see research go into is a karma system.

WikiTrust (automated rating) and WikiGenes (manual rating), as well as Scholarpedia's Scholar Index (manual rating), go in this direction, but none of them addresses the issue of expertise. To some degree, SuggestBot (automated, after opt-in) captures different kinds of expertise, whereas Flagged Revisions, Pending Changes, and other forms of user rights control who gets to see and do what.

What if wiki user rights were controlled by something similar to WikiTrust for general editing, and optionally by something akin to SuggestBot for certain types of editing? For scholars, doing this in a way that is compatible with a scholarly author ID system (e.g. the upcoming ORCID) might provide an incentive for wiki contributions.
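
Roughly, I imagine something like the sketch below (Python, with made-up trust and expertise scores in [0, 1]; the names, thresholds, and scoring functions are illustrative stand-ins, not real WikiTrust or SuggestBot interfaces):

    from dataclasses import dataclass, field

    @dataclass
    class Editor:
        username: str
        general_trust: float                 # a WikiTrust-style reputation score, 0..1 (hypothetical)
        topic_expertise: dict = field(default_factory=dict)  # topic -> 0..1 (hypothetical)

    # Illustrative thresholds; in practice the community would tune these.
    EDIT_THRESHOLD = 0.3    # minimum general trust to edit without review
    TOPIC_THRESHOLD = 0.7   # minimum expertise to review edits in a given topic

    def can_edit_unreviewed(editor: Editor) -> bool:
        """General editing rights gated by a WikiTrust-like score."""
        return editor.general_trust >= EDIT_THRESHOLD

    def can_review_topic(editor: Editor, topic: str) -> bool:
        """Topic-specific rights gated by a SuggestBot-like expertise score."""
        return editor.topic_expertise.get(topic, 0.0) >= TOPIC_THRESHOLD

    alice = Editor("Alice", general_trust=0.9,
                   topic_expertise={"neuroscience": 0.8})
    print(can_edit_unreviewed(alice))               # True
    print(can_review_topic(alice, "neuroscience"))  # True
    print(can_review_topic(alice, "physics"))       # False

The point of the two separate gates is that general trust and topic expertise are different signals: the former would govern baseline rights (as Flagged Revisions and Pending Changes do now, but automatically), while the latter could unlock topic-scoped rights, and for scholars could be tied to an author ID such as ORCID.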

Another problem for which I have not seen a good solution is that of an ontology that could be applied consistently across the wide range of topics covered by Wikipedia.

Daniel Mietchen 01:34, 21 December 2010