Talk:Task force/Community Health/Survey
| Thread title | Replies | Last modified |
| Pie charts | 8 | 08:26, 1 February 2010 |
| open-ended questions - a gold mine of data | 2 | 23:18, 31 January 2010 |
| how many edits did you make in a typical month | 0 | 10:16, 30 January 2010 |
Last edit: 22:57, 30 January 2010
The pie charts are misleading: the percentages do not add up to 100%. Please replace them.
Wasn't logged in: Paradoctor
In most cases the pie charts represent the values (number of people answering) rather than the percentage...
Yeah, but the numbers don't add up, either! ^_^
Take question 5: it lists 2057 replies and 192.43%.
I'm going to redo that one - I suspect it's because people selected more than one answer, but I don't know for sure because I didn't administer the survey. In looking at it, though, I noticed that one of the values was left off, so I deleted the chart until I can re-do it. :)
The rest add up to 1069 and 100%, respectively.
OK, I'm now looking at a copy of the survey. For Q4, Q5, Q9, and Q10, respondents were instructed to choose at most three answers; for Q12, they were instructed to select all that were true.
So that explains the discrepancies, I think...
Yeah, for those multi-answer questions a simple bar graph would be more appropriate.
The survey was designed with several open-ended questions that let editors express themselves freely. Those answers will be hard to quantify, but I suggest two things:
- We have some person or group go through the actual surveys, and figure out a way to get a representative description of what people were actually talking about.
- We use some kind of information parsing tool like this one to look for commonly used words. (Or phraselets, ideally.)
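The word-counting idea in the second point could be sketched in Python along these lines; the sample answers and stopword list below are invented for illustration, not real survey data:

```python
from collections import Counter
import re

# Hypothetical free-text survey answers; the real responses would come
# from the exported survey data.
answers = [
    "The markup is too complex for new editors.",
    "Editing markup is hard; the community was helpful though.",
    "Too many rules, markup too complex.",
]

# Words too common to be informative on their own.
stopwords = {"the", "is", "too", "for", "was", "though", "many", "a", "and"}

words = []
for text in answers:
    words.extend(w for w in re.findall(r"[a-z']+", text.lower())
                 if w not in stopwords)

# Most common single words; counting "phraselets" would need n-grams instead.
print(Counter(words).most_common(3))
```

With the toy answers above this surfaces "markup" and "complex" as the dominant themes, which is the kind of signal a commonly-used-words pass is meant to give before anyone reads all the surveys.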
It will be important to parse the information first, though. We don't want to just jump in and read all the surveys. We want to look at editors who gave specific answers, to understand what those answers meant, and at editors with a specific number of edits, to see how new and experienced users differed.
Yep, we're gathering those as well. We have to be careful how we use those, because we promised that they would be shared in the aggregate only, so figuring out how to handle that will be important, in order to keep our commitment to the respondents.
That's not too hard. Just hold one variable constant (say, all the editors with more than 1000 edits, or all the editors who thought that complexity was a factor in why they left) and pick a random sample of 20 surveys. From those 20 surveys, aggregate the answers about their best/worst experience and the "miscellaneous question" at the end. From there, we could easily look for common themes as well as significant differences.
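That sampling procedure could be sketched like this; the record layout, field names, and values are all made up for illustration, since the actual survey export format isn't described here:

```python
import random

# Hypothetical respondent records; field names are illustrative only.
respondents = [
    {"id": i,
     "edit_count": random.randint(0, 5000),
     "left_reason": random.choice(["complexity", "time", "conflict"])}
    for i in range(500)
]

# Step 1: hold one variable constant, e.g. editors with more than 1000 edits.
subgroup = [r for r in respondents if r["edit_count"] > 1000]

# Step 2: draw a random sample of 20 surveys from that subgroup.
random.seed(0)  # fixed seed so the sample is reproducible
sample = random.sample(subgroup, min(20, len(subgroup)))

# Step 3: only aggregate figures get reported, never individual answers,
# keeping the aggregate-only commitment to respondents.
print(len(sample), "surveys sampled for aggregate review")
```

Drawing the sample with `random.sample` (rather than hand-picking) is what keeps the 20 surveys representative of the chosen subgroup.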
Some time ago I looked into the User creation log, and the numbers there are similar to the ones this survey gives us. Only a small percentage of all logged-in users made more than a few edits. The survey doesn't include those numbers, but most new users don't edit at all. The User creation log tells even more about our editors if you look at the edits. One of the things I looked into was how much time passed between the creation of a new account and that user's first edit. You will find that about the same number of editors start editing within 5 minutes as start editing after an hour, and about half of them start editing after 24 hours. New editors spend a lot of time at Wikipedia before editing. Maybe the User creation log could be used to test whether the new Vector skin reduces the time new users spend before editing. --Goldzahn 10:16, 30 January 2010 (UTC)
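The time-to-first-edit measurement described above amounts to bucketing the gap between account creation and first edit. A minimal sketch, with made-up timestamps standing in for the real log data:

```python
from datetime import datetime, timedelta

# Hypothetical (account_created, first_edit) pairs; the real values would
# come from the User creation log and each account's contribution history.
# first_edit is None for accounts that never edited.
events = [
    (datetime(2010, 1, 1, 12, 0), datetime(2010, 1, 1, 12, 3)),   # within 5 min
    (datetime(2010, 1, 2, 9, 0),  datetime(2010, 1, 2, 10, 30)),  # after an hour
    (datetime(2010, 1, 3, 8, 0),  datetime(2010, 1, 5, 8, 0)),    # after 24 hours
    (datetime(2010, 1, 4, 7, 0),  None),                          # never edited
]

buckets = {"<5min": 0, "<24h": 0, ">=24h": 0, "no edit": 0}
for created, first_edit in events:
    if first_edit is None:
        buckets["no edit"] += 1
    elif first_edit - created < timedelta(minutes=5):
        buckets["<5min"] += 1
    elif first_edit - created < timedelta(hours=24):
        buckets["<24h"] += 1
    else:
        buckets[">=24h"] += 1

print(buckets)
```

Running the same bucketing before and after the Vector skin rollout, on real log data, would give the comparison Goldzahn suggests.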