User talk:Scott MacDonald/Pragmatic BLP

Hypothesis. Thinking Wikipedians can agree:

  1. There is "out there" some bad biographical material. By bad I don't here mean breaching some policy: I mean material that is downright libellous, highly misleading, or unfairly prejudicial to a living subject. We'd disagree about the amount of such material, but that it exists at some level is clear.
  2. The main damage lies not so much in obvious abusive vandalism (which is embarrassing to us, but not so damaging to the subject) as in plausible falsity - which is harder to spot and more likely to mislead the reader with regard to the subject.
  3. The problem lies with identification and scrutiny. No one disputes that bad material must be removed, and policy gives us sufficient tools to do that. The issue is having "quality control" systems that minimise the risk of bad material going unnoticed.
  4. In the end it is eyes on articles that makes the difference. However, one can't herd cats - so to change anything all we can consider is systemic and technical changes, which may make it easier and more effective to get eyes on the right articles at the right time.
  5. We have a constant duty to review our systems, to improve quality control and minimise risk to third parties.
  6. Changes may involve some "cost" to the project (e.g. deleting an unreferenced negative BLP may lose us perfectly good material, semi-protecting may prevent perfectly good contributions). We need constantly to be doing a cost/benefit analysis of any proposed change (remembering that risks are borne by third parties here). We will disagree about the metric of the cost/benefit - depending on how serious we view the risks, how important we see the cost, and how much responsibility we think the project bears - but we'd agree we need to do it.
  7. Risk cannot be eliminated - but that doesn't mean we don't try to minimise it. The perfect should not be the enemy of the good.
  8. Those who believe the risk is already fairly minimal should still believe that further minimising it is always a desirable goal.
  9. Those who believe the risk is unacceptably high should still support even small (inadequate) steps to reduce it. Again, the perfect should not be the enemy of the good.

So far, so good? Honk if you are still with me.

OK, next point. My personal view is that we should be willing to pay a high cost to minimise the risks - I'd raise notability thresholds to remove the least-maintained half of our articles. However, it is utterly pointless to have an argument about whether "extreme measures" are justified or not, since such things are not going to happen unless something which none of us can predict occurs. So, let's move on. Let's start at the other end. What can we work on which might help (even a bit) and have a sufficiently low cost to the project that it realistically might get consensus?

We don't agree on whether unreferenced BLPs are a particular risk, so let's debate that one elsewhere.

I suspect we'd agree that the greatest risk is with the BLPs that are least scrutinised. What metrics do we think best identify such articles? There will be no absolute metric here, but how would we assess the following as indicators of high risk of under-scrutiny?

  1. Low number of page-watchers
  2. Low number of page-views
  3. Low number of edits (unlikely?)
  4. Long-term maintenance warning tags (unreferenced, debatable notability, COI, neutrality?)
  5. Other

OK, if we can work that out, can we do a cost/benefit analysis on what might help reduce risk with these articles? I'd suggest we examine:

  1. Semi-protection
  2. Targeted Flagging (See Wikipedia:Targeted Flagging)
  3. No indexing (is this possible?)
  4. Others I haven't thought of.
  • My approach here is not to have an argument about ideals - it is to get a pragmatic discussion aimed at getting something agreeable done.
  • It is not an attempt at compromise. Compromise would mean me saying "it's not enough, but I'll stop pressing for more if you agree..." and you saying "it's too much, but I'll allow it if you agree not to press for more...". What I want here is actually an agreement. Something that will not be enough for me, and may be totally sufficient for others - but where most will agree that the thing in itself is good (even if not much good).

--Scott Mac 10:57, 29 October 2010 (UTC)

Initial discussion

I don't have any background discussing BLP issues with you, but I think you're right on track by starting from the 'other end' and trying to find the least costly changes first. That will attract a much better discussion and raise the chance of getting something done. Very pragmatic, and a good idea, too. Ocaasi (talk) 11:08, 29 October 2010 (UTC)
Okay (not sure whether to put this on this page or its talk/flip side). I have been ruminating on semiprot vs Pending Changes all day and was going to post something this evening, when you beat me to it. My algorithm:
  • Process under semiprotection

(1) IP asks to edit a semiprotected article. (2) In reply, an admin or registered editor either unprotects or adds the material him/herself.

  • Process under Pending Changes

(1) IP makes an edit. (2) A variable amount of time later, a Reviewer turns up and reviews the material, scratching head...

Now in my experience, the IP rarely adds a source. My problem here is that in the latter case, the IP is gone, leaving the Reviewer with (the possibly time-consuming) job of verifying and possibly snooping out a source. Contrast this with the former case, where an admin or registered editor can actually inquire of the requester before the edit is made. Hopefully in this case the person who is actually more likely to have a source can supply it, rather than the other party running off looking. Now don't forget the pages we're talking about covering are (presumably) the relatively obscure and/or esoteric BLPs, a group that is more likely to have obscure sources (are you familiar with the Reliable Sources of Bangladesh, Surinam or Swaziland? Neither am I....) - hence the ferreting is likely to be harder...and if not found by Google, likely not added...and the IP, reviewer AND article creator's efforts are more in vain....Anyway, this is my thinking on how best to cover a great swathe of the less-watched BLPs and why after using Pending Changes I still prefer semiprotection, if not more so. Is my logic flawed in this? Casliber (talk · contribs) 11:46, 29 October 2010 (UTC)

So, in a nutshell, I like targeted flagging but I'd use semiprotection. Almost all these articles are low-traffic, hence we don't risk losing loads of potential editors, which might be the case if we semiprotected every BLP on wikipedia. Casliber (talk · contribs) 11:48, 29 October 2010 (UTC)

  • Scott, I think the idea of targeted flagging (I think I know what this means) is a good issue for exploration. If we could run through all unreferenced BLPs and identify all which contain any specific statements and words from a list we would create (e.g., arrest, murder, sex, rape, criminal, etc. - make a laundry list based on editor experience), this would give us a subset of Potentially Problematic UBLPs, and we can create a workgroup to focus on that bunch in the backlog. Whether that bunch needs to be semi-protected when flagged, or blanked, or something else to incrementally minimise risk, would be up for discussion.--Milowenttalkblp-r 13:01, 29 October 2010 (UTC)
    • See Wikipedia:Targeted flagging. I suggest that a metric other than "unreferenced BLPs with keywords" would be better. I've actually run through most of the unreferenced BLPs using key words (others have too), so I don't think that metric will yield much now. Further, I think we want to think a bit beyond the metric of unreferenced BLPs. Unreferenced doesn't mean problematic, and referenced doesn't mean fine. Unmaintained articles, however, are more likely to be unchecked now, and more likely to have future additions unscrutinised. My metric would be "low interest" articles - and my point would be that a long-term unreferenced article is likely to be low-interest. However, a low-interest article remains a low-interest article, even if someone works through the backlog and fixes it. It is just as likely to be underwatched going forward as anything which is unreferenced. My hope here is to find another metric for "low scrutiny" than the crude one of "unreferenced".--Scott Mac 13:09, 29 October 2010 (UTC)
      • I agree that our focus really shouldn't be on just UBLPs. That has served as a proxy for "the bad stuff" for a long time now, but bad stuff can be in any BLPs. And it can linger on in those we don't watch. For one minimal step, if low-interest BLPs (less than 50/100 views per month?) could be protected from IP edits and limited to confirmed users, that would be a great thing in my mind.--Milowenttalkblp-r 13:26, 29 October 2010 (UTC)
        • Our focus simply can't be on uBLPs only. I've expanded and referenced several lightly watched articles, but all it would take is one person to sneak something in that I don't catch on my watchlist, and we've failed. One thing I might suggest adding to your list, Scott, is the note that being referenced does not protect an article from potential damage. This is why I've consistently argued the need to continue developing tools. Wolterbot's uBLP lists have allowed my primary project to reference 97% of its identified unsourced BLPs. We asked for a tool, we got it, we did something about it. From my point of view, our best way forward is to engage those with the capability to build those tools. Give me a list of ice hockey articles that include potentially contentious words, and I will check them. Give us a tool that scans new edits for unreferenced additions using potentially contentious words, and we can dramatically improve our ability to revert these dangerous edits quickly. Filters and anti-vandal bots have greatly increased our ability to protect BLPs. The more we develop these tools, the better we will be. Resolute 14:36, 29 October 2010 (UTC)
          • Well, the attempt here is to focus wider than uBLPs. The argument for focusing on uBLPs is not that they are an intrinsic danger. The danger comes from BLPs that have not been checked, and underscrutinised BLPs where future edits are less likely to be checked. The argument for dealing with uBLPs is (and I ask you to understand the logic even if you don't agree) that the fact that they are unreferenced is indicative that there's a higher than average chance that they've not been checked, and that they remain under-scrutinised. Referencing them solves the first problem (it checks them), but it does nothing to make them more scrutinised going forward with regard to new additions. However, let's leave that argument to one side. There are certainly other ways of identifying under-scrutinised BLPs - I've suggested some metrics above. "Bad word" lists are not a bad tool (the bots use these already) but they do tend to catch the more obvious stuff, which isn't the sum total (far from it) of the problem. Additions like "was involved in the Kennedy assassination", "lost his job when" and "was investigated by the police for" are not going to show up in any such list. However, there's no magic bullet here - what we need is to target our efforts where we agree the risk is greatest and see what we can agree on.--Scott Mac 15:18, 29 October 2010 (UTC)
            • I'd apply either semi-protection or targeted flagging to some large group of low-traffic, low-notability BLPs regardless of keywords. One can have a perfectly innocuous small article on someone, and the next thing someone has added god-knows-what to it, and I don't think we can predict that often. Hence the idea of somehow defining this targeted flagging (or semi) group - could be (a) number of watchers, (b) number of incoming links? Would this be a good way of sifting out less notable ones? Say all with less than 20 incoming? That'd be an interesting segment, to see what it picked up. Any other ideas on how to define this group for a bot? Casliber (talk · contribs) 19:22, 29 October 2010 (UTC)
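The kind of bot-driven selection described above could be sketched as follows. This is a hypothetical illustration only: the field names, data source, and thresholds (5 watchers, 20 incoming links) are invented placeholders for discussion, not how any actual Wikipedia bot works.

```python
# Hypothetical sketch: pick "low scrutiny" BLPs for targeted flagging or
# semi-protection using two of the metrics suggested above - few
# page-watchers and few incoming links. Thresholds are placeholders.
def select_low_scrutiny(articles, max_watchers=5, max_incoming=20):
    """articles: iterable of dicts with 'title', 'watchers', 'incoming_links'.
    Returns the titles that fall under both thresholds."""
    return [a["title"] for a in articles
            if a["watchers"] <= max_watchers
            and a["incoming_links"] < max_incoming]

# Illustrative data - in practice this would come from the database/API.
blps = [
    {"title": "Obscure Footballer", "watchers": 1, "incoming_links": 3},
    {"title": "Famous Politician", "watchers": 400, "incoming_links": 950},
]
print(select_low_scrutiny(blps))  # ['Obscure Footballer']
```

Either metric could be used alone, or combined with page-views; the point of the sketch is only that once a metric is agreed, generating the candidate list is mechanical.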

Targeted flagging vs SP on low-traffic BLPs (pros and cons)


Feel free to add to this.

Targeted Flagging

(see WP:TF)

Pros

  1. All edits scrutinised - not just IPs and new accounts
  2. IPs can immediately contribute

Cons

  1. Uncertain technical software complications with implementation
  2. Labour intensive in reviewer's time
  3. Potential for low quality reviews
  4. Potential to delay good edits for regular contributors

Semi-protection


Pros

  1. Easy to implement - a metric and a bot and we're off
  2. No need for reviews

Cons

  1. IPs can't contribute without jumping some hoops
  2. Does nothing for scrutiny of accounts over 3 days - a very small hurdle for a motivated character-assassin

Comment on your high risk matrix


I'm not sure if "Long-term maintenance warning tags (unreferenced, debatable notability, COI, neutrality?)" is much of an issue. I think the poorly linked, poorly categorised, poorly wikiproject-allocated will get the least views... an article can be "hidden" out there, unlinked, uncategorised, and no-one will ever "drive by" it and check it. Other than the sneaky vandal who in a moment of huggle downtime snuck in his negative comments. If it has a long-term maintenance tag, then I reckon some day someone will decide to work on the backlog and will see it.

I would love to see the Wolterbot project-based cleanup tag listings return, but maybe my faith in wikiprojects is skewed by the Australian one being fairly active and organised. Plenty of other WPs haven't touched their UBLP lists, and some even had a go at me for notifying them that the list exists!

The other thing is stopping new editors from creating articles, and being able to remove their ability to create (apparently you can't take away someone's ability to create but leave their ability to edit). As good as BLPPROD is at stopping new unreferenced articles, it still takes up time to detect, tag, prod and then either reference or delete, when the article should not have been let in in the first place. Make editors edit for a week, 20 edits, whatever. I'm sick of seeing a first contribution being an unreferenced, poorly written BLP on their favourite XYZ. Of course they can get around any delay hurdle, which is where we need to have a "3 strikes and you can't create" rule to stop repeat offenders.

Interesting idea, glad you've moved away from "UBLPS R EV1L and must be destroyed" and started talking about the real issues, which occur in all BLPs, sourced or not. The-Pope (talk) 16:36, 29 October 2010 (UTC)

I've repeatedly said that UBLPs are not the real problem. They are, however, low-hanging fruit. The real problem is BLPs that a) have never been checked and b) are unlikely to be monitored. The point is that UBLPs are likely to fall into both those categories. If we insist that someone must reference them or we delete them, then the result is that either they are removed, or someone checks them in order to reference them. That, however, doesn't solve the problem, as an article that's festered unloved for two years, even if it now gets referenced and checks out, is very unlikely to be monitored going forward, so if someone inserts "bad stuff" it's unlikely to be noticed. Thus unreferenced BLPs are not evil because they are unreferenced, but because it is indicative of "unchecked and under-monitored". I'm the first to admit that many referenced articles are "unchecked and under-monitored" (especially if the referencing was provided by the sole creator). The same would be true of your metric of "poorly linked and poorly categorised": these things would indicate a high likelihood of an unloved article that's likely to be "unchecked and under-monitored" - however, placing a category on it and a couple of links to it won't change the fact that it is of low interest and edits to it will be under-scrutinised.--Scott Mac 18:24, 29 October 2010 (UTC)

My favoured metric for identifying "unchecked and under-monitored" articles would be number of page watchers. It isn't infallible either, but I suspect it is better.--Scott Mac 18:26, 29 October 2010 (UTC)

See my above post for an alternative. Casliber (talk · contribs) 19:23, 29 October 2010 (UTC)
In the spirit of seeking common ground for agreement, The-Pope, would you agree that long-term maintenance warning tags are problematic? Certainly not as bad, in some sense, as articles that are bad in some way but untagged and unnoticed, but still bad? And that ideally we should have systematic efforts to find and tag those which are untagged?
I think a big part of what Scott is asking us to reconsider is statements like this: "If it has a long-term maintenance tag, then I reckon some day someone will decide to work on the backlog and will see it." No doubt what you say is true, but I think Scott would argue (and I know that I would) that this is not good enough. We need to think about how to make it easier and more "top of mind" for people to work through those and to monitor the results in the long term. (Because, as you say, merely fixing a bad biography once, and then not monitoring it, isn't particularly helpful.)--Jimbo Wales (talk) 06:34, 31 October 2010 (UTC)
Of course I think long-term maintenance tags are a problem. I'm dismayed that what was a fantastic (albeit too slow to be updated) resource, the User:B._Wolterding/Cleanup_listings, stopped in March because one user left and no one else has been able to reinvigorate the bot. Those clean-up listings, whilst overwhelming to many, at least highlighted which articles in your project were listed in the many cleanup cats. If I can ask one favour of you, Jimbo, it's to facilitate the revival of this bot. Somewhere, in one of the RFCs or other pages related to the UBLP issue, I made the point: this is the encyclopedia that anyone can edit... but why do we have to organise and supervise the maintenance aspects too?
Throughout this whole UBLP issue I've been the one pushing for a "WikiProjects will save the day" approach. I've looked at the 20, 30 or 50,000 article backlogs and thought, you must be able to break it down into manageable portions. "If you organise them, they will be done". And we have broken down the list into manageable portions - at least in terms of UBLPs. Maybe my vision is skewed by mainly being associated with very active projects, Australia (and the WA and AFL subprojects) and Cricket. If every article belonged to a project, and the project was active enough and was notified of the problem articles, then they can be fixed. Because other than destroying part of the encyclopedia with bulk deletions, the only way these articles can be "saved" or "fixed" is one at a time.
To me, trying to reference articles that were tagged in a certain month, or are in topics or regions that I have no idea about, let alone know where to find references for, isn't attractive at all. To the guys at WP:URBLPR, that's their preferred MO. We need all types. We've just rewritten the page at WP:URBLP to make it clearer how to help clear the backlog, realising that there are many ways to skin the cat. Am I wrong in thinking WikiProjects can save the day, if they are given the right information and support? The-Pope (talk) 07:20, 31 October 2010 (UTC)
In which case the yes/no question for this segment right here is "Is the unreferenced BLP backlog being cleared at a satisfactory rate?" Now that might be better discussed at Wikipedia:Administrators' noticeboard/Unsourced biographies of living persons....actually maybe combining the discussion here with there (or there here...), to try and keep everything in one place?? Casliber (talk · contribs) 07:20, 31 October 2010 (UTC)
I guess this page is about the bigger picture - not just UBLPs but BLPs in totality - but it's hard to separate my thinking. I've basically repeated everything I've said here, there anyway... and Jimbo hasn't turned up there yet! The-Pope (talk) 07:44, 31 October 2010 (UTC)

Require a Source for new articles


Cleaning up old articles is time-consuming, and imposing tighter rules on them is difficult and contentious because many if not most were valid when they were created. But earlier this year we were able to partially close one stable door when we tightened the rules to require at least some sort of source on new BLPs. Most people accept that this has largely worked, though with over 100 articles in that ten-day process at any one time we clearly haven't yet succeeded at re-educating all our new and infrequent editors.

Changing the article creation process so that the system automatically prompts authors for their source should be possible; I believe it is done on DE wiki, and I doubt this would be particularly controversial. Once that technology is in place and has been working smoothly for a few months, you could tighten the article creation rules to require a source. I'd like to see the rules change to require new articles to have a reliable source, but I know that would be contentious, and if we were doing this with a view to reducing risk then I think we'd have to concede that a University Bio or similar would be enough to reassure us that an article is safe and neither an attack, a hoax, nor an NPOV violation - though it might still be non-notable or POV. Note I'm not suggesting that we only do this for BLPs - there are far too many problems in other articles to leave the other 80% of the pedia open. ϢereSpielChequers 16:54, 29 October 2010 (UTC)


High risk articles


I'm with the Pope on this: if anything, it is the untagged articles that are more risky than the tagged ones, as we can usually assume that a tag means someone has at least partially read the article. I use Botlaf to trawl through mainspace looking for high-risk phrases such as "punched him". OK, the vast majority are either sourced or innocuous, but when I find things that need to be blanked they are often, if not usually, in untagged articles. There are nearly 2,000 articles in mainspace with the word incest, over 400 with Mafiosi and over 7,000 containing the word mafia. Using Botlaf to screen them would be quite practical and fairly efficient. No policy change would be needed, though we would need to find a bot operator who codes in Python, as Olaf Davies is retiring. You'd also need some volunteers to go through the reports, as currently I'm the only one using Botlaf. But this approach does work: I've already culled an awful lot of really problematic material with remarkably little dwamah. ϢereSpielChequers 17:26, 29 October 2010 (UTC)
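The core of the phrase-trawling approach described above can be sketched in a few lines of Python (the thread's stated bot language). To be clear, this is not Botlaf's actual implementation: the phrase list, the report format, and the `scan_articles` function are all invented for illustration; a real bot would fetch wikitext via the API and post its report to userspace for human review.

```python
# Hypothetical sketch of a phrase-search bot: scan article texts for
# high-risk phrases and emit a wikitext report for a human reviewer.
# The phrase list is illustrative, drawn from the examples in the thread.
HIGH_RISK_PHRASES = ["punched him", "incest", "mafiosi"]

def scan_articles(articles):
    """articles: iterable of (title, wikitext) pairs.
    Returns one report line per article containing any listed phrase."""
    report = []
    for title, text in articles:
        lowered = text.lower()
        hits = [p for p in HIGH_RISK_PHRASES if p in lowered]
        if hits:
            report.append(f"* [[{title}]] -- matched: {', '.join(hits)}")
    return report

# Illustrative input - real text would come from a database dump or the API.
pages = [("John Doe", "He allegedly punched him at a rally."),
         ("Jane Roe", "An uncontroversial biography.")]
print(scan_articles(pages))
```

Note that, as Scott observes elsewhere on this page, plausible falsity ("was involved in the Kennedy assassination") sails straight past any such list; the sketch only finds the overtly loaded phrasing, which is why the reports still need human eyes.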

Good work. I've been doing this work for years with a bit of help from Google. The depressing thing is how easy it is to find dreadful BLP violations in just a few minutes. We really do need to find better ways of making sure such things are strangled at birth. That's what I hope we can do here. There's no magic bullet - but "let's all try harder" isn't really going to scale up either.--Scott Mac 18:29, 29 October 2010 (UTC)

If you're using a bot that searches for certain phrases, I have a framework that can easily do that, and I'm sure there are many others. Coding a search bot that writes its results to userspace is a trivial matter. Getting such a bot approved is easy because there are no mainspace risks. Tasty monster (=TS ) 18:52, 29 October 2010 (UTC)

Separating out pathways


Just to be clear, we need to set up the discussion to define the individual processes proposed. So for mine, there is the retrospective one of UBLP clearance (either continuing as is, or Uncle G's proposal), and the prospective one(s) - of targeted flagging or targeted semi'ing (of a swathe of lower-notability/traffic articles). Is it enough to define and focus on these?

My other idea was to do with the wikicup, which unlike other writing competitions held over the years on WP appears to have some traction. Operating on the carrots-vs-sticks approach - we've been discussing bonus modifiers to get contributors to focus on vital articles etc. I'd also propose BLPs there, which should result in a morale-boosting increase in the number which are more ship-shape...Casliber (talk · contribs) 19:32, 29 October 2010 (UTC)

I've tried to get the wikicup involved in uBLPs but so far had no success; the latest thread is here. I think that giving an extra point to any uBLP taken to GA or DYK would fit in with the Wikicup ethos and be of some benefit to the uBLP project.
There are several big pushes that we could do as part of the uBLP cleanup process. We had a big success in January when Tim1357 sent a bot message to 16,000 authors of uBLPs; he's done another batch since, and we could do another message to the authors of the 23,000 - perhaps with a plea from Jimbo to do a cleanup by the tenth anniversary.
The main approach has been via the projects, and as long as we don't blank the articles I think we can continue to get progress here. Again a message from Jimbo to the individual projects would make a big difference. More controversially we could bot message the individual project members, I bet there are loads of editors who've signed up to various projects that they are not currently active in - obviously that sort of spam would need a bot request.
We also have a fairly sizeable opportunity using the interwiki links. The death anomaly project has been very successful and as a byproduct has resolved scores of uBLPs, but if we can get someone to write it, the same sort of approach could be used to generate lists of uBLPs that have a referenced article on another wiki. When we floated the idea a few months back I had several volunteers willing to use such reports, but we need a bot writer (also we should remember that many other language versions of Wikipedia seem rather less concerned about referencing, so I'd be surprised if this alone fixed 10% of the problem). On the plus side, uBLPs with an interwiki link to a referenced article on another language wikipedia may be more notable than the average uBLP.
As I've mentioned elsewhere, the stickyprod process exists, and some of the >5,000 uBLPs tagged since March were probably also created since March... I think user:Epbr123 has picked up the baton on that one, as the number of articles in Category:BLP articles proposed for deletion has jumped from its usual 110-140 range to 208 in the last couple of days, and looking at the dates he is going through June and July whilst the Pope and others are dealing with this month.
But the big opportunity to get consensus is to target areas of high risk rather than medium-risk areas like tagged uBLPs. ϢereSpielChequers 00:00, 30 October 2010 (UTC)
Referencing the articles is good. Not because referencing them is good in itself, but because referencing involves checking content. I don't source articles myself, but I've spent days on that category using various metrics to seek and remove offending material. I stubbed a few (and deleted a few) even tonight. However, I am more interested in the bigger question: even if we check through all these articles (and if they are all sourced then we've done that), that only means that they are "safe" at the time they were checked. We've still got them (and heaven knows how many tens of thousands more) that are referenced but get little ongoing scrutiny. That's what I want to focus on. You can't dent that problem with a cup, or a short-term drive (not that I've any objections to either of those - go for it), but I'm more interested in seeing if there would be any chance of an agreement on semi-protection or flagging of the most unloved articles. I am actually surprised that a lot of people who are not BLP hawks warm to the idea.--Scott Mac 00:11, 30 October 2010 (UTC)
Exactly - but this is a retrospective task which hopefully makes the job of checking easier in 5-10 years. Think about it: in 2006 we were in a much worse position to remove unsourced material due to the extreme paucity of inline references. Now they are increasing rapidly. DYK checkers regularly ask for (and get) eligible articles largely sourced before they appear on the main page. All this is doing is encouraging people to source content and source vulnerable content. Casliber (talk · contribs) 03:13, 30 October 2010 (UTC)
I've semi protected dozens of low profile articles that had received multiple BLP violations, and I'd happily see the whole site implement flagged revisions either as DE wiki did or in some other form. There are lots of things that could be done to restrict new vandalism from getting into the site, and providing they are effective, efficient and don't bite legitimate editors I'm keen that we implement them. Remember this isn't a debate between people who are hawks and doves on the issue of BLPs, this is a debate between people who see deleting old unreferenced BLPs as the priority and those who think that being a hawk on BLP issues means prioritising other issues where the proven potential for BLP improvement/finding BLP violations is greater. ϢereSpielChequers 09:36, 30 October 2010 (UTC)
I'd rather it wasn't a debate over ublps at all. We can debate that elsewhere. I still believe in deleting them, and you don't. What I am at is to say, "let's talk about other things too", because whatever we end up doing with ublps, there may be other places we can get agreement.--Scott Mac 10:09, 30 October 2010 (UTC)
Yes, I think that is a good course of action. Casliber (talk · contribs) 08:43, 31 October 2010 (UTC)

BLPs are not the problem


It was pointed out a good while back, but ignored--because admitting it greatly increased the scope of the problem--that damaging error about living people and many other things is also present as much or more in the non-BLP articles as in the explicitly BLP ones. All of Wikipedia needs to be reviewed on a regular basis. The entire direction of concentrating further on BLPs is useless. Our concern must be with all articles. I therefore see no basis for doing anything along the lines suggested for BLPs in particular, until a problem can be demonstrated.

I therefore take two approaches here. One is of trying to diminish the harm that the apparently inevitable concentration on BLPs will produce. The other is solving the problem. For the first part, I agree completely with the direction both Cas and Scott are taking, that concentration on uBLPs is not the answer. I also agree with the principle of trying to balance the most efficiency with the least harm--these factors do need to be considered together. And I also agree with the general feeling that delayed implementation of edits is always preferable to semi-protection, except for dealing with short periods of real vandalism or otherwise unmanageable controversy on a very few hot topics. The trial showed flagging was not practical for heavily edited articles; I understand the developers are working on ameliorating the edit conflict problems involved, but for some articles it will always be a limitation.

But I recall that the trial also showed that there were too few edits to the unwatched, least edited articles to be worth any special attention to them. The dispersion of effort over articles on this basis is too low in efficiency to be worth considering. Any use of targeted flagging will require something much more subtle. I suggest a general principle known to apply in other fields of activity: the best measure of an article being improperly edited is whether it has been previously. (We know this already--it is the basis on which we now use semi-protection.) I could see a start by applying flagged editing for a short period automatically after a certain number of problems, and longer with increasing numbers, as we do in blocking. What the numerical values should be needs discussion. I'd be perfectly willing to accept the opinions of those who think that BLPs are the major problem by applying smaller numbers of triggering events to such articles. As for practical application, we could make use of the edit filters to count not just successful improper edits, but attempts at them.

As for what I consider the actual problem:
The principle of many eyes will find most errors, but there is no way an unregulated process can possibly find them systematically. What's really needed is some formal quality control and reviewing of everything here, on a regular and continuing basis. I do not know how this can be done within the principles of a collaborative project of the sort that Wikipedia is. I do not know how we will even agree on standards. The only data we have, the ongoing review of the politics material, shows that many users are willing to accept as adequately referenced even articles that have no references at all.

Our entire article creation & editing process will -- by its inherent nature -- be amateurish and unpredictable. Since there is no possibility of having top-down quality control, the only alternative is having yet more eyes, that is, greater participation. The only way to have greater participation is to have a high rate of conversion of casual readers to actual editors. On the basis of both formal surveys and my own informal discussions with readers at a very wide range of sophistication, this requires two things: first, a much more obvious editing interface for the 95% of the world who are not willing to work with code, and second, much greater encouragement of people coming here for the first time and either fixing or starting articles. Anyone who discourages an editor harms both the possibility of filling in the enormous gaps and of correcting the existing material. The entire direction of concentrating on flagging tends to do this, and I therefore totally reject any approach along the lines suggested. We already are much too discouraging, and we should be moving in the opposite direction. In particular: we must do something about the way we notify people who have made unacceptable edits or articles, in order not just to stop the bad work, but to persuade them to contribute good work. My first thought is to remove all the warning templates, but I can't see how to totally avoid them--most patrollers are very unlikely to write a personal, effective comment. In patrolling, I try to rescue potentially good editors who are already upset by the unfriendly attention they receive, but I am considering taking a break from this to rewrite the entire set of templates to make them half the length and twice as friendly. This is probably the way I personally can make the most effective contribution. DGG ( talk ) 00:07, 30 October 2010 (UTC)

Thanks for your honesty. I'm not going to argue against your ideas of removing warning templates, or finding a more user-friendly interface, because whatever I might think of this, it isn't going to happen. It is as idealistic as my dream of deleting half the least notable BLPs. We are stuck with the fact that the participation in this project is on a downward curve, and yet is maintaining an increasing number of articles - and using the same tools to do that. Whatever agreements we might or might not find on this page, we are not going to be able to change those background facts.
Of course, the problem is not with BLPs per se; it is (as you say) with material in any article that may unfairly reflect on a living person. However, the fact is (or at least was when I used to do OTRS) that a disproportionate number of legitimate complaints about prejudicial material concerned BLPs. That (and I'd welcome a current OTRS op commenting) was just the way things were.
Just to confirm here: There are 13 tickets in info-en-q at the moment (which is down about 150 from last week, so I'm not sure how much is a result of the few people who were going through the backlog just not wanting to deal with these ones). Every single ticket refers to either a biography of a living or recently dead person. NW (Talk) 05:18, 30 October 2010 (UTC)
If we do anything, we need to target it where it's most effective. If we are trying to improve scrutiny, we need to target it where scrutiny is weakest. You've suggested "the trial also showed that there were too few edits to the unwatched least edited articles to be worth any special attention to them". I don't follow that. If there are "few edits" then that should be easier to work with - since they are easier to review - and if we slow them down we don't do as much damage as we would with "many edits". We get lots of bad edits to George Bush, but there's easily enough knowledgeable scrutiny to deal with them. That's why I oppose semi-protection of high-profile articles. It doesn't help, and there's no chance of us damaging reputations. If the vandal slayers get annoyed, they can unwatch, because there's no lack of other eyes.
Your other comment was that we might target articles with a "certain number of problems". That's worth exploring. But what do we mean by problems? If we simply mean "bad or defamatory edits" then my issue is that you are back to targeting George Bush and Sarah Palin, which easily get the highest number of these. I'd say that's the wrong way round. The problem is not "bad edits", it is "bad edits that don't get removed" - so these are the least vulnerable articles. They are also the articles where there's the highest cost in terms of users restricted. I suppose if we said "let's semi-protect any BLP which has had violating material not removed after x hours", we might have something. But then, the articles where the bad stuff isn't spotted would never become those which we targeted for having a "certain number of problems", because we would never spot the problems (or take too long in doing so). There's a bit of door and horse having bolted here. But let's see what others think.--Scott Mac 00:35, 30 October 2010 (UTC)
I must say I agree with a lot of what DGG is saying - I worry a lot about the driving off of new editors, and am concerned about new editor uptake. Maybe look for some data on this? However, I agree there are areas of greater vulnerability. Luckily, my other area of interest (medical articles) has a bunch of dedicated doctors watching, as well as pretty easily checkable sourcing (PMID etc.) :) Casliber (talk · contribs) 03:15, 30 October 2010 (UTC)
Again it is down to cost vs benefit. No one denies there's a cost to almost any new initiative to reduce BLP risk. And few deny that at least some of these initiatives have benefit. But how to assess the size of the cost or the size of the benefit in each case? And (and this is a moral question, I suppose) even if we could assess the size, how do you weigh the value of the cost against the value of the benefit, given that the cost is to the project and the benefit to third parties? Whatever our views here, we all have to face that same dilemma. I guess one question is: where do new editors start? What type of articles attract them (or statistically more so)? And what type of articles have the highest risk of unspotted "bad stuff"? If we can find any differential here, we can target in a way that has lower cost and higher benefit, and may thus meet a threshold for the majority of Wikipedians.
Please don't think I don't care about this. I've been one of the most vociferous opponents of the growing use of semi-protection on high profile articles. I'm opposed to this because of the metric above. Semi-protecting well-scrutinised articles comes at the cost of discouraging new editors, and has zero benefit to the subjects. The benefits are all to ourselves, in terms of avoiding the embarrassment of obvious vandalism being seen, and the work of continually reverting. It seems to me morally better to ask volunteers to pay a price in annoyance (they can walk away for a bit) than to ask third parties. It seems all wrong that George Bush has been indef semi-protected since June, and yet Joe Bloggs is wide open to libels and no one is watching.--Scott Mac 08:57, 30 October 2010 (UTC)
One of the philosophical divides we have to contend with here is between those who see this as a perimeter problem and those who see it as a process/community health problem. The perimeter argument as I understand it is that the problem is related to the number of articles on Wikipedia: ditch x% of the articles and you reduce the size of the problem by something approaching x%, and you can concentrate all your resources on a shortened perimeter. The process/community health argument is that the problem is related to the number of edits and how efficiently they are checked, whilst the resources to check them are linked to the size of the pedia and the health of the community. Proponents of the latter view are likely to fear that ditching x% of articles would reduce the size of the community faster than it reduced the flow of vandalism, so increasing the amount of vandalism that each remaining member of the community needs to clean up. Note the two approaches are not mirror images: as far as I'm aware those who think of this as a perimeter problem may at worst see process and community health based approaches as irrelevant, but they are unlikely to see that approach as harmful. From a community health perspective, by contrast, a perimeter-based solution risks making things worse by reducing the resource of volunteers as fast as, or possibly faster than, it reduces the flow of vandalism. I suspect DGG approaches this from the process/community health perspective, and that those who favour deleting x or y group of articles tend to the perimeter mentality. Personally I'm close to one extreme on this divide, but I'm aware of both views and think that to get consensus for any change you need to consider that change from both perspectives. I think that the easy wins here are going to be the ones that don't need consensus to implement, or that can be sold to editors from both perspectives. ϢereSpielChequers 11:02, 30 October 2010 (UTC)
Everyone (I hope) considers both - after that it is a question of weight and ymmv. However, let's zone in on the middle. It is irrelevant that I might favour deleting 100,000 articles, because it isn't going to happen. It is irrelevant that DGG favours banning warning templates, because that isn't going to happen either. Magic bullets from either extreme don't exist. So, if any improvement is going to be agreed it is going to be (for the moment) relatively small and cautious, with a fairly low cost to the project. It is going to start by looking something like taking the BLPs which are on no watchlists at all and semi-protecting them. Let's leave the extremes of idealism aside, ignore any idea which (whether you like it or not) obviously has no chance of gaining consensus, and see what can be done.--Scott Mac 11:12, 30 October 2010 (UTC)
OK, providing we can avoid publicly identifying unwatched BLPs I would support a policy change to semiprotect them. But I think it would be easier and more effective to increase the numbers of vandalised BLPs where we apply semi-protection. For example if we could get a listing of BLPs where Rollback has been applied 3 times in the last month, I suspect a high proportion would qualify for semiprotection under current policy. Also if we can get some volunteers to expand their watchlists we could give them lists of unwatched BLPs, I'm not keen to drive that myself because I'm trying to keep my watchlist below 12,000, but I see it as a partial solution. ϢereSpielChequers 16:06, 30 October 2010 (UTC)
Aye. I think there's two possible things. a) Affording some protection to the underwatched (however defined). b) Affording protection to articles where we have identified previous problems. I'd oppose using a rollback metric here, because it would tend to identify high-profile articles with lots of incoming, and quickly identified, vandalism. We'd end up with every top politician and celebrity locked down. At that point I strongly object on the grounds DGG has outlined above. Too high a cost - and too little benefit to subjects. However, we could define "articles known to be vulnerable because of past problems" as "articles where previously serious and unequivocal BLP violations were not reverted within reasonable time (say 48 hours)" and invite admins, or OTRS ops, to indef semi-protect any such article. These are articles where our existing quality control has self-evidently failed, and so we need to offer the subject more protection going forward.--Scott Mac 16:27, 30 October 2010 (UTC)
OK how about we change my rollback metric to "BLPs where Rollback has been applied 3 times in the last month, and total edits are less than twenty in the last month"? That, or some other cap on total edits, would avoid the high profile ones that you want to avoid. Alternatively "Any BLP where Rollback has been applied 48 hours after the previous edit" would in my view get a batch of articles where many would qualify for semi-protection. As for inviting people to make more use of semi-protection, why not submit an opinion piece to the Signpost? I've done this a couple of times in the last few months; one article kickstarted the death anomalies project and the other flushed out several RFA candidates. ϢereSpielChequers 17:40, 30 October 2010 (UTC)

Waiting period for autoconfirm is too short


This doesn't directly deal with the pages the info is on, but rather with those who are editing the pages, and looking at the behaviour of the editor.

One of the things that keeps coming up is the sense of how overly easy it is to get autoconfirmed.

I think (and partly based upon an rfc concerning this) that we should change the 4 days/10 edits to 7 days/20 edits.

The "urge" to edit in a way that may be deemed inappropriate (vandalism, etc.) can be cooled at least some.

I've found in dealing with such people, that once a person has to wait past their weekend (whatever days that might be), the urge is often gone and forgotten - typically replaced with other impulsive urges. And many editors have a 5 day school week or work week.

So simply increasing the number of days required from 4 to 7 might do wonders, and wouldn't affect positive editors overly much (especially since admins can now give "autoconfirmed" out ahead of time at their discretion).

And this would help directly deal with the question of how useful/effective semi-protection might be on low-traffic/low-watched pages. - jc37 05:17, 30 October 2010 (UTC)

I think it should be an AND requirement, not an OR. Make them wait AND contribute in the meantime by editing first. And ensure the 20 edits (counted in mainspace only) aren't reverted or deleted. Yes it may slow down new page creation... but is that a problem? The-Pope (talk) 06:34, 30 October 2010 (UTC)
I agree with the idea of raising the threshold for autoconfirmation. Wikipedia talk:User access levels/RFC on autoconfirmed status required to create an article closed in January this year so you could reraise it, but I think I know what the result would be. Increasing the threshold, making it "and" not "or", or only measuring mainspace edits would in my view have a better chance of success. ϢereSpielChequers 10:00, 30 October 2010 (UTC)

Do we think this is really a problem? I find when I semi- things, they stay pretty calm and the petty vandalism is largely fixed. This may vary with other people's experiences (and it sounds like it does). Casliber (talk · contribs) 23:59, 30 October 2010 (UTC)

It generally does work. But sometimes you have a slow-burning SPA editor. These guys tend to work out how to game semi. However, I'm not sure raising the threshold really helps with that - it is just one of the issues semi doesn't help with. Flagging does, but flagging has other issues.--Scott Mac 00:11, 31 October 2010 (UTC)
I mean, how often is "sometimes"? No method is foolproof, we just need a variety of processes to make some decent headway. If it is pretty rare, then maybe keeping autoconfirmed to the current short period is better, to facilitate editing by new users (? - one of the issues about user-friendliness raised by DGG above). Casliber (talk · contribs) 01:35, 31 October 2010 (UTC)
I'm not arguing for extending it. All I'm saying is it isn't a magic bullet and determined character assassins are not always fly-by-nights. I'd oppose extending it because I'm not convinced it would do appreciably more good with a 7 day threshold than a 4 day one. Semi-protection will do what semi-protection can do, and extending it won't make too much difference.--Scott Mac 01:50, 31 October 2010 (UTC)

Okay, just clarifying. I was actually open-minded on this but tend to agree with you that there is negligible value in extending it. Casliber (talk · contribs) 03:29, 31 October 2010 (UTC)

Another (unworkable?) suggestion


Can we tag all these high-risk BLPs and do the equivalent of new-page patrol on them? Rather than using flagged revisions, with all the issues associated with that, can we just have a mechanism whereby established and trusted editors can indicate that they have reviewed the changes? I've no idea how many folks would be interested in doing those reviews, nor do I know if there is a solid mechanism for noting you've checked the edit (I haven't done NPP). But would that work? The patroller could then request a semi for some time period if vandalism was recurring. We could even lower the bar for giving that protection to high-risk BLPs (those with few watchers etc.) Thoughts? Hobit (talk) 10:00, 31 October 2010 (UTC)

Can you spell out how this would work? I think we're on the same page with targeting underwatched BLPs. I'm not sure that semi-protection "for a time" would help. The nature of these things is that we get very few edits to each, over very long periods of time. I think we either need to look at mechanisms which mean each edit is scrutinised (flagging is the usual way, but maybe you're on to something else) or long-term pre-emptive semi-protection. I'm also wondering whether we can find out the numbers involved.--Scott Mac 15:24, 31 October 2010 (UTC)
Design a bot that watches unwatched articles and then makes a list of articles that have been edited for a human to check later. A dozen eyes on a single list could become the equivalent of a dozen eyes on a thousand articles. Resolute 14:22, 1 November 2010 (UTC)
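A minimal sketch of the core of such a bot, assuming the list of unwatched titles and their last-edit timestamps have already been fetched (e.g. via the MediaWiki API's action=query&prop=revisions); the function name and data shape here are purely illustrative, not any existing bot's interface:

```python
from datetime import datetime, timedelta

def recently_edited(last_edits, since_hours=24, now=None):
    """Given a mapping of article title -> last edit timestamp
    (ISO 8601, as the MediaWiki API returns it), list the titles
    edited within the last `since_hours` hours. The result is the
    report a human patroller would review."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=since_hours)
    report = []
    for title, stamp in last_edits.items():
        if datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ") >= cutoff:
            report.append(title)
    return sorted(report)
```

Run daily, the bot would post `recently_edited(...)` to a single report page; a dozen watchers of that one page then cover every unwatched article between them.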
The 'Related changes' link sort of does something like this - it's like having dynamic watchlists, but not many editors know about it. The problem is that it would be fairly unworkable at the moment - doing a related changes on the UBLP or BLP sources cats would be very overwhelming. You can do it on any page of links or cat, so project article lists, cleanup cats etc can all become "bookmarkable watchlists". The-Pope (talk) 15:00, 1 November 2010 (UTC)
I'm thinking beyond just BLPs though. And besides, I just checked related changes on Vincent Lecavalier, and got a pile of hits from NHL team articles - none of which are BLPs, nor are they unsourced or unwatched. I think a bot monitoring changes would allow us to better focus our efforts. My expectation is that completely unwatched articles (not just BLPs) will receive a relatively small number of edits, and so a bot generated list would not be that hard to maintain. This speculation, however, would have to be verified by someone who can answer Scott's questions on the number of unwatched and lightly watched articles. Resolute 15:29, 1 November 2010 (UTC)
I guess the benefit is that you can make up the lists as you please, rather than just on existing articles. It effectively is the "multiple watchlists" that some people want, but few people know about. Here are some examples, some I use regularly, others I made up just as examples:
They are good things to have bookmarked and check every now and then when your own watchlist is quiet. For the "low watchlist" articles, could a bot/database report make a list that only admins could view, and then admins be encouraged to use the related changes function on that list to keep an eye on them? The risk of making the database report public is well known. The-Pope (talk) 15:36, 1 November 2010 (UTC)
I have to admit, I was not really aware of that function. I'll have to look into applications for that. As to a bot creating that list, if we have it continually update changed articles, then we could allow non-admins to view it as well, since more eyes means more coverage. Resolute 16:11, 1 November 2010 (UTC)

Those are excellent examples (although notice that unlike real watchlists they don't include changes on the talk page, which are significant in the case of BLPs). I would like to compile a list of the intersection of Special:UnwatchedPages and Category:Living people, which would also include the relevant talk pages. By looking at related changes to these pages we would pick up edits that would otherwise be all but invisible. For an example of how this works see the link marked "Related changes" on this old revision of my user page. The master list is in User:Tony Sidaway/Articles under climate change probation (no longer being updated as I'm taking a self-imposed break from that topic).

Having public watchlists like this, adequately curated and regularly visited, greatly magnifies our effectiveness as a community.

Unfortunately the UnwatchedPages function is only accessible to admins, so I'll have to get a tech-savvy admin to run the job to create such a page. I have some candidates in mind so I'll get back to you on that.

Another thing I'd like to encourage is for all editors interested in BLPs to publish their personal watchlists. There are instructions on how to do this at Wikipedia:Syndication#Watchlist_feed_with_token. Here is a link to my public watchlist. If for any reason you have to stop using Wikipedia regularly, others will be able to look at changes on your watchlist if you've had the forethought to put a link to your personal watchlist RSS feed on your user page.

I don't at all agree that unwatched and low-watchlist functions should be for admins only. If we all had access to such lists it would greatly improve scrutiny on the relevant changes. --TS 16:15, 1 November 2010 (UTC)

If we broaden the unwatched list beyond admins how would we prevent vandals finding out about them? ϢereSpielChequers 17:03, 12 November 2010 (UTC)
That's where a bot tracking changes to those articles comes into play. As long as we have a list of what to check, the vandals won't be able to hide in the "darkened alleys" of the 'pedia. Resolute 20:12, 12 November 2010 (UTC)

Statistical questions


Is there any way to find out? (Please add any questions, or any answers you think might help.)

  1. How many BLPs are on NO watchlists?
  2. How many BLPs are on fewer than 3 or 5 watchlists?
  3. How many edits per week are made to BLPs on no watchlists?
  4. How many edits per week are made to BLPs on fewer than 3 or 5 watchlists?
  5. What percentage of edits to such BLPs are performed by anons?

Answers to statistical questions


I suppose the questions are predicated on how we identify BLPs. I suggest that a suitably written bot can trawl the article list to find articles that are not yet tagged with Category:Living people but whose names are suggestive of an article about a person. From the volume of those we will then know how to proceed. I suspect that most of our biographical articles are about living persons rather than historical figures--there is a very limited supply of historical figures but a seemingly inexhaustible supply of living people that people want to write articles about. Of the remainder we can probably turf out most of the deceased by selecting on words like "died" or on phrases suggesting birth years more than a century or so ago. But the details of what to do would depend very much on the results. Has anybody done something like this before? --TS 21:31, 31 October 2010 (UTC)
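For what it's worth, TS's title-and-text heuristics could be sketched like this. This is illustrative only: the patterns and thresholds are guesses, and a heuristic this crude will misfire (e.g. on place or object names shaped like personal names), so any real run would need far more care:

```python
import re

# Disambiguators that suggest a non-person topic (illustrative list).
NON_PERSON_HINTS = ("(band)", "(album)", "(film)", "(song)", "(character)")

def looks_like_person(title):
    """True if a title is shaped like 'Firstname Lastname':
    two to four capitalised words, no digits, and no
    disambiguator hinting at a non-person topic."""
    lowered = title.lower()
    if any(h in lowered for h in NON_PERSON_HINTS):
        return False
    if any(ch.isdigit() for ch in title):
        return False
    words = title.split()
    if not 2 <= len(words) <= 4:
        return False
    return all(re.match(r"^[A-Z][a-z'.-]+$", w) for w in words)

def probably_deceased(intro_text):
    """Crude filter for the obviously dead, per TS's suggestion:
    the word 'died', or a birth year more than about a century ago."""
    if "died" in intro_text.lower():
        return True
    m = re.search(r"born[^0-9]{0,20}(\d{4})", intro_text)
    return bool(m and int(m.group(1)) < 1900)
```

Articles passing `looks_like_person` but failing `probably_deceased` would go to a human review queue, not be tagged automatically.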

There are bound to be many BLPs not in the category. However, I'd be content to base the statistics on those that are. --Scott Mac 21:45, 31 October 2010 (UTC)
Agreed that an estimate on existing contents of the category would be better than nothing. It is important in the medium term to complete the task of locating all biographies of living people, though, for the purpose of tracking their evolution accurately and compiling accurate statistics that would enable us to concentrate on the whole problem. I may do something in this field in the near future if time permits. Tasty monster (=TS ) 23:11, 31 October 2010 (UTC)
If you do such a bot run you might like to know that a big part of the annoyance that the various RFCs stirred up earlier this year was because of inaccurate bot tagging, and I'd suggest being careful not to repeat that. For starters many in category:fictional characters have names that are "suggestive of articles about a person", and you might consider excluding them along with the dead. We currently have half a million articles categorised as Living people, though I suspect that includes some errors and lots of dead people. I'm sure there will be thousands, possibly tens of thousands, of the other 2.9 million articles that are BLPs we haven't yet categorised, but yet another exercise that tagged articles like Australian State Coach as a BLP would be contentious. By contrast, going through Category:Uncategorized pages categorising the articles that look like they might be real people would be useful and uncontentious. As someone who has encountered rock groups, fictional characters and even an Anglo-Saxon king amongst the articles tagged as unreferenced BLPs, I would urge you to be accurate in whatever trawl you do. ϢereSpielChequers 14:29, 11 November 2010 (UTC)
User:Epbr123 seems to be very accurate in identifying both untagged living people and untagged UBLPs. Other than WP:AWB, I don't know what method he uses to find them, but any time the "UBLPs to go" number jumps, I see that he's been busy. He seems to do it in two passes: the living people cat first, and then a few days or weeks later, the UBLP tag. The-Pope (talk) 15:09, 11 November 2010 (UTC)

Lists of articles about living people


I grabbed a snapshot of articles in Category: Living people and am in the process of splitting them into groups of 2000 articles. Here's the beginning of a conglomeration of these articles. See what you think.

The "related changes" links can be used to track recent changes in the articles, so if you click a single link you'll see what's going on.

Grouping the articles 2000 at a time may not really be the best thing to do, because there are about half a million articles so that's still about 250 article lists. I'll experiment with larger list sizes. Getting down below 100 lists should be enough.

Meanwhile this gives a feel for how we could organize the division of labor.

One good thing about this kind of list is that it's very, very easy to update automatically. I could set up a bot to refresh the contents of all the lists at weekly intervals. This should be just a few lines of code with a good bot framework. --TS 01:04, 3 November 2010 (UTC)

Another thing that can be done is listing the articles by year and month of last edit. It would be a very low-cost operation for a regular editor to adopt a few hundred of the most rarely edited articles and put them onto his watchlist. In this way hidden vandalism might be detected more quickly. --TS 01:36, 3 November 2010 (UTC)

Okay, a bit more coding and here's the final thing: every single BLP in Category:Living people, arranged in 100 tranches of 5000 articles with a handy "related changes" link to click on each.

So from that page you can look at every single edit to a BLP on Wikipedia in recent days. Of course you can do the same with one link here, but that's just an overwhelming number of edits for one person to look at. --TS 06:26, 3 November 2010 (UTC)
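The tranche-and-link generation TS describes really is only a few lines with any bot framework. A sketch (the bot subpage names below are hypothetical, for illustration only):

```python
def tranche_pages(titles, size=5000):
    """Split a list of article titles into fixed-size tranches and
    build, for each tranche, the wikitext of a list page: a
    Special:RecentChangesLinked ('Related changes') pointer back to
    the page itself, followed by one bulleted link per article.
    Returns a mapping of page name -> wikitext, ready for a bot
    framework to save weekly."""
    pages = {}
    for i in range(0, len(titles), size):
        chunk = titles[i:i + size]
        name = "User:ExampleBot/BLP watch/%d" % (i // size + 1)
        body = "\n".join("* [[%s]]" % t for t in chunk)
        pages[name] = ("[[Special:RecentChangesLinked/%s|Related changes]]\n\n%s"
                       % (name, body))
    return pages
```

With size=5000 and roughly half a million titles, this yields the 100 tranches mentioned above, each carrying its own one-click "Related changes" view.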