Undoubtedly, the Essjay scandal caused significant bad press and loss of goodwill for Wikipedia. Further, it undermines the perceived reliability that Wikipedia has earned. There have been a number of proposals to address this issue, most of which involve some form of user identification.

Is verification necessary?

Without any form of verification, we run the risk of a second Essjay scandal. The question is: is that acceptable?

On the one hand, the negative press that Wikipedia has received is obviously a black eye, and something that none of us wants repeated. On the other, the world press is notoriously fickle and short of memory, and reactionary policy may do us more long-term harm than good if it becomes too restrictive.

I believe that even if nothing is done, six months, if not six weeks, from now the major effects of this event will have died down, and reaction to Wikipedia will have somewhat "regressed to the mean."

Problems with open identification

Unfortunately, if not handled properly, I think this has the potential to cause an even worse scenario than the "Essjay scandal." The internet is a big place, and it is filled with all types; this includes people who, given the necessary information, are not above stalking editors or administrators with whom they have tangled. It has happened to online personas here often enough. I know of at least one administrator who had to change his or her (vague on purpose) wiki ID and give up admin privileges because that identity was being vilified, pilloried, and ridiculed on various blogs and websites, for no other reason than that the people running those sites did not like some decisions the admin had made. We all know about Wikipedia Review as well.

Cyberstalking has turned into real-life stalking in places such as MySpace; if we require public identification of editors, we run the risk of a real-life stalking based on a Wikipedia incident, whose fallout would make the Essjay scandal look like a walk in the park.

Problems with need for verification

Another potential permanent change to wiki society that verification may cause is the ad verecundiam issue. Once we have a system for verifying credentials, I am afraid that non-credentialed people will have their edits summarily reverted with edit summaries of the form "I'm an expert and you are not." While experts are often our best resource, we only need to look at some of our more contentious articles to see that giving experts an advantage may not always be the best policy.

Suggestion

However, there is no doubt that many want some measures in place to help prevent an embarrassment to the project similar to this incident. As such, I think that the most effective, and safest, way is to use a structure based on a web of trust, similar to PGP or Thawte.

While having to reveal your identity anywhere public is something that I feel does more harm than good, I think that most of us have one or two wiki editors whom we trust, and to whom we would feel comfortable revealing our identities. If person A is vouched for by someone you trust, then you have good reason to trust person A yourself. The more a person is vouched for, the more comfortable you can be with their statements about themselves.
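As a rough sketch only (the function name, hop limit, and data shape are my own illustrative choices, not part of any proposal), the vouching rule above can be modeled as reachability in a directed graph, where an edge means "X vouches for Y":

```python
from collections import deque

def is_trusted(vouches, me, target, max_hops=3):
    """Breadth-first search over the vouch graph: `target` is trusted
    if a chain of vouches no longer than `max_hops` links `me` to them.
    `vouches` maps each editor to the set of editors they vouch for."""
    if target == me:
        return True
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops == max_hops:
            continue  # chain would grow too long; stop extending it
        for vouchee in vouches.get(user, ()):
            if vouchee == target:
                return True
            if vouchee not in seen:
                seen.add(vouchee)
                queue.append((vouchee, hops + 1))
    return False
```

The hop limit reflects the intuition that trust decays along a chain: a friend-of-a-friend carries weight, but a tenth-hand vouch carries little.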

Further, similar to Thawte, we may wish to set up certain "notaries". These would be people, such as Jimbo himself, OFFICE staff, perhaps ArbCom, who are trusted by Wikipedia, its board, and a large subset of its users, and whose "verification" stamp would carry implicit trust from the community. These people would themselves have to be of both utmost discretion and high moral standards, to be entrusted by all of us with some details of our identities. The information could, nay should, be deleted after verification so that it could never be used to harm an editor. These few people may need to reveal their own identities in the spirit of Quis custodiet ipsos custodes, or perhaps revealing their identities to the Board is sufficient. These people may also have the ability to "revoke" the trust of users found to have maliciously and intentionally violated the trust of Wikipedia.

Each person could have a user subpage with the signatures of those who vouch for them, similar to signing a PGP key.

A method of this form allows us to choose to whom we reveal our identities and credentials: those whom we trust. Over time, such sets of "trusted" users would merge and create a "strong set" of trust among users.
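In PGP terms, the "strong set" is the largest group of participants who are all mutually reachable through signatures (here, vouches). A naive sketch of how such a set could be identified, assuming the same dict-of-sets vouch graph as above (the quadratic reachability scan is for clarity, not efficiency):

```python
def strong_set(vouches):
    """Return the largest set of editors who are all mutually reachable
    through vouch chains (the PGP-style 'strong set')."""
    users = set(vouches) | {v for vs in vouches.values() for v in vs}

    def reachable(start):
        # Depth-first walk collecting everyone `start` can reach.
        seen, stack = {start}, [start]
        while stack:
            for nxt in vouches.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    reach = {u: reachable(u) for u in users}
    # Two editors are in the same component iff each can reach the other.
    components = set()
    for u in users:
        components.add(frozenset(v for v in reach[u] if u in reach[v]))
    return max(components, key=len, default=frozenset())
```

For example, three editors who vouch for each other in a cycle form a strong set, while someone who vouches for them but receives no vouches back remains outside it.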

Is this procedure open to abuse? Of course, like any other; but with the majority of Wikipedians well-meaning, I think this is a safer yet effective way to enhance Wikipedia's credibility without sacrificing safety.