
For the past sixteen months, the Wikimedia Foundation has been having uncomfortable conversations about how we handle controversial imagery in our projects — including, a few weeks ago, the staging of a referendum on an image hiding feature requested by our Board. The purpose of this post is not to talk specifically about the referendum results or the image hiding feature: for that, I’ll be talking in more official venues. The purpose of this post is to step back and assess where we’re at, and to call for a change in tone and emphasis in our discussions.

Please note also that due to the nature of the topic, you may find yourself offended by this post, and/or the materials linked from it.

In March 2010, editors on the German Wikipedia ran a poll asking their colleagues whether they would support a rule restricting the types of material that could appear on the German home page. Thirteen voted in favour of restrictions, and 233 voted against. A few weeks later, the German Wikipedia featured the article about the vulva on its home page, which included a close-up photograph of an open vagina. Twenty-three minutes after the article went up, a reader in Berlin wrote “you can’t be serious?!,” and called for the image to be taken down. This initiated an on-wiki discussion that eventually reached 73,000 words – the length of a shortish novel. It included a straw poll in which 29 people voted to remove the image and 30 voted to keep it. The image was kept, and the article remained on the front page for its full 24 hours.

A few months later, in June, the Wikimedia Foundation Board of Trustees began to discuss how the Wikimedia community was handling controversial imagery. Why? Because some people seemed to be using Commons to stockpile commercial porn; because the German community had put a close-up photo of a vagina on its homepage; and because upskirt photos and controversial editorial cartoons were being categorized in ways that seemed designed to provoke, while the people who complained about them were shot down.

The Wikimedia Foundation was concerned that a kind of market failure might be happening — that the Wikimedia community, which is generally so successful at achieving good decision quality through a consensus process, was for some reason failing to handle the issue of controversial material well. It set out to explore what was going on, and whether we needed to handle controversial imagery differently.

That triggered community members’ fears of censorship and editorial interference. And so we find ourselves today, sixteen months later, locked in angry debate. At a meeting in Nuremberg a few weeks ago, German Wikipedian User:Carbidfischer furiously denounced our Board Chair Ting Chen. The other day –as far as I know for the first time ever– somebody called someone else an asshole on one of our mailing lists. User:Niabot created this parody image. It’s unpleasant and unconstructive, and if you’re familiar with transactional analysis, or with the work done by the Arbinger Institute, you’ll recognize the bad patterns here.

The purpose of this post is to figure out why we aren’t handling this problem well, and how we can get back on track.

So: backing up.

Is there a problem with how the Wikimedia projects handle potentially-objectionable material? I say yes. The problems that led the Board to want to address this issue still exist: they have not been solved.

So what’s the solution? I have read pages upon pages of community discussion about the issue, and I sympathize and agree with much of what’s been said. Wikipedia is not, and should never be, censored. It should not be editorially interfered with.

But refusing censorship doesn’t mean we have no standards. Editors make editorial judgments every day, when we assess notability of topics, reliability of sources, and so forth. The German Wikipedia in particular is known for its extremely rigorous standards.

So why do we refrain from exercising editorial judgment on this one issue?

I think there are two major reasons.

First, we have a fairly narrow range of views represented in our discussions.

We know that our core community represents just a sliver of society: mainly well-educated young men in wealthy countries, clustered in Europe and North America. It shouldn’t surprise us, therefore, when we skew liberal/libertarian/permissive, especially on issues related to sexuality and religion. Our demographic and attitudinal narrowness is a shame, because at the heart of the projects is the belief that many eyes make all bugs shallow; and yet we’re not practicing what we preach. Instead, we’ve become an echo chamber: we hear only voices like our own, expressing points of view we already agree with. People who believe other things fall silent, or abandon the conversation, or are reduced to impotent rage. Or, even likelier, they never made it to the table in the first place.

Second, we are confusing editorial judgment with censorship.

Censorship is imposed from outside. Editorial judgment is something we do every day in the projects. Applying editorial judgment to potentially-objectionable material is something that honourable journalists and educators do every day: it is not the same as censorship, nor does it constitute self-censorship.

In newsrooms, editors don’t vote on whether they personally are offended by material they know their readers will find objectionable, and they don’t make decisions based on whether the angry letters outnumber the supportive ones. They exercise empathy, and at their best they are taking a kind of ‘balance of harm’ approach — aiming to maximize benefit and minimize cost. The job is to provide useful information to as many people as possible, and they know that if people flee in disgust, they won’t benefit from anything the newsroom is offering. That doesn’t mean newsrooms publish only material that’s comfortable for their readers: it means they aim to exercise good judgment, and discomfit readers only when –on balance– discomfort is warranted.

How does that apply to us? It’s true that when people go to the article about the penis, they probably expect to see an image of a penis, just like they do when they look it up in a reference book in their public library. It’s also true that they probably wouldn’t benefit much from a gallery of cellphone camera shots of penises, and that’s why we don’t have those galleries on our articles. In lots of areas, we are currently doing a good job.

But not always.

When an editor asks if the image cleavage_(breasts).jpg really belongs in the article about clothing necklines, she shouldn’t get shouted down for prudishness: we should try to find better images that don’t overly sexualize a non-sexual topic. When an editor writes “you can’t be serious?!” after vagina,anus,perineum_(detail).jpg is posted on the front page, the response shouldn’t be WP:NOTCENSORED: we should have a discussion about who visits the homepage, and we should try to understand, and be sensitive to, their expectations and circumstances and needs. When we get thousands of angry e-mails about our decision to republish the Jyllands-Posten Muhammad cartoons, we should acknowledge the offence the cartoons cause, and explain why, on balance, we think they warrant publication anyway. None of that is censorship. It’s just good judgment. It demonstrates transparency, a willingness to be accountable, and a desire to help and serve our readers — and it would earn us trust.

I believe that in our discussions to date, we’ve gotten ourselves derailed by the censorship issue. I know that some people believe that the Wikimedia Foundation is intending to coercively intervene into the projects, in effect overruling the judgment of the editorial community. I don’t see it that way, I regret that others do, and I dislike the ‘authoritarian parent / rebellious adolescent’ dynamic we seem to be having trouble resisting.

Wikipedia is not censored. It should never be censored. That doesn’t relieve us of the obligation to be thoughtful and responsible.

So: what needs to happen?

We need to have a discussion about how to responsibly handle objectionable imagery. That discussion doesn’t need to happen with the Wikimedia Foundation (or at least, not solely with the Wikimedia Foundation). The projects should be talking internally about how to avoid unnecessarily surprising and offending readers, without compromising any of our core values.

Those community members who are acting like provocateurs and agitators need to stop. Demonizing and stereotyping people we disagree with pushes everyone into extremist positions and makes a good outcome much less likely. We need to look for common ground and talk calmly and thoughtfully with each other, staying rooted in our shared purpose. Some editors have been doing that throughout our discussions: I am seriously grateful to those people, and I wish others would follow their example.

“Wikipedia is not censored” is true. And, we need to stop using it as a conversation killer. It’s the beginning of the conversation, not the end of it.

We need to set aside anxieties about who’s in charge, and quit fighting with each other. We need to be aware of who’s not at the table. We need to bring in new voices and new perspectives that are currently lacking, and really listen to them. Those community members who’ve been afraid to talk need to speak up, and those who’ve been driven away need to come back.

The purpose of this post is to call for that responsible engagement.

Like I said at the top of this post, my purpose in writing this is not to talk about the referendum results or the image hiding feature: for that, I’ll be talking in more official venues.

The Wikimedia Foundation Board of Trustees met in San Francisco a few weeks ago, and had a long and serious discussion about controversial content in the Wikimedia projects. (Why? Because we’re the only major site that doesn’t treat controversial material –e.g., sexually-explicit imagery, violent imagery, culturally offensive imagery– differently from everything else. The Board wanted –in effect– to probe into whether that was helping or hurting our effectiveness at fulfilling our mission.)

Out of that agenda item, we found ourselves talking about what it looks like when change is handled well at Wikimedia, what good leadership looks like in our context, and what patterns we can see in work that’s been done to date.

I found that fascinating, so I’ve done some further thinking since the meeting. The purpose of this post is to document some good patterns of leadership and change-making that I’ve observed at Wikimedia.

A couple of quick caveats: for this post, I’ve picked three little case studies of successful change at Wikimedia. I’m defining successful change here as ‘change that stuck’ – not as ‘change that led to a desirable outcome.’ (I think all three outcomes were good, but that’s moot for the purposes of this post. What I’m aiming to do here is extract patterns of effective process.) Please note also that I picked these examples quickly, without a set of criteria – my goal was just to pick a few examples I’m familiar with, and could therefore easily analyze. It’s the patterns that matter, not so much the examples.

That said: here are three case studies of successful change at Wikimedia.

  • The Board’s statement on biographies of living people. Policies regarding biographies had been a topic of concern among experienced Wikipedians for years, mainly because there is real potential for people to be damaged when the Wikipedia article about them is biased, vandalized or inaccurate, and because our experience shows us that articles about non-famous people are particularly vulnerable to skew or error, because they aren’t read and edited by enough people. And, that potential for damage –particularly to the non-famous– grows along with Wikipedia’s popularity. In April 2009, the Board of Trustees held a discussion about BLPs, and then issued a statement which essentially reflected best practices that had been developed by the Wikipedia community, and recommended their consistent adoption. The Board statement was taken seriously: it has been translated into 18 languages, discussed throughout the editing community, and cited and used as policies and practices evolve.

  • The strategy project of 2009-10. Almost 10 years after Wikipedia was founded, the Board and I felt it was time to step back and assess: what are we doing well, and where do we want to focus our efforts going forward? So in spring 2009, the Wikimedia Board of Trustees asked me to launch a collaborative, transparent, participatory strategy development project, designed to create a five-year plan for the Wikimedia movement. Over the next year, more than 1,000 people participated in the project, in more than 50 languages. The resultant plan is housed on the strategy wiki here, and a summary version will be published this winter. You can never really tell the quality of strategy until it’s implemented (and sometimes not even then), but the project itself has accomplished what it set out to do.

  • The license migration of May 2009. When I joined Wikimedia this process was already underway, so I observed first-hand only the last half of it. But it was lovely to watch. Essentially: some very smart and experienced people in leadership positions at Wikimedia decided it made sense to switch from the GFDL to CC-BY-SA. But, they didn’t themselves have the moral or legal right to make the switch – it needed to be made by the writers of the Wikimedia projects, who had originally released their work under the GFDL. So, the people who wanted the switch launched a long campaign to 1) negotiate a license migration process that Richard Stallman (creator of the GFDL and a hero of the free software movement) would be able to support, and 2) explain to the Wikimedia community why they thought the license migration made sense. Then, the Wikimedia Board endorsed the migration, and held a referendum. It passed with very little opposition, and the switch was made.

Here are nine patterns I think we can extract from those examples:

  1. The person/people leading the change didn’t wait for it to happen naturally – they stepped up and took responsibility for making it happen. The strategy project grew out of a conversation between then-Board Chair Michael Snow and me, because we felt that Wikimedia needed a coherent plan. The BLP statement was started by me and the Board, because we believed that, as Wikipedia grew more popular, consistent policy in this area was becoming essential. The license migration was started by Jimmy Wales, Erik Moeller and others because they wanted it to be much easier for people to reuse Wikimedia content. In all these instances, someone identified a change they thought should be made, and designed and executed a process aimed at creating that change.
  2. A single person didn’t make the change themselves. A group of people worked together to make it happen. More than a thousand people worked on the strategy project. Probably hundreds have contributed (over several years) to tightening up BLP policies and practices. I’m guessing dozens of people contributed to the license migration. The lesson here is that in our context, lasting change can’t be produced by a single person.
  3. Early in the process, somebody put serious energy towards achieving a global/meta understanding of the issue, from many different perspectives. It might be worth pointing out that this is not something we normally do: in order to do amazing work, Random Editor X doesn’t need to understand the global whole; he or she can work quietly, excellently, pretty much alone. But in order to make change that involves multiple constituencies, the person doing it needs to understand the perspectives of everyone affected by that change.
  4. The process was carefully designed to ask the right people the right questions at the right time. The license migration was an exemplar here: The people designing the process quite rightly understood that there was no point in asking editors’ opinions about something many of them probably didn’t understand. On the other hand, the change couldn’t be made without the approval of editors. So, an education campaign was designed that gave editors access to information about the proposed migration from multiple sources and perspectives, prior to the vote.
  5. A person or a group of people dedicated lots of hours towards figuring out what should happen, and making it happen. In each case here, lots of people did lots of real work: researching, synthesizing, analyzing, facilitating, imagining, anticipating, planning, communicating.
  6. The work was done mostly in public and was made as visible as possible, in an attempt to bolster trust and understanding among non-participants. This is fundamental. We knew for example that the strategy project couldn’t succeed if it happened behind closed doors. Again and again throughout the process, Eugene Eric Kim resisted people’s attempts to move the work to private spaces, because he knew it was critical for acceptance that the work be observable.
  7. Some discussion happened in private, inside a small group of people who trust each other and can work easily together. That’s uncomfortable to say, because transparency and openness are core values for us, and anything that contradicts them feels wrong. But it’s true: people need safe spaces to kick around notions and test their own assumptions. I know, for example, that at the beginning of the Board’s BLP conversations, I had all kinds of ideas about ‘the problem of BLPs’ that turned out to be flat-out wrong. I needed to feel free to air my bad ideas, and get them poked at and refuted by people I could trust, before I could start to make any progress thinking about the issue. Similarly, the Board exchanged more than 300 e-mails about controversial content on its private mailing list before it felt comfortable enough to frame the issue in a resolution that would be published. That private kicking around needs to happen so that people can test and accelerate and evolve their own thinking.
  8. People put their own credibility on the line, endorsing the change and trying to persuade others to believe in it. In a decentralized movement, there’s a strong gravitational pull towards the status quo, and whenever anyone tries to make change, they’re in effect saying to hundreds or thousands of people “Hey! Look over here! Something needs to happen, and I know what it is.” That’s a risky thing to do, because they might be perceived in a bunch of negative ways – as naive or overreacting, as wrong or stupid or presumptuous, or even as insincere – pretending to want to help, but really motivated by inappropriate self-interest. Putting yourself on the line for something you believe in, in the face of suspicion or apathy, is brave. And it’s critical.
  9. Most people involved –either as participants or observers– wanted more than anything else to advance the Wikimedia mission, and they trusted that the others involved wanted the same thing. This is critical too. I have sometimes despaired at the strength of our default to the status quo: it is very, very hard to get things done in our context. But I am always reassured by the intelligence of Wikimedia community members, and by their dedication to our shared mission. I believe that if everyone’s aligned in wanting to achieve the mission, that’s our essential foundation for making good decisions.

Like I said earlier — these are just examples I’ve seen or been involved in personally. I’d be very interested to hear other examples of successful change at Wikimedia, plus observations & thinking about patterns we can extract from them.

Tonight I went to see historian Timothy Garton Ash talk with his friend Tobias Wolff at Stanford. The occasion was the publication of Timothy’s newest book, a collection of essays and reportage loosely built around the idea that “facts are subversive.” Timothy’s premise seems to be –roughly, loosely– that people in power are often trying to construct narratives in support of a particular economic, political or cultural agenda, and that facts –even very small ones– can sometimes trip that up.

One thing they talked about was honesty in memoirs — for example, Mary McCarthy’s 1957 autobiography Memories of a Catholic Girlhood, in which McCarthy disarmingly confesses that “the temptation to invent has been very strong,” and “there are cases when I am not sure myself whether I am making something up.” And about George Orwell’s Homage to Catalonia, in which Orwell wrote:

I have tried to write objectively about the Barcelona fighting, though, obviously, no one can be completely objective on a question of this kind. One is practically obliged to take sides, and it must be clear enough which side I am on. Again, I must inevitably have made mistakes of fact, not only here but in other parts of this narrative. It is very difficult to write accurately about the Spanish war, because of the lack of non-propagandist documents. I warn everyone against my bias, and I warn everyone against my mistakes. Still, I have done my best to be honest. (1)

This brought into focus for me something I’ve long half-recognized — both in my own experiences of reading Wikipedia, and the stories people tell me about how they use it themselves. Article after article after article on Wikipedia is studded with warnings to the reader. “This article needs references that appear in reliable third-party sources.” “This article needs attention from an expert on the subject.” “This article may be too technical for most readers to understand.”  On this page, you can see 24 common warning notices — and there are many, many more.

And I think that’s one of the reasons people trust Wikipedia, and why some feel such fondness for it. Wikipedia contains mistakes and vandalism: it is sometimes wrong. But people know they can trust it not to be aiming to manipulate them — to sell them something, either a product or a position. Wikipedia is just aiming to tell people the truth, and it’s refreshingly honest about its own limitations.

Tobias Wolff said tonight that sometimes such disclaimers are used manipulatively, as corroborating detail to add verisimilitude to text that might otherwise be unpersuasive. I think that’s true. But in the case of Wikipedia, which is written by multitudes, disclaimers are added to pages by honest editors who are trying to help. They may not themselves be able to fix an article, but at the very least, they want to help readers know what they’re getting into. I like that.

(1) I looked that up on Google Books when I got home. Yay, Google Books!

I read a book recently called Authentic Conversations, which is essentially an argument for honesty at work. That may not sound too radical, but it actually is. I’ve managed people for more than a decade, and the book made me think about how much conventional management theory and training is designed to replace authenticity with calculation, and how damaging that can be.

The book opens with a great story, in which the publisher of a struggling newspaper is doing a newsroom walkaround. [1]

It’s tough times for newspapers, so his staff ask him lots of anxious questions — is the company okay, what’s he doing about the advertising slump, has he figured out the pension issue? He talks confidently about how things are going to be okay. There’s a plan. The board is optimistic. And so forth. The authors (who were with the publisher that day, presumably starting a consulting engagement) say he clearly felt he was doing good work – creating an atmosphere of calm and confidence, so that his staff could focus on doing their jobs well.

And I have to say, I have definitely been there. I’ve never lied to anyone who works for me, but I worked for a long time in a troubled industry, and I certainly expressed optimism more strongly than I felt it, many many many times, and for the same reason the poor publisher was probably doing it.

The twist to the story is that back in the guy’s office, the consultants rip into him and tell him what he did was horrible. They persuade him to call a special meeting and tell his staff he’d been lying — that the company is indeed in trouble, and that neither he nor anyone can give them the reassurance they want. And that he’s not their father, and isn’t responsible for their security or for their happiness.

The story ends triumphantly, with applause all around.

I’d be surprised if things actually played out that way, because I don’t think that people necessarily value truth that much. But I do think that even when people don’t want the truth, or aren’t comforted by it, they deserve it.

The authors go on to decry what they call “speaking for effect.” Which again struck me as pretty radical. As a young manager, I was trained –over and over again, explicitly and by modelled behaviour– to carefully manage my words and tone in order to create the response I wanted.

  • A mentor of mine was well-known for using silence to increase his authority. In big meetings, he’d be perfectly watchful, and would say nothing.  Throughout the meeting, the other participants would get more and more nervous, wondering what he was thinking. They’d start second-guessing themselves and poking holes in their own arguments. Eventually they’d start actively soliciting his opinion, and by the time he finally spoke, whatever he said carried enormous weight. [2]
  • Two friends of mine, who were also friends of each other and who ran competitive TV shows inside our organization, used to stage yelling matches in front of their newsrooms in which they’d argue over whose show warranted more resources from the shared services pool (like edit suite time or PR support). They did it so their teams would feel valued and defended, and afterwards they’d go out for beer.
  • I had a boss who was famous for his terrible temper. He’d shout, hang up on people, send all-caps e-mails, and storm around the office slamming doors and throwing things. But his capacity for anger –and his reputation for it– was mostly calculated — he’d slam down the phone and start laughing.
  • A colleague was proud of her ability to shame her staff into doing better work. Her magic words, she told me, were “I am disappointed in you.” I once watched her role-play a performance assessment, and I found her acting ability pretty remarkable. She’d sigh, put down her pencil, make prolonged eye contact, and say something like “Jim. You’re really letting down the team.”
  • And then there’s a very common use of speaking for effect: the deliberate expression of trivial agreement. This is particularly done, I think, in big, old companies where responsibility is diffused and group buy-in is critical. In those contexts, expressing trivial agreement (“sounds interesting!”, “good point!”, “great feedback!”) is the small coin of the realm. If you do it well, it costs you nothing, wins you allies, and puts money in the favour bank.

I’m not trying to argue that all these tactics are necessarily bad. It’s obviously true that managers need to be in control of their emotions, and need to be conscious of their effects on others. Undisciplined and reckless bosses can cause all kinds of problems.

But I think a little calculation goes a long way. And I also think there’s a cost, which sometimes goes unnoticed. People who are very studied, whose words and responses are calculated for effect rather than spontaneous and natural, are behaving inauthentically. To a degree, they’re treating other people as means to an end rather than as human beings, and their behaviour also suggests that their minds are already made up: they are not actually open to new information; they’ve figured out the correct end state, and the only work that remains is persuading you to go there. That means they run the risk that the people around them will learn, over time, to distrust them. It also means they miss the opportunity to engage authentically — to have real conversations, to stretch themselves, to learn.

[1] I don’t have the book with me, so I may have butchered specifics a little. But the gist of the story’s accurate.
[2] Warning to women who might consider trying this tactic: it doesn’t work for women. Truly. I liked a lot of things about that guy, and I tried modelling my own behaviour on his for a while, but it didn’t take long to realize that a silent woman is perceived totally differently from a silent man. Suffice to say that a silent woman is easily mistaken for a person without authority, regardless of how much she may actually have. Too bad :-(