Archives for posts with tag: strategy

I know a lot of people who’re starting up new nonprofits, and most don’t have any prior experience with fundraising. That was me, back in 2007, when I took over the Wikimedia Foundation. And so, the purpose of this post is to share some of what I learned over the past eight years, both from my own experience and from talking with other EDs and with grantmakers. I’m focusing on restricted grants here because they’re the most obvious and common funding source for nonprofits, especially in their early stages of development.

Restricted grants can be great. Grantmaking institutions fund work that’s socially important, that’s coming out of organizations that may have no other access to funding, and that is often risky or experimental. They take chances on people and organizations with good ideas, who may not yet have a track record. That’s necessary and important.

But restricted grants also pose some specific problems for the organizations seeking them. This is well understood inside nonprofitland, but isn’t immediately obvious to people who’re new to it.

Here are the five main problems with restricted grants.

Restricted grants can be administratively burdensome. At the WMF, we actively sought out restricted grants for about two years, and afterwards accepted them only rarely. We had two rules of thumb: 1) We would only seek restricted grants from organizations we knew well and trusted to be good partners with us, and 2) We would only seek restricted grants from organizations that were roughly our size (by staff headcount) or smaller. Why? Because restricted grants can be a lot of work, particularly if the two organizations aren’t well aligned.

Big institutions have a big capacity to generate process: forms to fill out, procedures to follow, hoops to jump through. They have lots of staff time for meetings and calls and email exchanges. They operate at a slower pace than smaller orgs, and their processes are often inflexible. People who work at grantmaking institutions have a responsibility to be careful with their organization’s money, and want to feel like they’re adding value to the work the nonprofit is doing. Too often, this results in nonprofits feeling burdened by expensive process as they procure and report on grants: time that you want to spend achieving your mission instead risks getting eaten up by grantmakers’ administrative requirements.

Restricted grants risk overwriting the nonprofit’s priorities with the grantmakers’ priorities. At the WMF, we didn’t accept grants for things we weren’t planning to do anyway. Every year we developed our plan, and then we would sometimes seek funding for specific components of it from funders we trusted. We were happy to get those funders’ input on our priorities and our plans for executing them. But we weren’t interested in advancing grantmakers’ goals, except insofar as they overlapped with ours.

Too often, especially with young or small nonprofits, I see the opposite.

If an organization is cash-strapped, all money looks good. But it’s not. Here’s a crude example. Let’s say the WMF knows it needs to focus its energy on mobile, and a funder is interested in creating physical spaces for Wikipedians to get together F2F for editing parties. In that context, taking the funder’s money to set up editing cafes would be a distraction from the mobile work the WMF needs to be doing. An organization’s capacity and energies are always limited, and even grants that fully fund a new activity necessarily draw on executive and managerial attention, as well as the organization’s support functions (human resources, accounting, admin, legal, PR). If what a restricted grant funds isn’t a near-perfect fit with what the organization hopes to accomplish regardless of the funding, you risk your organization getting pulled off-track.

Restricted grants pull focus from core work. Most grantmakers want their money to accomplish something new. They’re inclined to see their grants as seed money, funding experiments and new activity. Most successful nonprofits, though, have important core work that needs to get done. At the WMF, for example, that core work was the maintenance and continued availability of Wikipedia, the website, which meant stuff like hosting costs, costs of the Ops team, site security work and performance optimization, and lawyers to defend against censorship.

Because restricted grants are often aimed at funding new activity, nonprofits that depend on them are incentivized to continually launch new activities, and to abandon or only weakly support the ones that already exist. They develop a bias towards fragmentation, churn and divergence, at the expense of focus and excellence. An organization that funds itself solely or mainly through restricted grants risks starving its core.

Restricted grants pull the attention of the executive director. I am constantly recommending this excellent article by the nonprofit strategy consultancy Bridgespan, published in the Stanford Social Innovation Review. Its point is that the most effective and fastest-growing nonprofits focus their fundraising efforts on a single type of funder (e.g., crowdfunding, or foundations, or major donors). That’s counter-intuitive, because most people reflexively assume that diversification = good: stable, low-risk, prudent. Those people, though, are wrong. What works for, say, retirement savings is not the same as what works for nonprofit revenue strategy.

Why? Because organizations need to focus: they can’t be good at everything, and that’s as true when it comes to fundraising as it is with everything else. It’s also true for the executive director. An executive director whose organization is dependent on restricted grants will find themselves focused on grantmaking institutions, which generally means attending conferences, serving on juries and publicly positioning themselves as a thought leader in the space in which they work. That’s not necessarily the best use of the ED’s time.

Restricted grants are typically more waterfall than agile. Here’s how grants typically work. The nonprofit writes up a proposal that presumes it understands what it wants to do and how it will do it. It includes a goal statement, a scope statement, usually some kind of theory of change, a set of deliverables, a budget, a timeline, and measures of success. There is some back-and-forth with the funder, which may take a few weeks or many months, and once the proposal is approved, funding is released. By the time the project starts, it may be as much as an entire year since it was first conceived. As the plan is executed, the organization will learn new things, and it’s often not clear how what’s been learned can or should affect the plan, or who has the ability to make or approve changes to it.

This is how we used to do software development, and in a worst-case scenario it led to death-march projects building products that nobody ended up wanting. That’s why we shifted from waterfall to agile: because you get a better, more-wanted product, faster and cheaper. It probably makes sense for grantmaking institutions to adapt their processes similarly, but I’m not aware of any that have yet done so. Nor do I think it would be easy or obvious how to do it.

Upshot: If you’re a new nonprofit considering funding yourself via restricted grants, here’s my advice. Pick your funders carefully. Focus on ones whose goals have a large overlap with your own, and whose processes seem lightweight and smart. Aim to work with people who are willing to trust you, and who are careful with your time. Don’t look to foundations to set your priorities: figure out what you want to do, and then try to find a grantmaker who wants to support it.

This post requires a number of caveats and acknowledgements. They’re at the bottom.

In 2008 I was interviewing a candidate for an engineering position at the Wikimedia Foundation and as we talked I found myself imagining what a terrific impression he would make on donors. He’s so shiny and cheerful and mission-oriented, I found myself thinking — donors will love him!

As soon as I thought it, I had the grace to be embarrassed. And although we ended up hiring the guy, we did it because he seemed like a talented engineer, not because he was charming. I was horrified at myself for a while afterwards anyway, and the whole thing ended up being a bit of a turning point for me, as well as a cautionary story I sometimes tell. Because that was the moment that crystallized for me what’s *actually* wrong with nonprofits.

Preface! I’ve always been irritated by people who assume nonprofitland is self-evidently suckier than forprofitland. I’m particularly irritated by people who say that nonprofits “should be more businesslike,” with businesslike as a kind of confused stand-in for “better.” That just seems dumb to me — I feel like it’s obvious that nonprofits function in a specific context, including challenges unique to the sector, and that solutions aimed at increasing our effectiveness need to be designed to respond specifically to those actual, real circumstances. That’s what this post is about: my goal is to describe a serious problem, and point to where I believe we’re beginning to see solutions emerge.

Here it is.

Every nonprofit has two main jobs: you need to do your core work, and you need to make the money to pay for it. In the for-profit sector when you make better products, you make more money — if you make awesome socks, you sell lots of socks. Paying attention to revenue makes sense in part because revenue functions as a signal for the overall effectiveness of the org: if sales drop, that’s a signal your product may be starting to suck, or that something else is wrong.

Nonprofits also prioritize revenue. But for most it doesn’t actually serve as much of an indicator of overall effectiveness. That’s because donors rarely experience the core mission work first-hand — most people who donate to Médecins Sans Frontières, for example, have never lived in a war zone. That means that most, or often all, the actual experiences a donor has with a nonprofit are related to fundraising, which means that over time many nonprofits have learned that the donating process needs –in and of itself– to provide a satisfying experience for the donor. All sorts of energy is therefore dedicated towards making it exactly that: donors get glossy newsletters of thanks, there are gala dinners, they are elaborately consulted on a variety of issues, and so forth.

By contrast, when I buy socks I do not get a gala dinner. In fact it’s the opposite: the more that sockmakers focus relentlessly and obsessively on sock-making awesomeness, the likelier I am to buy their socks in future. This means that inside most of nonprofitland –and unique to nonprofitland– there’s a structural problem of needing to provide positive experiences for donors that is disconnected from the core work of the organization. This has a variety of unintended effects, all of which undermine effectiveness.

It starts with the ED.

EDs prioritize revenue because a fundamental job of any CEO is to ensure their organization has the money it needs to achieve its goals. That means fundraising is necessarily the top priority for a nonprofit ED. That’s why the head of fundraising normally reports to the ED, and it’s why, I’d say from my observation and reading, the average ED probably dedicates about 70% of his or her energy to fundraising.

Optimizing for fundraising distorts how the ED behaves. To the extent EDs optimize themselves for fundraising, they tend to spend time outside their organization — being interviewed, attending conferences, publicly demonstrating wisdom and thought leadership. An ED must hone his or her self-presentation and diplomatic abilities, even at the expense of other attributes such as decisiveness or single-mindedness, because that’s what donors see and respond to. There’s an obvious opportunity cost as well: spending 70% of your time on fundraising leaves only 30% for everything else. (That’s why, in a different context, Paul Graham argues that start-ups should have only one person designated to handle fundraising: to preserve the bulk of organizational resources for other stuff.)

The second effect: Optimizing for donor experience promotes a general emphasis on appearances rather than realities. Appearing effective rises in importance relative to being effective.

Here’s how the mature nonprofits I know self-present. Everyone is very polite and the offices are quiet. Their reception areas display racks of carefully-designed marketing materials. One I know has gorgeous brushed stainless steel signs attached to its conference room doors, engraved with an exhortation to be silent in the hallways. Typically the staff dress like academics — the women wear interesting jewelry, with the men in shabby suit jackets and corduroys.

By contrast, I noticed in my early days running the WMF that we were quite different. Our staff were young and messy and wore hoodies. They were smart and blunt, sometimes obnoxiously so. The office was often half-deserted because everybody worked all the time, often while travelling or from bed. I’m pretty sure at one point we had a foosball table in the middle of the room, and later there was a karaoke set-up and a Galaga game. What if donors think we’re erratic, undisciplined slobs, I found myself worrying. What if they’ve never met programmers before?

Most nonprofits, it seemed to me, optimized to self-present as competent, sober, and diligent. I think if they optimized to get stuff done, they might look different.

The third effect. Nonprofits are generally conservative in their approach to regulatory compliance, administration, finance and governance practices. (Why? Partly it’s because the core work is complicated: hard to do and hard to measure, so people drift towards stuff that’s simpler. Also, the nonprofit sector is too small to support a diverse array of service providers, and so the services provided by consultants tend to be extremely generic. Boilerplate recommendations on term limits and that kind of thing.) Optimizing for donor experience makes that worse.

Why? It’s easy to describe for donors the core problem a nonprofit is trying to solve, but explaining the work of solving it –and how impact can best be measured– is hard. Far easier to show that the 990 was filed on time, that the org got a clean audit letter, and that the ED’s compensation was determined according to a highly responsible process. And donors seem relatively willing to accept the proposition that administrative effectiveness is a good proxy for overall organizational impact, even though such a proposition is actually pretty weak. A whole industry has developed around this: supporting good compliance and measuring it, as a service for potential donors.

This effect is amplified by the presence of major donors, who are typically wealthy retired business executives.

That’s because major donors like to feel their advice is as useful as their money, and they have decades of experience of people taking their opinions seriously. But they can’t necessarily say much that’s useful about the specifics of helping victims of domestic violence or rehabilitating criminals or protecting endangered gorillas in the Congo. So, many nonprofits create opportunities where they can help. They are put on the investment committee, they are asked to help with the audit firm selection process, their advice is sought about when to launch an endowment campaign. This has the effect of focusing the ED’s attention in those areas — because the ED, of course, wants to make sure the major donor’s experience with the org is a positive one. More unintended consequences: “providing a good donor experience” becomes an unstated job requirement for the head of finance. A great head of nonprofit finance needs to be not just a person who’s financially and administratively competent: he or she also needs to be credible, composed, tactful and likable.

So. A major structural flaw of many nonprofits is that their revenue is decoupled from mission work, which pushes them to focus on providing a positive donor experience often at the expense of doing their core work. That’s bad.

What can we do about it?

I believe the problem is, to some degree, newly solvable. I know that, because we solved it at the Wikimedia Foundation.

Here’s what we did.

From 2008 until late 2009, the WMF played around with various fundraising models. We applied for and got restricted grants, we cultivated major donors, we made business deals that brought in what’s called in nonprofitland “earned income,” and we fundraised online using what we grew to call the many-small-donors model. After two years we determined we’d be able to be successful using any of those methods, and an important study from Bridgespan had persuaded us to pick one. And so we picked many-small-donors, because we felt like it was the revenue model that best aligned with our core mission work.

Today, the WMF makes about 95% of its money from the many-small-donors model — ordinary people from all over the world, giving an average of $25 each.

It’s awesome.

We don’t give board seats in exchange for cash. Foundations’ priorities don’t override our own. We don’t stage fancy donor parties (well, we do stage one a year, but it’s not very fancy), and people who donate lots of money have no more influence than people who donate small amounts — and, importantly, no more influence than Wikipedia editors. Donors very rarely visit the office, and when they do, they don’t get a special dog-and-pony show. I spend practically zero time fundraising. We at the WMF get to focus on our core work of supporting and developing Wikipedia, and when donors talk with us we want to hear what they say, because they are Wikipedia readers. (That matters. I remember in the early days spending time with major donor prospects who didn’t actually use Wikipedia, and their opinions were, unsurprisingly, not very helpful.)

The many-small-donors model wouldn’t work for everyone, mainly because for it to succeed your core work needs to be a product or service that large numbers of people are aware of, understand, and want to support. About half a billion people read Wikipedia, and we get on average 11 cents a year from each one, which is not much. I know a couple of nonprofits that’ve backed away from the many-small-donors model after doing that math. But I think the usefulness of the many-small-donors model, ultimately, will extend far beyond the small number of nonprofits currently funded by it.

Why? People are slowly getting used to the idea of voluntarily giving smallish amounts of money online to support stuff they like — look at Kickstarter and Donors Choose and Indiegogo. These are not self-interested transactions made after a careful evaluation of ‘what’s in it for me’: they’re people funding stuff because they think it’s great. Meanwhile, the online payment processing market is maturing, with an increasing number of providers supporting an increasing number of currencies and countries, and fees are starting to drop. And, note that donations to the WMF have risen steadily every single year (we’ve been named the nonprofit with the fastest growing revenues in the United States, which probably actually means in the world) — even though the WMF’s fundraising is deliberately restrained. Eleven cents per user per year is nowhere near a ceiling, for Wikipedia or for anyone.
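For anyone wanting to run that per-reader math for their own organization, here’s a rough sketch. The only figures taken from this post are Wikipedia’s roughly half a billion readers, the ~11 cents per reader per year, and the ~$25 average gift; the small-nonprofit numbers are invented purely for illustration.

```python
# A back-of-the-envelope check on the many-small-donors model.
# Figures from the post: ~500M readers, ~$0.11/reader/year, ~$25 average gift.
# The "small nonprofit" numbers below are hypothetical.

WIKIPEDIA_READERS = 500_000_000
WIKIPEDIA_YIELD = 0.11      # dollars per reader per year (from the post)
AVERAGE_GIFT = 25.0         # dollars per donation (from the post)

def yield_needed(annual_budget: float, audience_size: int) -> float:
    """Dollars each audience member would need to give, on average, per year."""
    return annual_budget / audience_size

# Implied annual revenue at Wikipedia's scale: roughly $55M.
print(f"Wikipedia-scale revenue: ~${WIKIPEDIA_READERS * WIKIPEDIA_YIELD:,.0f}")

# A hypothetical small nonprofit: a $3M budget and 2M people who know its work.
budget, audience = 3_000_000, 2_000_000
per_person = yield_needed(budget, audience)
gifts_needed = budget / AVERAGE_GIFT

print(f"Needed per person per year: ${per_person:.2f} (vs. Wikipedia's ~$0.11)")
print(f"At a ${AVERAGE_GIFT:.0f} average gift, that's ~{gifts_needed:,.0f} gifts a year")
```

The point of the exercise isn’t precision; it’s that the viability of the model depends on how many people know and value your work, and on how much each of them would plausibly need to give.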

The advent of the internet has given ordinary people access to the means of production, and now they (we) can easily share information with each other on sites like Wikipedia. That’s been playing out for more than a decade, and its effects have included the disintermediation of gatekeepers and middlemen of all types. I think we’re now seeing the same thing happen, more slowly, with the funding of mission-driven work. I think that among other things, we’re going to see the role of foundations and major donors change in surprising ways. And I think the implications of these changes go beyond fundraising itself. For organizations that can cover their costs with the many-small-donors model I believe there’s the potential to heal the disconnect between fundraising and core mission work, in a way that supports nonprofits being, overall, much more effective.

Notes: This post is written from the vantage point of somebody who thinks many nonprofits do good work in difficult circumstances: please read it from that perspective. Lots of people think nonprofits are lazy and inefficient and woolly-minded. That’s sometimes true, but no more so in my experience than at for-profit orgs. The world has no shortage of suck.

I also want to thank some of the people who’ve influenced my thoughts in this area. Although the views expressed here are my own, Erik Moeller and I have talked a ton about this stuff over the past half-dozen years. He was the first person to point out to me the absurdity of overhead ratios, and has written about them extensively and publicly, starting back in 2009. Afterwards, he and I discovered the good work of Dan Pallotta and also the Urban Institute, investigating overhead ratios and explaining why they’re bunk. I’ve also benefited from reading Jim Collins’s monograph Good to Great and the Social Sectors, as well as two books from Michael Edwards: Just Another Emperor? The Myths and Realities of Philanthrocapitalism, and Small Change: Why Business Won’t Save the World. I was helped by a conversation about difficulties facing new nonprofits a few years back at the Aspen Institute, as well as by dozens of less structured conversations with fundraisers including particularly Zack Exley, as well as with my fellow EDs, including ones on whose boards I serve. David Schoonover has done some analysis of U.S. nonprofit funding models that has influenced me, and he and I have talked extensively about challenges facing the nonprofit sector, including this one. The folks at Omidyar have also been helpful, including pointing me towards the very useful Bridgespan study linked above.

Because I’ve been working lately on issues related to grantmaking and Wikimedia movement entities, it might be tempting to assume my arguments here are somehow aimed at informing or influencing those conversations. They’re not. To the extent anything here is useful to those conversations that’s great, but that’s not why I wrote this.

The Wikimedia Foundation Board of Trustees met in San Francisco a few weeks ago, and had a long and serious discussion about controversial content in the Wikimedia projects. (Why? Because we’re the only major site that doesn’t treat controversial material –e.g., sexually-explicit imagery, violent imagery, culturally offensive imagery– differently from everything else. The Board wanted –in effect– to probe into whether that was helping or hurting our effectiveness at fulfilling our mission.)

Out of that agenda item, we found ourselves talking about what it looks like when change is handled well at Wikimedia, what good leadership looks like in our context, and what patterns we can see in work that’s been done to date.

I found that fascinating, so I’ve done some further thinking since the meeting. The purpose of this post is to document some good patterns of leadership and change-making that I’ve observed at Wikimedia.

Couple of quick caveats: For this post, I’ve picked three little case studies of successful change at Wikimedia. I’m defining successful change here as ‘change that stuck’ – not as ‘change that led to a desirable outcome.’ (I think all three of these outcomes were good, but that’s moot for the purposes of this post. What I’m aiming to do here is extract patterns of effective process.) Please note also that I picked these examples quickly, without a set of criteria – my goal was just to pick a few examples I’m familiar with, and could therefore easily analyze. It’s the patterns that matter, not so much the examples.

That said: here are three case studies of successful change at Wikimedia.

  • The Board’s statement on biographies of living people. Policies regarding biographies had been a topic of concern among experienced Wikipedians for years, mainly because there is real potential for people to be damaged when the Wikipedia article about them is biased, vandalized or inaccurate, and because our experience shows us that articles about non-famous people are particularly vulnerable to skew or error, because they aren’t read and edited by enough people. And, that potential for damage –particularly to the non-famous– grows along with Wikipedia’s popularity. In April 2009, the Board of Trustees held a discussion about BLPs, and then issued a statement which essentially reflected best practices that had been developed by the Wikipedia community, and recommended their consistent adoption.  The Board statement was taken seriously: it’s been translated into 18 languages, discussed internally throughout the editing community, and has been cited and used as policies and practices evolve.

  • The strategy project of 2009-10. Almost 10 years after Wikipedia was founded, the Board and I felt like it was time to stop and assess: what are we doing well, and where do we want to focus our efforts going forward? So in spring 2009, the Wikimedia Board of Trustees asked me to launch a collaborative, transparent, participatory strategy development project, designed to create a five-year plan for the Wikimedia movement. Over the next year, more than 1,000 people participated in the project, in more than 50 languages. The resultant plan is housed on the strategy wiki here, and a summary version will be published this winter. You can never really tell the quality of strategy until it’s implemented (and sometimes not even then), but the project itself has accomplished what it set out to do.

  • The license migration of May 2009. When I joined Wikimedia this process was already underway, so I only observed first-hand the last half of it. But it was lovely to watch. Essentially: some very smart and experienced people in leadership positions at Wikimedia decided it made sense to switch from the GFDL to CC-BY-SA. But, they didn’t themselves have the moral or legal right to make the switch – it needed to be made by the writers of the Wikimedia projects, who had originally released their work under the GFDL. So, the people who wanted the switch launched a long campaign to 1) negotiate a license migration process that Richard Stallman (creator of the GFDL and a hero of the free software movement) would be able to support, and 2) explain to the Wikimedia community why they thought the license migration made sense. Then, the Wikimedia board endorsed the migration, and held a referendum. It passed with very little opposition, and the switch was made.

Here are nine patterns I think we can extract from those examples:

  1. The person/people leading the change didn’t wait for it to happen naturally – they stepped up and took responsibility for making it happen. The strategy project grew out of a conversation between then-Board Chair Michael Snow and me, because we felt that Wikimedia needed a coherent plan. The BLP statement was started by me and the Board, because we believed that as Wikipedia grew more popular, consistent policy in this area was becoming essential. The license migration was started by Jimmy Wales, Erik Moeller and others because they wanted it to be much easier for people to reuse Wikimedia content. In all these instances, someone identified a change they thought should be made, and designed and executed a process aimed at creating that change.
  2. No single person made the change by themselves: in each case, a group of people worked together to make it happen. More than a thousand people worked on the strategy project. Probably hundreds have contributed (over several years) to tightening up BLP policies and practices. I’m guessing dozens of people contributed to the license migration. The lesson here is that in our context, lasting change can’t be produced by a single person.
  3. Early in the process, somebody put serious energy towards achieving a global/meta understanding of the issue, from many different perspectives. It might be worth pointing out that this is not something we normally do: in order to do amazing work, Random Editor X doesn’t have any need to understand the global whole; he or she can work quietly, excellently, pretty much alone. But in order to make change that involves multiple constituencies, the person doing it needs to understand the perspectives of everyone implicated by that change.
  4. The process was carefully designed to ask the right people the right questions at the right time. The license migration was an exemplar here: The people designing the process quite rightly understood that there was no point in asking editors’ opinions about something many of them probably didn’t understand. On the other hand, the change couldn’t be made without the approval of editors. So, an education campaign was designed that gave editors access to information about the proposed migration from multiple sources and perspectives, prior to the vote.
  5. A person or a group of people dedicated lots of hours towards figuring out what should happen, and making it happen. In each case here, lots of people did lots of real work: researching, synthesizing, analyzing, facilitating, imagining, anticipating, planning, communicating.
  6. The work was done mostly in public and was made as visible as possible, in an attempt to bolster trust and understanding among non-participants. This is fundamental. We knew for example that the strategy project couldn’t succeed if it happened behind closed doors. Again and again throughout the process, Eugene Eric Kim resisted people’s attempts to move the work to private spaces, because he knew it was critical for acceptance that the work be observable.
  7. Some discussion happened in private, inside a small group of people who trust each other and can work easily together. That’s uncomfortable to say, because transparency and openness are core values for us and anything that contradicts them feels wrong. But it’s true: people need safe spaces to kick around notions and test their own assumptions. I know for example that at the beginning of the Board’s BLP conversations, I had all kinds of ideas about ‘the problem of BLPs’ that turned out to be flat-out wrong. I needed to feel free to air my bad ideas, and get them poked at and refuted by people I could trust, before I could start to make any progress thinking about the issue. Similarly, the Board exchanged more than 300 e-mails about controversial content inside its private mailing list, before it felt comfortable enough to frame the issue up in a resolution that would be published. That private kicking around needs to happen so that people can test and accelerate and evolve their own thinking.
  8. People put their own credibility on the line, endorsing the change and trying to persuade others to believe in it. In a decentralized movement, there’s a strong gravitational pull towards the status quo, and whenever anyone tries to make change, they’re in effect saying to hundreds or thousands of people “Hey! Look over here! Something needs to happen, and I know what it is.” That’s a risky thing to do, because they might be perceived in a bunch of negative ways – as naive or overreacting, as wrong or stupid or presumptuous, or even as insincere – pretending to want to help, but really motivated by inappropriate personal self-interest. Putting yourself on the line for something you believe in, in the face of suspicion or apathy, is brave. And it’s critical.
  9. Most people involved –either as participants or observers– wanted more than anything else to advance the Wikimedia mission, and they trusted that the others involved wanted the same thing. This is critical too. I have sometimes despaired at the strength of our default to the status quo: it is very, very hard to get things done in our context. But I am always reassured by the intelligence of Wikimedia community members, and by their dedication to our shared mission. I believe that if everyone’s aligned in wanting to achieve the mission, that’s our essential foundation for making good decisions.

Like I said earlier — these are just examples I’ve seen or been involved in personally. I’d be very interested to hear other examples of successful change at Wikimedia, plus observations & thinking about patterns we can extract from them.

About a week ago, I started running a little survey asking Wikimedians how we should approach target-setting for the next five years.

I did it because next month Wikimedia will finalize the targets that’ll guide our work for the next five years, and I wanted to gather some quick feedback on the thinking that’s been done on that, to date.  The survey’s close to wrapping up now, and the results thus far are terrific: there appears to be good consensus on what we want to measure, as well as on our general approach.

More detail below!  But first, some general background.

In July 2009, the Wikimedia Foundation kicked off a massive strategy development project, which is starting to wrap up now. [1] The one major set of decisions that remains to be finalized is how we will measure progress towards our goals.

The draft goals, measures of success and targets that have been developed via the strategy project are here. They were created over the past several months by Wikimedia community members, Bridgespan staff, and Wikimedia Foundation staff (thank you all) – and in my opinion, they’re pretty good.  They focus on what’s important, and they do a reasonably good job of figuring out how to measure things that don’t always lend themselves to easy measurement.

Before finalizing the targets and taking them to the Wikimedia Board of Trustees for approval, I wanted to gather some additional input, so I hacked together a quick, imperfect little survey.   (You can read it –and fill it out if you want– here.) The purpose of this post is just to share the results — I will probably write more about the targets themselves later.

First some methodology: I made the survey in Google Docs, and sent identical versions to i) the Wikimedia Board, ii) the Wikimedia staff, and iii) the “foundation-l” mailing list (a public list on which anyone can talk about the Wikimedia Foundation and Wikimedia projects), the Wikimedia Foundation Advisory Board list, and the “internal-l” mailing list (a private list intended for Wikimedia chapter representatives and Wikimedia Foundation board and staff). Then –for the purposes of this post– I aggregated all three sets of results, which total about 120 individual responses thus far.

If I’d been more serious I’d have used LimeSurvey, which is a better survey tool than Google Docs — but this is really just meant to be a structured solicitation of input, rather than a proper quantitative study.  For one thing, the “community” results reflect only a tiny fraction of active editors — those who read English, who are on Wikimedia’s mailing lists or are connected with people who are, and who self-selected to answer the survey.  So, please resist the temptation to over-interpret whatever numbers I’ve given here.

In general, I was happy to find that the survey surfaced lots of consensus.  A comfortable majority agrees with all of the following:

  • Wikimedia’s goals should be “ambitious but possible.” (Other less-popular options were: “definitely attainable, but not necessarily easily,” “audacious and probably not attainable, but inspiring,” and “fairly easily attainable.”)
  • We agree that the purpose of setting goals is “to create a shared understanding and alignment about what we’re trying to do, publicly and with everyone.” (Other options: “to create an audacious target that everyone can get excited about and rally behind,” and “to create accountability.”)
  • In setting goals, we believe “perfection is the enemy of the good: I would rather see us using imperfect measures than no measures at all.” (About 15% of respondents felt otherwise, believing that “imperfect measures are a waste of time and energy.”)
  • The Wikimedia Foundation’s goals should be dependent on efforts by both the Wikimedia Foundation and the Wikimedia community, not by the Foundation alone. (18% of respondents felt otherwise, that the targets should be “entirely within the control of the Wikimedia Foundation to influence.”)
  • If we exceed our goals, practically everyone will be “thrilled.” (About five percent of respondents felt otherwise, saying that they would be “disappointed: that would tell me our goals weren’t sufficiently challenging.”)
  • If we fail to meet our goals, about three quarters of respondents will feel “fine, because goals are meant to aspire/align: if we do good work but don’t meet them, that’s okay.” Interestingly, this is one of the few areas of the survey where there was a real division between the staff of the Wikimedia Foundation and other respondents. Only 17% of staff agreed they’d be okay with missing our targets. I think this is probably good, because it suggests that the staff feel a high sense of personal responsibility for their work.
  • Almost everyone agrees that “goal-setting for the Wikimedia Foundation is difficult. We should set goals now, but many measures and targets will be provisional, and we’ll definitely need to REFINE them over the next five years, possibly radically.” (Runner-up response: “we can set good goals, measures and targets now, and we should NOT need to change them much during the next five years.” And a very small number felt that we should refrain from setting targets for “things we’re still uncertain about,” and instead restrict ourselves to areas that are “straightforward.”)
  • The global unique visitors target is felt by most to be “attainable if the staff and community work together to achieve it.” (About 20% of respondents felt the target might “even happen without any particular intervention.”)

I wanted to get a sense of what measures people felt were most important. They’re below, in descending order of importance. (The number is the percentage of total respondents who characterized the measure as either “critical” or “important.” Other options were “somewhat important,” “not important,” and “don’t know/not sure.”)

It’s probably worth noting that consensus among community members, the board and the staff was very high.  For more than half the measures, the percentage of respondents rating the measure as “important” or “critical” varied by less than 10% among the different groups, and for the remainder, it varied by less than 20%.

Measure / Avg. % rating it “critical” or “important”
Retention of active editors 84
Number of active editors 83
Site performance in different geographies 80
Demographics of active editors 80
Uptime of all key services 78
Financial stability 74
Global unique visitors 66
Secure off-site copies 65
Number of articles/objects/resources 65
Regular snapshots/archives 60
Thriving research community 54
Offline reach 53
Reader-submitted quality assessments 41
Expert article assessments 40
Community-originated gadgets/tools/extensions 22
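For what it’s worth, here’s a rough sketch of how figures like the ones above could be tallied from raw responses. The data layout is hypothetical (the real survey lived in Google Docs), but the calculation follows the description above: each measure’s number is the percentage of respondents rating it “critical” or “important,” and the spread mirrors the group-to-group variation mentioned earlier.

```python
# A sketch of tallying the survey results. The `responses` layout is
# hypothetical; the calculation matches the post's description: each measure's
# figure is the share of respondents rating it "critical" or "important".
from collections import defaultdict

responses = [
    # One entry per respondent per measure, e.g.:
    # {"group": "staff", "measure": "Number of active editors", "rating": "critical"},
]

def pct_critical_or_important(rows):
    """Share of rows rated 'critical' or 'important', as a 0-100 percentage."""
    if not rows:
        return 0.0
    hits = sum(1 for r in rows if r["rating"] in ("critical", "important"))
    return 100.0 * hits / len(rows)

by_measure = defaultdict(list)
by_group = defaultdict(lambda: defaultdict(list))
for r in responses:
    by_measure[r["measure"]].append(r)
    by_group[r["measure"]][r["group"]].append(r)

# Print measures in descending order of support, with the spread
# (max minus min percentage) across the board / staff / community groups.
for measure, rows in sorted(by_measure.items(),
                            key=lambda kv: pct_critical_or_important(kv[1]),
                            reverse=True):
    overall = pct_critical_or_important(rows)
    group_pcts = [pct_critical_or_important(g) for g in by_group[measure].values()]
    spread = max(group_pcts) - min(group_pcts) if group_pcts else 0.0
    print(f"{measure}: {overall:.0f}% (spread among groups: {spread:.0f} points)")
```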

The survey’s still accepting input — if you’re interested you’ve got until roughly 7PM UTC, Wednesday August 18, to fill it out.

————————————————————————————–
[1]

I launched the Wikimedia strategy project at the request of the Wikimedia Foundation Board of Trustees, and it was led by Eugene Eric Kim of Blue Oxen Associates, a consulting firm with a special focus on enabling collaborative process. Eugene worked with Philippe Beaudette, a longtime Wikipedian and online facilitator for the project, and The Bridgespan Group, a non-profit strategy consulting firm that provided data and analysis for us. The premise of the project was that the Wikimedia movement had achieved amazing things (the number five most-used site in the world! 375 million visitors monthly!), and it was now time to reflect on where we were making good progress towards fulfilling the mission, and where we weren’t. With the goal of course-correcting where we weren’t doing well.

To come up with a good plan, we wanted to stay true to our core and central premise: that open, mass collaboration is the most effective method for achieving high-quality decision-making. So we designed the process to be transparent, participatory and collaborative. Over the course of the project, more than a thousand volunteers worked together in 50+ languages — in teams and as individuals, mostly in public on the strategy wiki, but supplemented by IRC meetings, Skype calls, e-mail exchanges, and face-to-face conversations (e.g., meetings were held in Berlin, Paris, Buenos Aires, San Francisco, Boston and Gdansk).

The project’s now entering its final phase, and you can see the near-final results here on the strategy wiki.  What remains to be done is the finalization of the measures of success, which will happen over the next six or so weeks. At that point, there will be some final wordsmithing, and the result will be brought to the Wikimedia Board of Trustees for approval.

I will probably write about the strategy project at a later date, because it is super-interesting. (Meanwhile, if you’re interested, you can read a little about it here in a story that Noam Cohen wrote from Wikimania 2010 in Gdansk.)