Learning from The Wealth of the Commons


Paper by Mae Shaw in the special issue of the Community Development Journal on “Commons Sense: New thinking about an old idea”: “We are poised between an old world that no longer works and a new one struggling to be born. Surrounded by centralized hierarchies on the one hand and predatory markets on the other, people around the world are searching for alternatives.”

This is the starting point for what David Bollier and Silke Helfrich, the editors of The Wealth of the Commons: A World Beyond Market and State (2012), describe as ‘an extended global exercise in commoning’ – Peter Linebaugh’s term for ‘the self-determination of commoners in managing their shared resources’ (p. 396). In other words, the book itself is offered as an active process of ‘making the path’ by presenting ‘some of the most promising new paths now being developed’. It is intended to be ‘rigorous enough for academic readers yet accessible enough for the layperson’. In this, it more than achieves its ambitions. The Wealth of the Commons is an edited collection of seventy-three short papers from thirty countries: ‘a collective venture of sharing, collaboration, negotiation and creative production among some of the most diverse commons scholars, activists and project leaders imaginable’. This rich and diverse source of knowledge and inspiration could be described as ‘polyvocal’ in the sense that it presents a multiplicity of voices improvising around a single theme – sometimes in harmony, sometimes discordant, but always interesting.

The book brings together an impressive collection of contributors from different places, backgrounds and interests to explore the meaning of the commons and to advocate for it ‘as a new paradigm’ for the organization of public and private life. In this sense, it represents a project rather than an analysis: essentially espousing a cause with imperative urgency. This is not necessarily a weakness, but it does raise specific questions about what is included and what is absent or marginalized in this particular selection of accounts, and what might be lost along the way. What counts as ‘commons’ or ‘the commons’ or ‘the common’ (all used in the text) is a subject of discussion and contestation here, as elsewhere. The effort to ‘name and claim’ is an integral aspect of the project. As Jeffrey et al. (2012, p. 10) comment, ‘the struggle for the commons has never been without its own politics of separation and division’, raising valid questions about the prospects for a coherent paradigm at this stage. At the very least, however, this rich resource may prove seminal in countering those dominant paradigms of growth and development in which structural and cultural adjustments ‘serve as a justifying rhetoric for continuity in plunder’ of common resources (Mattei, p. 41).

The contributions fall into three general categories: those offering a critique of existing ‘increasingly dysfunctional’ market/state relations; those that ‘enlarge theoretical understandings of the commons as a way to change the world’; and those that ‘describe innovative working projects which demonstrate the feasibility’ of the commons.

What counts as the commons?

As acknowledged in many of the chapters, defining the commons in any consistent and convincing way can be deeply problematic. Like ‘community’ itself, it can be regarded to some degree as an ideological portmanteau which contains a variety of meanings. Nonetheless, there is a general commitment to confront such difficulties in an open way, and to be as clear as possible about what the commons might represent, what it might replace, and what it should not be confused with. Put most simply, the commons refers to what human beings share in nature and society that should be cherished for all now and for the future: ‘the term … provides the binding element between the natural and the social or cultural worlds’ (Weber, p. 11). Its profound challenge to the logic of competitive capitalist relations, therefore, is to ‘validate new schemes of human relations, production and governance … commonance’ (Bollier and Helfrich, p. xiv) that penetrate all levels of public and private life. This idea is explored in detail in many of the contributions.

The commons, then, claims to represent a philosophical stance, an intellectual framework, a moral and economic imperative, a set of organizing principles and commitments, a movement, and an emerging ‘global community of practice’ (O’Connell, 2012). It has also developed an increasingly shared discourse, which is designed to unsettle institutionalized norms and values and to reclaim or remake the language of co-operation, fairness and social justice. As the editorial points out, the language of capitalism is one that becomes ‘encoded into the epistemology of our language and internalized by people’. In community development, and elsewhere, we have become sensitized to the way in which progressive language can be appropriated to support individualistic market values. When empowerment can mean facilitated asset-stripping of local communities, and solidarity targets can be set by government (e.g. Scottish Government, 2007), then we must be wary about assuming proprietorial closure on the term ‘commons’ itself.

As Federici, in a particularly persuasive chapter, warns: ‘… capital is learning about the virtues of the common good’ (p. 46). She argues that, ‘since at least the 1990s, the language of the commons has been appropriated … by the World Bank and put at the service of privatization’. For this reason, it is important to think of the commons as a ‘quality of relations, a principle of co-operation and of responsibility to each other and to the earth, the forests, the seas, the animals’ (p. 50). This produces a different operational logic, which is explored in depth across the collection.

Deficiencies in the commons framework

To advance the commons as ‘a new paradigm’, it is necessary to locate it historically and to show the ways in which it has been colonized and compromised, as some of these pieces do. It may seem ironic that the meaning of ‘the commons’ to many people in the UK, for example, is that bear pit of parliamentary business, the House of Commons, in which adversarial rather than consensual politics is the order of the day. Reclaiming such foundational ideas is a lengthy and demanding process, as David Graeber shows in The Democracy Project, his recent account of the Occupy Movement, which for a time commanded considerable international interest. Drawing on Linebaugh, Federici contends that ‘commons have been the thread that has connected the history of the class struggle into our time’.

It is unfortunate, therefore, that the volume fails to address the relationship between organized labour and the commons, as highlighted in the introduction, because there is a distinctive contribution to be made here. As Harvey (2012) argues, decentralization and autonomy are also primary vehicles for reinforcing neoliberal class strategies of social reproduction and producing greater inequality. For example, in urban environments in particular, ‘the better the common qualities a social group creates, the more likely it is to be raided and appropriated by private profit-maximising interests’ leading inexorably to economic cleansing of whole areas. Gentrification and tourism are the clearest examples. The salience of class in general is an underdeveloped line of argument. If this authoritative collection is anything to go by, this may be a significant deficiency in the commons framework.

Without historical continuity – honouring the contribution of those ‘commoners’ who came before in various guises and places – there is a danger of falling into the contemporary trap of regarding ‘innovation’ as a way of separating us from our past. History in the past as well as in the making is as essential a part of our commons as is the present and the future – material, temporal and spiritual….”

New Research Suggests Collaborative Approaches Produce Better Plans


JPER: “In a previous blog post (see http://goo.gl/pAjyWE), we discussed how many of the most influential articles in the Journal of Planning Education and Research (and in peer publications, like JAPA) over the last two decades have focused on communicative or collaborative planning. Proponents of these approaches, most notably Judith Innes, Patsy Healey, Larry Susskind, and John Forester, developed the idea that the collaborative and communicative structures that planners use impact the quality, legitimacy, and equity of planning outcomes. In practice, communicative theory has led to participatory initiatives, such as those observed in New Orleans (post-Katrina, http://goo.gl/A5J5wk), Chattanooga (to revitalize its downtown and riverfront, http://goo.gl/zlQfKB), and in many other smaller efforts to foment wider involvement in decision making. Collaboration has also impacted regional governance structures, leading to more consensus-based forms of decision making, notably CALFED (SF Bay estuary governance, http://goo.gl/EcXx9Q) and transportation planning with Metropolitan Planning Organizations (MPOs)…
Most studies testing the implementation of collaborative planning have been case studies. Previous work by authors such as Innes and Booher has provided valuable qualitative data about collaboration in planning, but few studies have attempted to empirically test the hypothesis that consensus building and participatory practices lead to better planning outcomes.
Robert Deyle (Florida State) and Ryan Wiedenman (Atkins Global) build on previous case study research by surveying officials involved in developing long-range transportation plans in 88 U.S. MPOs about the process and outcomes of those plans. The study tests the hypothesis that collaborative processes provide better outcomes and enhanced long-term relationships in situations where “many stakeholders with different needs” have “shared interests in common resources or challenges” and where “no actor can meet his/her interests without the cooperation of many others” (Innes and Booher 2010, 7; Innes and Gruber 2005, 1985–2186). Current theory posits that consensus-based collaboration requires 1) the presence of all relevant interests, 2) mutual interdependence for goal achievement, and 3) honest and authentic dialog between participants (Innes and Booher 2010, 35–36; Deyle and Wiedenman 2014).

Figure 2: Deyle and Wiedenman (2014)
By surveying planning authorities, the authors found that most of the conditions (see Figure 2, above) posited in the collaborative planning literature had statistically significant impacts on planning outcomes. These included perceptions of plan quality and participant satisfaction with the plan, as well as intangible outcomes that benefit both the participants and their ongoing collaboration efforts. However, having a planning process in which all or most decisions were made by consensus did not improve outcomes. …
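The paper itself relies on survey-based statistical models; purely as an illustration of what testing for a “statistically significant impact” can look like, here is a minimal permutation test on invented plan-satisfaction ratings (the groups, numbers, and condition are hypothetical, not the authors’ data):

```python
import random
import statistics

# Hypothetical 1-5 plan-satisfaction ratings from MPO officials, grouped
# by whether respondents reported "authentic dialogue" in the process.
with_dialogue = [4, 5, 4, 4, 5, 3, 4, 5]
without_dialogue = [2, 3, 3, 2, 4, 2, 3, 3]

def permutation_p_value(a, b, trials=10000, seed=0):
    """Two-sided permutation test: how often does shuffling the group
    labels produce a mean difference at least as large as observed?"""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_p_value(with_dialogue, without_dialogue)
# A small p (conventionally < 0.05) would count as a statistically
# significant association between the condition and the outcome.
```

This is only a sketch of the logic; the study's actual models control for many more variables than a two-group comparison can.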
Deyle, Robert E., and Ryan E. Wiedenman. “Collaborative Planning by Metropolitan Planning Organizations: A Test of Causal Theory.” Journal of Planning Education and Research (2014): 0739456X14527621.
To access this article FREE until May 31 click the following links: Online, http://goo.gl/GU9inf, PDF, http://goo.gl/jehAf1.”

#BringBackOurGirls: Can Hashtag Activism Spur Social Change?


Nancy Ngo at TechChange: “In our modern times of media cycles fighting for our short attention spans, it is easy to ride the momentum of a highly visible campaign that can quickly fizzle out once another competing story emerges. Since the kidnappings of approximately 300 Nigerian girls by the militant Islamist group Boko Haram last month, the international community has embraced the hashtag “#BringBackOurGirls” in a very vocal and visible social media campaign demanding action to rescue the Chibok girls. But one month after the mass kidnapping, with the girls still not rescued, do we need to take a different approach? Will #BringBackOurGirls be just another campaign we forget about once the next celebrity scandal becomes breaking news?

#BringBackOurGirls goes global starting in Nigeria

Most of the #BringBackOurGirls campaign activity has been highly visible on Twitter, Facebook, and international media outlets. In this fascinating Twitter heat map, created with the mapping tool CartoDB and featured in TIME Magazine, we can see a time-lapsed digital map of how the hashtag “#BringBackOurGirls” spread globally, starting organically from within Nigeria in mid-April.
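The mechanics behind such a time-lapsed map are simple aggregation: bin geotagged tweets by day and location, then animate the bins. A minimal sketch of that aggregation step, using a handful of invented tweet records rather than real Twitter data:

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical sample of geotagged hashtag tweets as (timestamp, country)
# pairs -- maps like the TIME/CartoDB one are built from millions of these.
tweets = [
    ("2014-04-23T10:05", "Nigeria"),
    ("2014-04-23T11:40", "Nigeria"),
    ("2014-04-30T09:12", "UK"),
    ("2014-05-07T14:55", "USA"),
    ("2014-05-07T16:20", "USA"),
    ("2014-05-07T18:03", "Nigeria"),
]

def spread_by_day(records):
    """Bin tweet counts per (day, country) -- the raw material for a
    time-lapsed heat map of how a hashtag spreads geographically."""
    bins = defaultdict(Counter)
    for ts, country in records:
        day = datetime.strptime(ts, "%Y-%m-%dT%H:%M").date().isoformat()
        bins[day][country] += 1
    return dict(bins)

counts = spread_by_day(tweets)
# In this toy sample, activity starts concentrated in Nigeria on
# 2014-04-23 and later appears in the USA -- the pattern the map animates.
```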

The #BringBackOurGirls hashtag has been embraced widely by many public figures and has garnered wide support across the world. Michelle Obama, David Cameron, and Malala Yousafzai have posted images with the hashtag, along with celebrities such as Ellen DeGeneres, Angelina Jolie, and Dwayne Johnson. To date, nearly 1 million people have signed the Change.org petition. Countries including the USA, UK, China, and Israel have pledged to join the rescue efforts, and other human rights campaigns have joined the #BringBackOurGirls Twitter momentum, as seen on this Hashtagify map.

Is #BringBackOurGirls repeating the mistakes of #KONY2012?


A clear example of a past campaign where this happened is KONY2012, which brought some albeit short-lived urgency to addressing the child soldiers recruited by Joseph Kony, leader of the Lord’s Resistance Army (LRA). Michael Poffenberger, who worked on that campaign, will join us as a guest expert in the TC110: Social Media for Social Change online course in June 2013 and compare it to the current #BringBackOurGirls campaign. Many have drawn parallels between the two campaigns and warned of the false optimism that hyped social media messages can bring when context is not fully considered and understood.

According to Lauren Wolfe of Foreign Policy magazine, “Understanding what has happened to the Nigerian girls and how to rescue them means beginning to face what has happened to hundreds of thousands, if not millions, of girls over years in global armed conflict.” To some critics, this hashtag trivializes the weaknesses of Nigerian democracy that have been exposed. Critics of using social media in advocacy campaigns have used the term “slacktivism” to describe the passive, minimal effort needed to participate in these movements. Others have cited such media waves being exploited for individual gain, as opposed to genuinely benefiting the girls. Florida State University political science professor Will H. Moore argues that this hashtag activism is not only hurting the larger cause of rescuing the kidnapped girls but actually helping Boko Haram. Jumoke Balogun, Co-Founder of CompareAfrique, also highlights the limits of the #BringBackOurGirls hashtag’s impact.

Hashtag activism, alone, is not enough

With all this social media activity and international press, what actual progress has been made in rescuing the kidnapped girls? If the objective is raising awareness of the issue, yes, the hashtag has been successful. If the objective is to rescue the girls, we still have a long way to go, even if the hashtag campaign has been part of a multi-pronged approach to galvanize resources into action.

The bottom line: social media can be a powerful tool to bring visibility and awareness to a cause, but a hashtag alone is not enough to bring about social change. A myriad of resources must be coordinated to effectively implement this rescue mission, which will only become more difficult as more time passes. However, prioritizing and shining a sustained light on the problem, instead of getting distracted by competing media cycles on celebrities getting into petty fights, is the first step toward a solution…”

Rethinking Personal Data: A New Lens for Strengthening Trust


New report from the World Economic Forum: “As we look at the dynamic change shaping today’s data-driven world, one thing is becoming increasingly clear. We really do not know that much about it. Polarized along competing but fundamental principles, the global dialogue on personal data is inchoate and pulled in a variety of directions. It is complicated, conflated and often fueled by emotional reactions more than informed understandings.
The World Economic Forum’s global dialogue on personal data seeks to cut through this complexity. A multi-year initiative with global insights from the highest levels of leadership from industry, governments, civil society and academia, this work aims to articulate an ascendant vision of the value a balanced and human-centred personal data ecosystem can create.
Yet despite these aspirations, there is a crisis in trust. Concerns are voiced from a variety of viewpoints at a variety of scales. Industry, government and civil society are all uncertain on how to create a personal data ecosystem that is adaptive, reliable, trustworthy and fair.
The shared anxieties stem from the overwhelming challenge of transitioning into a hyperconnected world. The growth of data, the sophistication of ubiquitous computing and the borderless flow of data are all outstripping the ability to effectively govern on a global basis. We need the means to effectively uphold fundamental principles in ways fit for today’s world.
Yet despite the size and scope of the complexity, it cannot become a reason for inaction. The need for pragmatic and scalable approaches which strengthen transparency, accountability and the empowerment of individuals has become a global priority.
Tools are needed to answer fundamental questions: Who has the data? Where is the data? What is being done with it? All of these uncertainties need to be addressed for meaningful progress to occur.
Objectives need to be set. The benefits and harms of using personal data need to be more precisely defined. The ambiguity surrounding privacy needs to be demystified and placed into a real-world context.
Individuals need to be meaningfully empowered. Better engagement over how data is used by third parties is one opportunity for strengthening trust. Supporting the ability for individuals to use personal data for their own purposes is another area for innovation and growth. But combined, the overall lack of engagement is undermining trust.
Collaboration is essential. The need for interdisciplinary collaboration between technologists, business leaders, social scientists, economists and policy-makers is vital. The complexities for delivering a sustainable and balanced personal data ecosystem require that these multifaceted perspectives are all taken into consideration.
With a new lens for using personal data, progress can occur.

Figure 1: A new lens for strengthening trust
Source: World Economic Forum

Believe the hype: Big data can have a big social impact


Annika Small at the Guardian: “Given all the hype around so called big data at the moment, it would be easy to dismiss it as nothing more than the latest technology buzzword. This would be a mistake, given that the application and interpretation of huge – often publicly available – data sets is already supporting new models of creativity, innovation and engagement.
To date, stories of big data’s progress and successes have tended to come from government and the private sector, but we’ve heard little about its relevance to social organisations. Yet big data can fuel big social change.
It’s already playing a vital role in the charitable sector. Some social organisations are using existing open government data to better target their services, to improve advocacy and fundraising, and to support knowledge sharing and collaboration between different charities and agencies. Crowdsourcing of open data also offers a new way for not-for-profits to gather intelligence, and there is a wide range of freely available online tools to help them analyse the information.
However, realising the potential of big and open data presents a number of technical and organisational challenges for social organisations. Many don’t have the required skills, awareness and investment to turn big data to their advantage. They also tend to lack the access to examples that might help demystify the technicalities and focus on achievable results.
Overcoming these challenges can be surprisingly simple: Keyfund, for example, gained insight into what made for a successful application to their scheme through using a free, online tool to create word clouds out of all the text in their application forms. Many social organisations could use this same technique to better understand the large volume of unstructured text that they accumulate – in doing so, they would be “doing big data” (albeit in a small way). At the other end of the scale, Global Giving has developed its own sophisticated set of analytical tools to better understand the 57,000+ “stories” gathered from its network.
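A word cloud is just a word-frequency tally rendered visually. A minimal sketch of the underlying count, using invented application-form snippets and an illustrative stopword list (Keyfund’s actual tool and data are not shown here):

```python
import re
from collections import Counter

# Illustrative stopword list; real tools use much longer ones.
STOPWORDS = {"the", "and", "to", "of", "a", "in", "we", "our", "for", "by"}

def top_terms(texts, n=3):
    """Count word frequencies across free-text responses -- the same
    tally a word-cloud tool visualises by font size."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(n)

# Hypothetical snippets from funding application forms:
applications = [
    "We want to build confidence in young people",
    "Young people in our area lack confidence",
    "A project run by young people for young people",
]
top = top_terms(applications)
# The dominant terms ("young", "people", "confidence") are exactly what
# the word cloud would render largest.
```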
Innovation often happens when different disciplines collide and it’s becoming apparent that most value – certainly most social value – is likely to be created at the intersection of government, private and social sector data. That could be the combination of data from different sectors, or better “data collaboration” within sectors.
The Housing Association Charitable Trust (HACT) has produced two original tools that demonstrate this. Its Community Insight tool combines data from different sectors, allowing housing providers easily to match information about their stock to a large store of well-maintained open government figures. Meanwhile, its Housing Big Data programme is building a huge dataset by combining stats from 16 different housing providers across the UK. While Community Insight allows each organisation to gain better individual understanding of their communities (measuring well-being and deprivation levels, tracking changes over time, identifying hotspots of acute need), Housing Big Data is making progress towards a much richer network of understanding, providing a foundation for the sector to collaboratively identify challenges and quantify the impact of their interventions.
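The core operation behind a tool like Community Insight is a join between an organisation’s own records and open indicators keyed by area. A minimal sketch with invented area codes, unit counts, and scores (not HACT’s actual schema or data):

```python
# Hypothetical rows: a housing provider's stock list and open government
# deprivation scores, joined on area code -- the kind of match that
# Community Insight automates against maintained open datasets.
stock = [
    {"area": "E01000001", "units": 120},
    {"area": "E01000002", "units": 45},
]
deprivation = {"E01000001": 7.2, "E01000002": 31.8}

def match_stock_to_open_data(stock_rows, scores):
    """Attach an open-data indicator to each stock record; missing
    areas get None rather than raising."""
    return [
        {**row, "deprivation_score": scores.get(row["area"])}
        for row in stock_rows
    ]

matched = match_stock_to_open_data(stock, deprivation)
# Providers can now rank their stock to find hotspots of acute need:
hotspots = sorted(matched, key=lambda r: r["deprivation_score"], reverse=True)
```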
Alongside this specific initiative from HACT, it’s also exciting to see programmes such as 360giving, which forges connections between a range of private and social enterprises and lays foundations for UK social investors to be a significant source of information over the next decade. Certainly, The Big Lottery Fund’s publication of open data late last year is a milestone which also highlights how far we have to travel as a sector before we are truly “data-rich”.
At Nominet Trust, we have produced the Social Tech Guide to demonstrate the scale and diversity of social value being generated internationally – much of which is achieved through harnessing the power of big data. From Knewton creating personally tailored learning programmes, to Cellslider using the power of the crowd to advance cancer research, there is no shortage of inspiration. The UN’s Global Pulse programme is another great example, with its focus on how we can combine private and public sources to pin down the size and shape of a social challenge, and calibrate our collective response.
These examples of data-driven social change demonstrate the huge opportunities for social enterprises to harness technology to generate insights, to drive more effective action and to fuel social change. If we are to realise this potential, we need to continue to stretch ourselves as social enterprises and social investors.”

Working Together in a Networked Economy


Yochai Benkler at MIT Technology Review on Distributed Innovation and Creativity, Peer Production, and Commons in a Networked Economy: “A decade ago, Wikipedia and open-source software were treated as mere curiosities in business circles. Today, these innovations represent a core challenge to how we have thought about property and contract, organization theory and management, over the past 150 years.
For the first time since before the Industrial Revolution, the most important inputs into some of the most important economic sectors are radically distributed in the population, and the core capital resources necessary for these economic activities have become widely available in wealthy countries and among the wealthier populations of emerging economies. This technological feasibility of social production generally, and peer production — the kind of network collaboration of which Wikipedia is the most prominent example — more specifically, is interacting with the high rate of change and the escalating complexity of global innovation and production systems.
Increasingly, in the business literature and practice, we see a shift toward a range of open innovation models that allow more fluid flows of information, talent, and projects across organizations.
Peer production, the most significant organizational innovation that has emerged from Internet-mediated social practice, is large-scale collaborative engagement by groups of individuals who come together to produce products more complex than they could have produced on their own. Organizationally, it combines three core characteristics: decentralization of conception and execution of problems and solutions; harnessing of diverse motivations; and separation of governance and management from property and contract.
These characteristics make peer production highly adept at experimentation, innovation, and adaptation in changing and complex environments. If the Web was innovation on a commons-based model — allocating access and use rights in resources without giving anyone exclusive rights to exclude anyone else — Wikipedia’s organizational innovation is in problem-solving.
Wikipedia’s user-generated content model incorporates knowledge that simply cannot be managed well, either because it is tacit knowledge (possessed by individuals but difficult to communicate to others) or because it is spread among too many people to contract for. The user-generated content model also permits organizations to explore a space of highly diverse interests and tastes that was too costly for traditional organizations to explore.
Peer production allows a diverse range of people, regardless of affiliation, to dynamically assess and reassess available resources, projects, and potential collaborators and to self-assign to projects and collaborations. By leaving these elements to self-organization dynamics, peer production overcomes the lossiness of markets and bureaucracies, and its benefits are sufficient that the practice has been widely adopted by firms and even governments.
In a networked information economy, commons-based practices and open innovation provide an evolutionary model typified by repeated experimentation and adoption of successful adaptation rather than the more traditional, engineering-style approaches to building optimized systems.
Commons-based production and peer production are edge cases of a broader range of openness strategies that trade off the freedom of these two approaches and the manageability and appropriability that many more-traditional organizations seek to preserve. Some firms are using competitions and prizes to diversify the range of people who work on their problems, without ceding contractual control over the project. Many corporations are participating in networks of firms engaging in a range of open collaborative innovation practices with a more manageable set of people, resources, and projects to work with than a fully open-to-the-world project. And the innovation clusters anchored around universities represent an entrepreneurial model at the edge of academia and business, in which academia allows for investment in highly uncertain innovation, and the firms allow for high-risk, high-reward investment models.

To read the full article, click here.

Continued Progress and Plans for Open Government Data


Steve VanRoekel and Todd Park at the White House: “One year ago today, President Obama signed an executive order that made open and machine-readable data the new default for government information. This historic step is helping to make government-held data more accessible to the public and to entrepreneurs while appropriately safeguarding sensitive information and rigorously protecting privacy.
Freely available data from the U.S. government is an important national resource, serving as fuel for entrepreneurship, innovation, scientific discovery, and economic growth. Making information about government operations more readily available and useful is also core to the promise of a more efficient and transparent government. This initiative is a key component of the President’s Management Agenda and our efforts to ensure the government is acting as an engine to expand economic growth and opportunity for all Americans. The Administration is committed to driving further progress in this area, including by designating Open Data as one of our key Cross-Agency Priority Goals.
Over the past few years, the Administration has launched a number of Open Data Initiatives aimed at scaling up open data efforts across the Health, Energy, Climate, Education, Finance, Public Safety, and Global Development sectors. The White House has also launched Project Open Data, designed to share best practices, examples, and software code to assist federal agencies with opening data. These efforts have helped unlock troves of valuable data—that taxpayers have already paid for—and are making these resources more open and accessible to innovators and the public.
Other countries are also opening up their data. In June 2013, President Obama and other G7 leaders endorsed the Open Data Charter, in which the United States committed to publish a roadmap for our nation’s approach to releasing and improving government data for the public.
Building upon the Administration’s Open Data progress, and in fulfillment of the Open Data Charter, today we are excited to release the U.S. Open Data Action Plan. The plan includes a number of exciting enhancements and new data releases planned in 2014 and 2015, including:

  • Small Business Data: The Small Business Administration’s (SBA) database of small business suppliers will be enhanced so that software developers can create tools to help manufacturers more easily find qualified U.S. suppliers, ultimately reducing the transaction costs to source products and manufacture domestically.
  • Smithsonian American Art Museum Collection: The Smithsonian American Art Museum’s entire digitized collection will be opened to software developers to make educational apps and tools. Today, even museum curators do not have easily accessible information about their art collections. This information will soon be available to everyone.
  • FDA Adverse Drug Event Data: Each year, healthcare professionals and consumers submit millions of individual reports on drug safety to the Food and Drug Administration (FDA). These anonymous reports are a critical tool to support drug safety surveillance. Today, this data is only available through limited quarterly reports. But the Administration will soon be making these reports available in their entirety so that software developers can build tools to help pull potentially dangerous drugs off shelves faster than ever before.

We look forward to implementing the U.S. Open Data Action Plan, and to continuing to work with our partner countries in the G7 to take the open data movement global”.

Can Big Data Stop Wars Before They Happen?


Foreign Policy: “It has been almost exactly two decades since conflict prevention shot to the top of the peace-building agenda, as large-scale killings shifted from interstate wars to intrastate and intergroup conflicts. What could we have done to anticipate and prevent the 100 days of genocidal killing in Rwanda that began in April 1994 or the massacre of thousands of Bosnian Muslims at Srebrenica just over a year later? The international community recognized that conflict prevention could no longer be limited to diplomatic and military initiatives, but that it also requires earlier intervention to address the causes of violence between nonstate actors, including tribal, religious, economic, and resource-based tensions.
For years, even as it was pursued as doggedly as personnel and funding allowed, early intervention remained elusive, a kind of Holy Grail for peace-builders. This might finally be changing. The rise of data on social dynamics and what people think and feel — obtained through social media, SMS questionnaires, increasingly comprehensive satellite information, news-scraping apps, and more — has given the peace-building field hope of harnessing a new vision of the world. But to cash in on that hope, we first need to figure out how to understand all the numbers and charts and figures now available to us. Only then can we expect to predict and prevent events like the recent massacres in South Sudan or the ongoing violence in the Central African Republic.
A growing number of initiatives have tried to make it across the bridge between data and understanding. They’ve ranged from small nonprofit shops of a few people to massive government-funded institutions, and they’ve been moving forward in fits and starts. Few of these initiatives have been successful in documenting incidents of violence actually averted or stopped. Sometimes that’s simply because violence, or the absence of it, isn’t verifiable. The growing literature on big data and conflict prevention today is replete with caveats about “overpromising and underdelivering” and the persistent gap between early warning and early action. In the case of the Conflict Early Warning and Response Mechanism (CEWARN) system in East Africa — one of the earliest and most prominent attempts at early intervention — it is widely accepted that the project largely failed to use the data it retrieved for effective conflict management. It relied heavily on technology to produce large databases, while lacking the personnel to effectively analyze them or take meaningful early action.
To be sure, disappointments are to be expected when breaking new ground. But they don’t have to continue forever. This pioneering work demands not just data and technology expertise. Also critical is cross-discipline collaboration between the data experts and the conflict experts, who know intimately the social, political, and geographic terrain of different locations. What was once a clash of cultures over the value and meaning of metrics when it comes to complex human dynamics needs to morph into collaboration. This is still pretty rare, but if the past decade’s innovations are any prologue, we are hopefully headed in the right direction.
* * *
Over the last three years, the U.S. Defense Department, the United Nations, and the CIA have all launched programs to parse the masses of public data now available, scraping and analyzing details from social media, blogs, market data, and myriad other sources to achieve variations of the same goal: anticipating when and where conflict might arise. The Defense Department’s Information Volume and Velocity program is designed to use “pattern recognition to detect trends in a sea of unstructured data” that would point to growing instability. The U.N.’s Global Pulse initiative’s stated goal is to track “human well-being and emerging vulnerabilities in real-time, in order to better protect populations from shocks.” The Open Source Indicators program at the CIA’s Intelligence Advanced Research Projects Activity aims to anticipate “political crises, disease outbreaks, economic instability, resource shortages, and natural disasters.” Each looks to the growing stream of public data to detect significant population-level changes.
Large institutions with deep pockets have always been at the forefront of efforts in the international security field to design systems for improving data-driven decision-making. They’ve followed the lead of large private-sector organizations where data and analytics rose to the top of the corporate agenda. (In that sector, the data revolution is promising “to transform the way many companies do business, delivering performance improvements not seen since the redesign of core processes in the 1990s,” as David Court, a director at consulting firm McKinsey, has put it.)
What really defines the recent data revolution in peace-building, however, is that it is transcending size and resource limitations. It is finding its way to small organizations operating at local levels and using knowledge and subject experts to parse information from the ground. It is transforming the way peace-builders do business, delivering data-led programs and evidence-based decision-making not seen since the field’s inception in the latter half of the 20th century.
One of the most famous recent examples is the 2013 Kenyan presidential election.
In March 2013, the world was watching and waiting to see whether the vote would produce more of the violence that had left at least 1,300 people dead and 600,000 homeless during and after the 2007 elections. In the intervening years, a web of NGOs worked to set up early-warning and early-response mechanisms to defuse tribal rivalries, party passions, and rumor-mongering. Many of the projects were technology-based initiatives trying to leverage data sources in new ways — including a collaborative effort spearheaded and facilitated by a Kenyan nonprofit called Ushahidi (“witness” in Swahili) that designs open-source data collection and mapping software. The Umati (meaning “crowd”) project used an Ushahidi program to monitor media reports, tweets, and blog posts to detect rising tensions, frustration, calls to violence, and hate speech — and then sorted and categorized it all on one central platform. The information fed into election-monitoring maps built by the Ushahidi team, while mobile-phone provider Safaricom donated 50 million text messages to a local peace-building organization, Sisi ni Amani (“We are Peace”), so that it could act on the information by sending texts — which had been used to incite and fuel violence during the 2007 elections — aimed at preventing violence and quelling rumors.
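The monitoring step described above can be sketched in miniature: scan incoming messages for watch-listed phrases and score them for follow-up. This is an invented toy, not Umati's actual pipeline, which combined software with human annotators and far richer categories; the term list and weights here are assumptions purely for illustration.

```python
# Toy watch list mapping phrases to severity weights (invented for
# illustration; a real system would use curated, locally informed terms).
WATCH_TERMS = {"attack": 3, "chase them out": 3, "rumor": 1, "protest": 1}

def flag_message(text):
    """Return (severity score, matched terms) for one media item."""
    lowered = text.lower()
    hits = [term for term in WATCH_TERMS if term in lowered]
    return sum(WATCH_TERMS[t] for t in hits), hits

stream = [
    "Peaceful queues at the polling station this morning",
    "Rumor spreading that results are being changed",
]
for msg in stream:
    score, hits = flag_message(msg)
    if score >= 1:
        print(score, hits, msg)  # only flagged items surface for review
```

In practice the flagged items would be routed to human analysts, mapped, and matched against response capacity, which is where projects like Sisi ni Amani's text blasts came in.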
The first challenges came around 10 a.m. on the opening day of voting. “Rowdy youth overpowered police at a polling station in Dandora Phase 4,” one of the informal settlements in Nairobi that had been a site of violence in 2007, wrote Neelam Verjee, programs manager at Sisi ni Amani. The young men were blocking others from voting, and “the situation was tense.”
Sisi ni Amani sent a text blast to its subscribers: “When we maintain peace, we will have joy & be happy to spend time with friends & family but violence spoils all these good things. Tudumishe amani [“Maintain the peace”] Phase 4.” Meanwhile, security officers, who had been called separately, arrived at the scene and took control of the polling station. Voting resumed with little violence. According to interviews collected by Sisi ni Amani after the vote, the message “was sent at the right time” and “helped to calm down the situation.”
In many ways, Kenya’s experience is the story of peace-building today: Data is changing the way professionals in the field think about anticipating events, planning interventions, and assessing what worked and what didn’t. But it also underscores the possibility that we might be edging closer to a time when peace-builders at every level and in all sectors — international, state, and local, governmental and not — will have mechanisms both to know about brewing violence and to save lives by acting on that knowledge.
Three important trends underlie the optimism. The first is the sheer amount of data that we’re generating. In 2012, humans plugged into digital devices managed to generate more data in a single year than over the course of world history — and that rate more than doubles every year. As of 2012, 2.4 billion people — 34 percent of the world’s population — had a direct Internet connection. The growth is most stunning in regions like the Middle East and Africa where conflict abounds; access has grown 2,634 percent and 3,607 percent, respectively, in the last decade.
The growth of mobile-phone subscriptions, which allow their owners to be part of new data sources without a direct Internet connection, is also staggering. In 2013, there were almost as many cell-phone subscriptions in the world as there were people. In Africa, there were 63 subscriptions per 100 people, and there were 105 per 100 people in the Arab states.
The second trend has to do with our expanded capacity to collect and crunch data. Not only do we have more computing power enabling us to produce enormous new data sets — such as the Global Database of Events, Language, and Tone (GDELT) project, which tracks almost 300 million conflict-relevant events reported in the media between 1979 and today — but we are also developing more-sophisticated methodological approaches to using these data as raw material for conflict prediction. New machine-learning methodologies, which use algorithms to make predictions (like a spam filter, but much, much more advanced), can provide “substantial improvements in accuracy and performance” in anticipating violent outbreaks, according to Chris Perry, a data scientist at the International Peace Institute.
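The machine-learning idea Perry describes can be sketched minimally, assuming entirely synthetic data: fit a logistic-regression classifier that predicts a binary "violence next month" label from simple monthly event counts. Real pipelines over sources like GDELT use far richer features and models; this only illustrates the supervised-learning mechanic.

```python
import math
import random

# Synthetic training data: two invented features (protest events and flagged
# hate-speech items per month) and a made-up threshold rule for the label.
random.seed(0)

def make_example():
    protests = random.randint(0, 10)
    hate_speech = random.randint(0, 10)
    label = 1 if protests + 2 * hate_speech > 12 else 0
    return [1.0, float(protests), float(hate_speech)], label  # 1.0 = bias term

data = [make_example() for _ in range(500)]

def predict(w, x):
    """Logistic model: probability of violence given feature vector x."""
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

w = [0.0, 0.0, 0.0]
for _ in range(300):                      # plain SGD on the log-loss
    for x, y in data:
        p = predict(w, x)
        w = [wi + 0.01 * (y - p) * xi for wi, xi in zip(w, x)]

accuracy = sum((predict(w, x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

The hard part in practice is not the fitting step shown here but feature design and validation against held-out conflicts, which is exactly where the cross-discipline collaboration discussed below matters.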
This brings us to the third trend: the nature of the data itself. When it comes to conflict prevention and peace-building, progress is not simply a question of “more” data, but also different data. For the first time, digital media — user-generated content and online social networks in particular — tell us not just what is going on, but also what people think about the things that are going on. Excitement in the peace-building field centers on the possibility that we can tap into data sets to understand, and preempt, the human sentiment that underlies violent conflict.
Realizing the full potential of these three trends means figuring out how to distinguish between the information, which abounds, and the insights, which are actionable. It is a distinction that is especially hard to make because it requires cross-discipline expertise that combines the wherewithal of data scientists with that of social scientists and the knowledge of technologists with the insights of conflict experts.

How Helsinki Became the Most Successful Open-Data City in the World


Olli Sulopuisto in Atlantic Cities: “If there’s something you’d like to know about Helsinki, someone in the city administration most likely has the answer. For more than a century, this city has funded its own statistics bureaus to keep data on the population, businesses, building permits, and most other things you can think of. Today, that information is stored and made freely available on the internet by an appropriately named agency, City of Helsinki Urban Facts.
There’s a potential problem, though. Helsinki may be Finland’s capital and largest city, with 620,000 people. But it’s only one of more than a dozen municipalities in a metropolitan area of almost 1.5 million. So in terms of urban data, if you’re only looking at Helsinki, you’re missing out on more than half of the picture.
Helsinki and three of its neighboring cities are now banding together to solve that problem. Through an entity called Helsinki Region Infoshare, they are bringing together their data so that a fuller picture of the metro area can come into view.
That’s not all. At the same time these datasets are going regional, they’re also going “open.” Helsinki Region Infoshare publishes all of its data in formats that make it easy for software developers, researchers, journalists and others to analyze, combine or turn into web-based or mobile applications that citizens may find useful. In four years of operation, the project has produced more than 1,000 “machine-readable” data sources such as a map of traffic noise levels, real-time locations of snow plows, and a database of corporate taxes.
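The point of "machine-readable" formats is that a few lines of code can consume them. As an illustration, the sketch below averages readings per district from a CSV in the spirit of the traffic-noise dataset mentioned above; the column names and values are invented, not Helsinki Region Infoshare's actual schema.

```python
import csv
import io

# Hypothetical excerpt of a machine-readable open dataset; the columns and
# district names are invented for illustration.
raw = """district,noise_db
Kallio,68.5
Kallio,71.0
Espoo-center,59.2
"""

def mean_noise_by_district(csv_text):
    """Average the noise_db column per district."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        d = totals.setdefault(row["district"], [0.0, 0])
        d[0] += float(row["noise_db"])
        d[1] += 1
    return {k: s / n for k, (s, n) in totals.items()}

print(mean_noise_by_district(raw))
```

Publishing data in a form this easy to parse is what lets third-party developers build the map overlays and mobile apps described in the rest of the article.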
A global leader
All of this has put the Helsinki region at the forefront of the open-data movement that is sweeping cities across much of the world. The concept is that all kinds of good things can come from assembling city data, standardizing it and publishing it for free. Last month, Helsinki Region Infoshare was presented with the European Commission’s prize for innovation in public administration.

The project is creating transparency in government and a new digital commons. It’s also fueling a small industry of third-party application developers who take all this data and turn it into consumer products.
For example, Helsinki’s city council has a paperless system called Ahjo for handling its agenda items, minutes and exhibits that accompany council debates. Recently, the datasets underlying Ahjo were opened up. The city built a web-based interface for browsing the documents, but a software developer who doesn’t even live in Helsinki created a smartphone app for it. Now anyone who wants to keep up with just about any decision Helsinki’s leaders have before them can do so easily.
Another example is a product called BlindSquare, a smartphone app that helps blind people navigate the city. An app developer took the Helsinki region’s data on public transport and services, and mashed it up with location data from the social networking app Foursquare as well as mapping tools and the GPS and artificial voice capabilities of new smartphones. The product now works in dozens of countries and languages and sells for about €17 ($24 U.S.).

Helsinki also runs competitions for developers who create apps with public-sector data. That’s nothing new — BlindSquare won the Apps4Finland and European OpenCities app challenges in 2012. But this year, they’re trying a new approach to the app challenge concept, funded by the European Commission’s prize money and Sitra.
It’s called Datademo. Instead of looking for polished but perhaps random apps to heap fame and prize money on, Datademo is trying to get developers to aim their creative energies toward general goals city leaders think are important. The current competition specifies that apps have to use open data from the Helsinki region or from Finland to make it easier for citizens to find information and participate in democracy. The competition also gives developers seed funding upfront.
Datademo received more than 40 applications in its first round. Of those, the eight best suggestions were given three months and €2,000 ($2,770 U.S.) to implement their ideas. The same process will be repeated twice, resulting in dozens of new app ideas that will get a total of €48,000 ($66,000 U.S.) in development subsidies. Keeping with the spirit of transparency, the voting and judging process is open to all who submit an idea for each round….”

The advent of crowdfunding innovations for development


SciDevNet: “FundaGeek, TechMoola and RocketHub have more in common than just their curious names. These are all the monikers of crowdfunding websites that are dedicated to raising money for science and technology projects. As the coffers that were traditionally used to fund research and development have been squeezed in recent years, several such sites have sprouted up.
In 2013, general crowdfunding site Kickstarter saw a total of US$480 million pledged to its projects by three million backers. That’s up from US$320 million in 2012, US$99 million in 2011 and just US$28 million in 2010. Kickstarter expects the figures to climb further this year, and not just for popular projects such as films and books.
Science and technology projects — particularly those involving simple designs — are starting to make waves on these sites. And new sites, such as those bizarrely named ones, are now catering specifically for scientific projects, widening the choice of platforms on offer and raising crowdfunding’s profile among the global scientific community online.
All this means that crowdfunding is fast becoming one of the most significant innovations in funding the development of technology that can aid poor communities….
A good example of how crowdfunding can help the developing world is the GravityLight, a product launched on Indiegogo over a year ago that uses gravity to create light. Not only did UK design company Therefore massively exceed its initial funding target — ultimately raising US$400,000 instead of a planned US$55,000 — it amassed a global network of investors and distributors that has allowed the light to be trialled in 26 countries as of last December.
The light was developed in-house after Therefore was given a brief to produce a cheap solar-powered lamp by private clients. Although this project faltered, the team independently set out to produce a lamp to replace the ubiquitous and dangerous kerosene lamps widely used in remote areas in Africa. After several months of development, Therefore had designed a product that is powered by a rope with a heavy weight on its end being slowly drawn through the light’s gears (see video)…
Crowdfunding is not always related to a specific product. Earlier this year, Indiegogo hosted a project hoping to build a clean energy store in a Ugandan village. The idea is to create an ongoing supply chain for technologies such as cleaner-burning stoves, water filters and solar lights that will improve or save lives, according to ENVenture, the project’s creators. The US$2,000 target was comfortably exceeded…”