Health plan giants to make payment data accessible to public


Paul Demko in Modern Healthcare: “A new initiative by three of the country’s largest health plans has the potential to transform the accessibility of claims payment data, according to healthcare finance experts. UnitedHealthcare, Aetna and Humana announced a partnership on Wednesday with the Health Care Cost Institute to create a payment database that will be available to the public for free. …The database will be created by HCCI, a not-for-profit group established in 2011, from information provided by the insurers. HCCI expects it to be available in 2015 and that more health plans will join the initiative prior to its launch.
UnitedHealthcare is the largest insurer in the country in terms of the number of individuals covered through its products. All three participating plans are publicly traded, for-profit companies.
Stephen Parente, chair of HCCI’s board, said the organization was approached by the insurance companies about the initiative. “I’m not quite sure what the magic trigger was,” said Parente, who is a professor at the University of Minnesota and advised John McCain’s 2008 presidential campaign on healthcare issues. “We’ve kind of proven as a nonprofit and an independent group that we can be trustworthy in working with their data.”
Experts say cost transparency is being spurred by a number of developments in the healthcare sector. The trend towards high-deductible plans is giving consumers a greater incentive to understand how much healthcare costs and to utilize it more efficiently. In addition, the launch of the exchanges under the Patient Protection and Affordable Care Act has brought unprecedented attention to the difficulties faced by individuals in shopping for insurance coverage.
“There’s so many things that are kind of pushing the industry toward this more transparent state,” Hempstead said. “There’s just this drumbeat that people want to have this information.”
Insurers may also be realizing they aren’t likely to have a choice about sharing payment information. In recent years, more and more states have passed laws requiring the creation of claims databases. Currently, 11 states have all-payer claims databases, and six other states are in the process of creating such a resource, according to the All-Payer Claims Database Council….”

Rethinking Personal Data: A New Lens for Strengthening Trust


New report from the World Economic Forum: “As we look at the dynamic change shaping today’s data-driven world, one thing is becoming increasingly clear. We really do not know that much about it. Polarized along competing but fundamental principles, the global dialogue on personal data is inchoate and pulled in a variety of directions. It is complicated, conflated and often fueled by emotional reactions more than informed understandings.
The World Economic Forum’s global dialogue on personal data seeks to cut through this complexity. A multi-year initiative with global insights from the highest levels of leadership from industry, governments, civil society and academia, this work aims to articulate an ascendant vision of the value a balanced and human-centred personal data ecosystem can create.
Yet despite these aspirations, there is a crisis in trust. Concerns are voiced from a variety of viewpoints at a variety of scales. Industry, government and civil society are all uncertain on how to create a personal data ecosystem that is adaptive, reliable, trustworthy and fair.
The shared anxieties stem from the overwhelming challenge of transitioning into a hyperconnected world. The growth of data, the sophistication of ubiquitous computing and the borderless flow of data are all outstripping the ability to effectively govern on a global basis. We need the means to effectively uphold fundamental principles in ways fit for today’s world.
Yet despite the size and scope of the complexity, it cannot become a reason for inaction. The need for pragmatic and scalable approaches which strengthen transparency, accountability and the empowerment of individuals has become a global priority.
Tools are needed to answer fundamental questions: Who has the data? Where is the data? What is being done with it? All of these uncertainties need to be addressed for meaningful progress to occur.
Objectives need to be set. The benefits and harms of using personal data need to be more precisely defined. The ambiguity surrounding privacy needs to be demystified and placed into a real-world context.
Individuals need to be meaningfully empowered. Better engagement over how data is used by third parties is one opportunity for strengthening trust. Supporting the ability for individuals to use personal data for their own purposes is another area for innovation and growth. But combined, the overall lack of engagement is undermining trust.
Collaboration is essential. The need for interdisciplinary collaboration between technologists, business leaders, social scientists, economists and policy-makers is vital. The complexities for delivering a sustainable and balanced personal data ecosystem require that these multifaceted perspectives are all taken into consideration.
With a new lens for using personal data, progress can occur.

Figure 1: A new lens for strengthening trust

Source: World Economic Forum

Obama Signs Nation's First 'Open Data' Law


William Welsh in InformationWeek: “President Barack Obama enacted the nation’s first open data law, signing into law on May 9 bipartisan legislation that requires federal agencies to publish their spending data in a standardized, machine-readable format that the public can access through USASpending.gov.
The Digital Accountability and Transparency Act of 2014 (S. 994) amends the eight-year-old Federal Funding Accountability and Transparency Act to make available to the public specific classes of federal agency spending data “with more specificity and at a deeper level than is currently reported,” a White House statement said….
Advocacy groups applauded the bipartisan legislation, which is being heralded as the nation’s first open data law and furnishes a legislative mandate for Obama’s one-year-old Open Data Policy.
“The DATA Act will unlock a new public resource that innovators, watchdogs, and citizens can mine for valuable and unprecedented insight into federal spending,” said Hudson Hollister, executive director of the Data Transparency Coalition. “America’s tech sector already has the tools to deliver reliable, standardized, open data. [The] historic victory will put our nation’s open data pioneers to work for the common good.”
The DATA Act requires agencies to establish government-wide standards for financial data, adopt accounting approaches developed by the Recovery Act’s Recovery Accountability and Transparency Board (RATB), and streamline agency reporting requirements.
The DATA Act empowers the Secretary of the Treasury to establish a data analytics center, which is modeled on the successful Recovery Operations Center. The new center will support inspectors general and law enforcement agencies in criminal and other investigations, as well as agency program offices in the prevention of improper payments. Assets of the RATB related to the Recovery Operations Center would transfer to the Treasury Department when the board’s authorization expires.
The treasury secretary and the Director of the White House’s Office of Management and Budget are jointly tasked with establishing the standards required to achieve the goals and objectives of the new statute.
To ensure that agencies comply with the reporting requirements, agency inspectors general will report on the quality and accuracy of the financial data provided to USASpending.gov. The Government Accountability Office also will report on the data quality and accuracy and create a Government-wide assessment of the financial data reported…”
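To make concrete what a “standardized, machine-readable format” buys the public, here is a minimal illustrative sketch in Python. The field names are invented for illustration only and are not the actual DATA Act or USASpending.gov schema:

```python
import json

# Illustrative only: these field names are invented, not the actual
# DATA Act / USASpending.gov schema.
award = {
    "awarding_agency": "Department of Example",
    "recipient_name": "Acme Research LLC",
    "award_amount": 1250000.00,
    "award_date": "2014-05-09",
    "object_class": "Research and Development",
}

# Once every agency emits records with the same fields in a machine-readable
# format, any program can parse, compare, and aggregate them without manual
# cleanup -- which is the point of the statute's standardization mandate.
record = json.dumps(award, sort_keys=True)
print(record)
```

The standardization, not the file format itself, is what lets watchdogs and developers join spending records across agencies.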

New crowdsourcing site like ‘Yelp’ for philanthropy


Vanessa Small in the Washington Post: “Billionaire investor Warren Buffett once said that there is no market test for philanthropy. Foundations with billions in assets often hand out giant grants to charity without critique. One watchdog group wants to change that.
The National Committee for Responsive Philanthropy has created a new Web site that posts public feedback about a foundation’s giving. Think Yelp for the philanthropy sector.
Along with public critiques, the new Web site, Philamplify.org, uploads a comprehensive assessment of a foundation conducted by researchers at the National Committee for Responsive Philanthropy.
The assessment includes a review of the foundation’s goals, strategies, partnerships with grantees, transparency, diversity in its board and how any investments support the mission.
The site also posts recommendations on what would make the foundation more effective in the community. The public can agree or disagree with each recommendation and then provide feedback about the grantmaker’s performance.
People who post to the site can remain anonymous.
NCRP officials hope the site will stir debate about the giving practices of foundations.
“Foundation leaders rarely get honest feedback because no one wants to get on the wrong side of a foundation,” said Lisa Ranghelli, a director at NCRP. “There’s so much we need to do as a society that we just want these philanthropic resources to be used as powerfully as possible and for everyone to feel like they have a voice in how philanthropy operates.”
With nonprofit rating sites such as GuideStar and Charity Navigator, Philamplify is just one more move to create more transparency in the nonprofit sector. But the site might be one of the first to force transparency and public commentary exclusively about the organizations that give grants.
Foundation leaders are open to the site, but say that some grantmakers already use various evaluation methods to improve their strategies.
Groups such as Grantmakers for Effective Organizations and the Center for Effective Philanthropy provide best practices for foundation giving.
The Council on Foundations, an Arlington-based membership organization of foundation groups, offers a list of tools and ideas for foundations to make their giving more effective.
“We will be paying close attention to Philamplify and new developments related to it as the project unfolds,” said Peter Panepento, senior vice president of community and knowledge at the Council on Foundations.
Currently there are three foundations up for review on the Web site: the William Penn Foundation in Philadelphia, which focuses on improving the Greater Philadelphia community; the Robert W. Woodruff Foundation in Atlanta, which gives grants in science and education; and the Lumina Foundation for Education in Indianapolis, which focuses on access to higher learning….”
Officials say Philamplify will focus on the top 100 largest foundations to start. Large foundations would include groups such as the Bill and Melinda Gates Foundation, the Robert Wood Johnson Foundation and Silicon Valley Community Foundation, and the foundations of companies such as Wal-Mart, Wells Fargo, Johnson & Johnson and GlaxoSmithKline.
Although there are concerns about the site’s ability to keep comments objective, grantees hope it will start a dialogue that has been absent in philanthropy.

Believe the hype: Big data can have a big social impact


Annika Small at the Guardian: “Given all the hype around so-called big data at the moment, it would be easy to dismiss it as nothing more than the latest technology buzzword. This would be a mistake, given that the application and interpretation of huge – often publicly available – data sets is already supporting new models of creativity, innovation and engagement.
To date, stories of big data’s progress and successes have tended to come from government and the private sector, but we’ve heard little about its relevance to social organisations. Yet big data can fuel big social change.
It’s already playing a vital role in the charitable sector. Some social organisations are using existing open government data to better target their services, to improve advocacy and fundraising, and to support knowledge sharing and collaboration between different charities and agencies. Crowdsourcing of open data also offers a new way for not-for-profits to gather intelligence, and there is a wide range of freely available online tools to help them analyse the information.
However, realising the potential of big and open data presents a number of technical and organisational challenges for social organisations. Many don’t have the required skills, awareness and investment to turn big data to their advantage. They also tend to lack the access to examples that might help demystify the technicalities and focus on achievable results.
Overcoming these challenges can be surprisingly simple: Keyfund, for example, gained insight into what made for a successful application to their scheme through using a free, online tool to create word clouds out of all the text in their application forms. Many social organisations could use this same technique to better understand the large volume of unstructured text that they accumulate – in doing so, they would be “doing big data” (albeit in a small way). At the other end of the scale, GlobalGiving has developed its own sophisticated set of analytical tools to better understand the 57,000+ “stories” gathered from its network.
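Keyfund’s word-cloud exercise boils down to counting word frequencies in free text. A minimal sketch of that counting step in Python (the sample application texts and stop-word list below are invented for illustration):

```python
import re
from collections import Counter

# A tiny stop-word list; a real analysis would use a fuller one.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "we", "our", "for"}

def top_terms(texts, n=10):
    """Count the most frequent non-trivial words across free-text responses --
    the same frequency data a word-cloud tool visualises by font size."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOP_WORDS and len(word) > 2:
                counts[word] += 1
    return counts.most_common(n)

# Invented example inputs standing in for grant application forms.
applications = [
    "We want to build confidence and leadership skills in young people.",
    "Our project builds teamwork and confidence through music.",
]
print(top_terms(applications, 3))
```

A few dozen lines like these, run over a charity’s accumulated forms, surface the recurring themes that the article describes as “doing big data, albeit in a small way.”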
Innovation often happens when different disciplines collide and it’s becoming apparent that most value – certainly most social value – is likely to be created at the intersection of government, private and social sector data. That could be the combination of data from different sectors, or better “data collaboration” within sectors.
The Housing Association Charitable Trust (HACT) has produced two original tools that demonstrate this. Its Community Insight tool combines data from different sectors, allowing housing providers easily to match information about their stock to a large store of well-maintained open government figures. Meanwhile, its Housing Big Data programme is building a huge dataset by combining stats from 16 different housing providers across the UK. While Community Insight allows each organisation to gain better individual understanding of their communities (measuring well-being and deprivation levels, tracking changes over time, identifying hotspots of acute need), Housing Big Data is making progress towards a much richer network of understanding, providing a foundation for the sector to collaboratively identify challenges and quantify the impact of their interventions.
Alongside this specific initiative from HACT, it’s also exciting to see programmes such as 360giving, which forge connections between a range of private and social enterprises, and lay foundations for UK social investors to be a significant source of information over the next decade. Certainly, The Big Lottery Fund’s publication of open data late last year is a milestone which also highlights how far we have to travel as a sector before we are truly “data-rich”.
At Nominet Trust, we have produced the Social Tech Guide to demonstrate the scale and diversity of social value being generated internationally – much of which is achieved through harnessing the power of big data. From Knewton creating personally tailored learning programmes, to Cellslider using the power of the crowd to advance cancer research, there is no shortage of inspiration. The UN’s Global Pulse programme is another great example, with its focus on how we can combine private and public sources to pin down the size and shape of a social challenge, and calibrate our collective response.
These examples of data-driven social change demonstrate the huge opportunities for social enterprises to harness technology to generate insights, to drive more effective action and to fuel social change. If we are to realise this potential, we need to continue to stretch ourselves as social enterprises and social investors.”

The solutions to all our problems may be buried in PDFs that nobody reads


Christopher Ingraham at the Washington Post: “What if someone had already figured out the answers to the world’s most pressing policy problems, but those solutions were buried deep in a PDF, somewhere nobody will ever read them?
According to a recent report by the World Bank, that scenario is not so far-fetched. The bank is one of those high-minded organizations — Washington is full of them — that release hundreds, maybe thousands, of reports a year on policy issues big and small. Many of these reports are long and highly technical, and just about all of them get released to the world as a PDF report posted to the organization’s Web site.
The World Bank recently decided to ask an important question: Is anyone actually reading these things? They dug into their Web site traffic data and came to the following conclusions: Nearly one-third of their PDF reports had never been downloaded, not even once. Another 40 percent of their reports had been downloaded fewer than 100 times. Only 13 percent had seen more than 250 downloads in their lifetimes. Since most World Bank reports have a stated objective of informing public debate or government policy, this seems like a pretty lousy track record.
Now, granted, the bank isn’t Buzzfeed. It wouldn’t be reasonable to expect thousands of downloads for reports with titles like “Detecting Urban Expansion and Land Tenure Security Assessment: The Case of Bahir Dar and Debre Markos Peri-Urban Areas of Ethiopia.” Moreover, downloads aren’t the be-all and end-all of information dissemination; many of these reports probably get some distribution by e-mail, or are printed and handed out at conferences. Still, it’s fair to assume that many big-idea reports with lofty goals to elevate the public discourse never get read by anyone other than the report writer and maybe an editor or two. Maybe the author’s spouse. Or mom.
I’m not picking on the World Bank here. In fact, they’re to be commended, strongly, for not only taking a serious look at the question but making their findings public for the rest of us to learn from. And don’t think for a second that this is just a World Bank problem. PDF reports are basically the bread and butter of Washington’s huge think tank industry, for instance. Every single one of these groups should be taking a serious look at their own PDF analytics the way the bank has.
Government agencies are also addicted to the PDF. As The Washington Post’s David Fahrenthold reported this week, federal agencies spend thousands of dollars and employee-hours each year producing Congressionally-mandated reports that nobody reads. And let’s not even get started on the situation in academia, where the country’s best and brightest compete for the honor of seeing their life’s work locked away behind some publisher’s paywall.”
Not every policy report is going to be a game-changer, of course. But the sheer numbers dictate that there are probably a lot of really, really good ideas out there that never see the light of day. This seems like an inefficient way for the policy community to do business, but what’s the alternative?
One final irony to ponder: You know that World Bank report, about how nobody reads its PDFs? It’s only available as a PDF. Given the attention it’s receiving, it may also be one of their most-downloaded reports ever.

Working Together in a Networked Economy


Yochai Benkler at MIT Technology Review on Distributed Innovation and Creativity, Peer Production, and Commons in a Networked Economy: “A decade ago, Wikipedia and open-source software were treated as mere curiosities in business circles. Today, these innovations represent a core challenge to how we have thought about property and contract, organization theory and management, over the past 150 years.
For the first time since before the Industrial Revolution, the most important inputs into some of the most important economic sectors are radically distributed in the population, and the core capital resources necessary for these economic activities have become widely available in wealthy countries and among the wealthier populations of emerging economies. This technological feasibility of social production generally, and peer production — the kind of network collaboration of which Wikipedia is the most prominent example — more specifically, is interacting with the high rate of change and the escalating complexity of global innovation and production systems.
Increasingly, in the business literature and practice, we see a shift toward a range of open innovation models that allow more fluid flows of information, talent, and projects across organizations.
Peer production, the most significant organizational innovation that has emerged from Internet-mediated social practice, is large-scale collaborative engagement by groups of individuals who come together to produce products more complex than they could have produced on their own. Organizationally, it combines three core characteristics: decentralization of conception and execution of problems and solutions; harnessing of diverse motivations; and separation of governance and management from property and contract.
These characteristics make peer production highly adept at experimentation, innovation, and adaptation in changing and complex environments. If the Web was innovation on a commons-based model — allocating access and use rights in resources without giving anyone exclusive rights to exclude anyone else — Wikipedia’s organizational innovation is in problem-solving.
Wikipedia’s user-generated content model incorporates knowledge that simply cannot be managed well, either because it is tacit knowledge (possessed by individuals but difficult to communicate to others) or because it is spread among too many people to contract for. The user-generated content model also permits organizations to explore a space of highly diverse interests and tastes that was too costly for traditional organizations to explore.
Peer production allows a diverse range of people, regardless of affiliation, to dynamically assess and reassess available resources, projects, and potential collaborators and to self-assign to projects and collaborations. By leaving these elements to self-organization dynamics, peer production overcomes the lossiness of markets and bureaucracies, and its benefits are sufficient that the practice has been widely adopted by firms and even governments.
In a networked information economy, commons-based practices and open innovation provide an evolutionary model typified by repeated experimentation and adoption of successful adaptation rather than the more traditional, engineering-style approaches to building optimized systems.
Commons-based production and peer production are edge cases of a broader range of openness strategies that trade off the freedom of these two approaches and the manageability and appropriability that many more-traditional organizations seek to preserve. Some firms are using competitions and prizes to diversify the range of people who work on their problems, without ceding contractual control over the project. Many corporations are participating in networks of firms engaging in a range of open collaborative innovation practices with a more manageable set of people, resources, and projects to work with than a fully open-to-the-world project. And the innovation clusters anchored around universities represent an entrepreneurial model at the edge of academia and business, in which academia allows for investment in highly uncertain innovation, and the firms allow for high-risk, high-reward investment models.

To read the full article, click here.

New Technologies in Constitution Making


Special Report of the US Institute of Peace by Jason Gluck and Brendon Ballou: “Summary…

  • Public participation has become an integral part of constitution making, particularly since the end of the Cold War. It has strengthened national unity, built trust between governments and citizens, promoted reconciliation, and helped produce national consensus.
  • Constitution drafters in the past were mostly limited to using official statements and press releases, workshops, meetings, radio and television programs, and printed materials to engage with citizens. These methods were often costly and time-consuming, and failed to reach significant segments of the public.
  • New technologies can increase participation in and the perceived legitimacy of constitutional processes.
  • Constitution drafters have recently begun using the web and mobile phones to educate citizens on the constitution-writing process and engage them on issues of concern. Increasingly constitution writers are also using the web to consult international experts on specific technical issues.
  • Given the rapid growth of the Internet and mobile phone penetration in the developing world, the increased use of new technologies in constitution writing is nearly inevitable.
  • People and organizations considering using these tools should bear four things in mind. New technologies will affect different groups differently. The people who use these tools should respect social and cultural norms. They should keep control of the process in the hands of national actors. Last, they should fit their work within the larger context of the conflict or postconflict environment in which they work.
  • Constitution making is a difficult field, however, and new technologies are tools, not panaceas”

Continued Progress and Plans for Open Government Data


Steve VanRoekel and Todd Park at the White House: “One year ago today, President Obama signed an executive order that made open and machine-readable data the new default for government information. This historic step is helping to make government-held data more accessible to the public and to entrepreneurs while appropriately safeguarding sensitive information and rigorously protecting privacy.
Freely available data from the U.S. government is an important national resource, serving as fuel for entrepreneurship, innovation, scientific discovery, and economic growth. Making information about government operations more readily available and useful is also core to the promise of a more efficient and transparent government. This initiative is a key component of the President’s Management Agenda and our efforts to ensure the government is acting as an engine to expand economic growth and opportunity for all Americans. The Administration is committed to driving further progress in this area, including by designating Open Data as one of our key Cross-Agency Priority Goals.
Over the past few years, the Administration has launched a number of Open Data Initiatives aimed at scaling up open data efforts across the Health, Energy, Climate, Education, Finance, Public Safety, and Global Development sectors. The White House has also launched Project Open Data, designed to share best practices, examples, and software code to assist federal agencies with opening data. These efforts have helped unlock troves of valuable data—that taxpayers have already paid for—and are making these resources more open and accessible to innovators and the public.
Other countries are also opening up their data. In June 2013, President Obama and other G7 leaders endorsed the Open Data Charter, in which the United States committed to publish a roadmap for our nation’s approach to releasing and improving government data for the public.
Building upon the Administration’s Open Data progress, and in fulfillment of the Open Data Charter, today we are excited to release the U.S. Open Data Action Plan. The plan includes a number of exciting enhancements and new data releases planned in 2014 and 2015, including:

  • Small Business Data: The Small Business Administration’s (SBA) database of small business suppliers will be enhanced so that software developers can create tools to help manufacturers more easily find qualified U.S. suppliers, ultimately reducing the transaction costs to source products and manufacture domestically.
  • Smithsonian American Art Museum Collection: The Smithsonian American Art Museum’s entire digitized collection will be opened to software developers to make educational apps and tools. Today, even museum curators do not have easily accessible information about their art collections. This information will soon be available to everyone.
  • FDA Adverse Drug Event Data: Each year, healthcare professionals and consumers submit millions of individual reports on drug safety to the Food and Drug Administration (FDA). These anonymous reports are a critical tool to support drug safety surveillance. Today, this data is only available through limited quarterly reports. But the Administration will soon be making these reports available in their entirety so that software developers can build tools to help pull potentially dangerous drugs off shelves faster than ever before.
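As a sketch of the kind of drug-safety tool the last bullet envisions, one simple approach is to flag any drug whose report count in the latest period far exceeds its earlier average. The data layout, drug names, and threshold below are invented for illustration and are not the FDA’s actual reporting format:

```python
from collections import defaultdict

def flag_spikes(reports, threshold=3.0):
    """Flag drugs whose report count in the latest period is more than
    `threshold` times their average over earlier periods.
    `reports` is a list of (period, drug_name) tuples."""
    counts = defaultdict(lambda: defaultdict(int))  # drug -> period -> count
    for period, drug in reports:
        counts[drug][period] += 1
    flagged = []
    for drug, by_period in counts.items():
        periods = sorted(by_period)
        if len(periods) < 2:          # need a baseline to compare against
            continue
        latest = by_period[periods[-1]]
        baseline = sum(by_period[p] for p in periods[:-1]) / (len(periods) - 1)
        if latest > threshold * baseline:
            flagged.append(drug)
    return flagged

# Invented sample data: drug_b jumps from 1 report to 5 between quarters.
reports = [("2013Q4", "drug_a"), ("2014Q1", "drug_a"),
           ("2013Q4", "drug_b")] + [("2014Q1", "drug_b")] * 5
print(flag_spikes(reports))
```

Real surveillance systems use far more careful statistics, but even this toy version shows why moving from limited quarterly releases to the full report stream matters: the spike is only visible when the counts are available to compute.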

We look forward to implementing the U.S. Open Data Action Plan, and to continuing to work with our partner countries in the G7 to take the open data movement global”.

Can Big Data Stop Wars Before They Happen?


Foreign Policy: “It has been almost two decades exactly since conflict prevention shot to the top of the peace-building agenda, as large-scale killings shifted from interstate wars to intrastate and intergroup conflicts. What could we have done to anticipate and prevent the 100 days of genocidal killing in Rwanda that began in April 1994 or the massacre of thousands of Bosnian Muslims at Srebrenica just over a year later? The international community recognized that conflict prevention could no longer be limited to diplomatic and military initiatives, but that it also requires earlier intervention to address the causes of violence between nonstate actors, including tribal, religious, economic, and resource-based tensions.
For years, even as it was pursued as doggedly as personnel and funding allowed, early intervention remained elusive, a kind of Holy Grail for peace-builders. This might finally be changing. The rise of data on social dynamics and what people think and feel — obtained through social media, SMS questionnaires, increasingly comprehensive satellite information, news-scraping apps, and more — has given the peace-building field hope of harnessing a new vision of the world. But to cash in on that hope, we first need to figure out how to understand all the numbers and charts and figures now available to us. Only then can we expect to predict and prevent events like the recent massacres in South Sudan or the ongoing violence in the Central African Republic.
A growing number of initiatives have tried to make it across the bridge between data and understanding. They’ve ranged from small nonprofit shops of a few people to massive government-funded institutions, and they’ve been moving forward in fits and starts. Few of these initiatives have been successful in documenting incidents of violence actually averted or stopped. Sometimes that’s simply because violence or absence of it isn’t verifiable. The growing literature on big data and conflict prevention today is replete with caveats about “overpromising and underdelivering” and the persistent gap between early warning and early action. In the case of the Conflict Early Warning and Response Mechanism (CEWARN) system in central Africa — one of the earlier and most prominent attempts at early intervention — it is widely accepted that the project largely failed to use the data it retrieved for effective conflict management. It relied heavily on technology to produce large databases, while lacking the personnel to effectively analyze them or take meaningful early action.
To be sure, disappointments are to be expected when breaking new ground. But they don’t have to continue forever. This pioneering work demands not just data and technology expertise. Also critical is cross-discipline collaboration between the data experts and the conflict experts, who know intimately the social, political, and geographic terrain of different locations. What was once a clash of cultures over the value and meaning of metrics when it comes to complex human dynamics needs to morph into collaboration. This is still pretty rare, but if the past decade’s innovations are any prologue, we are hopefully headed in the right direction.
* * *
Over the last three years, the U.S. Defense Department, the United Nations, and the U.S. intelligence community have all launched programs to parse the masses of public data now available, scraping and analyzing details from social media, blogs, market data, and myriad other sources to achieve variations of the same goal: anticipating when and where conflict might arise. The Defense Department’s Information Volume and Velocity program is designed to use “pattern recognition to detect trends in a sea of unstructured data” that would point to growing instability. The U.N.’s Global Pulse initiative’s stated goal is to track “human well-being and emerging vulnerabilities in real-time, in order to better protect populations from shocks.” The Open Source Indicators program at the Intelligence Advanced Research Projects Activity (IARPA), which sits within the Office of the Director of National Intelligence, aims to anticipate “political crises, disease outbreaks, economic instability, resource shortages, and natural disasters.” Each looks to the growing stream of public data to detect significant population-level changes.
Large institutions with deep pockets have always been at the forefront of efforts in the international security field to design systems for improving data-driven decision-making. They’ve followed the lead of large private-sector organizations where data and analytics rose to the top of the corporate agenda. (In that sector, the data revolution is promising “to transform the way many companies do business, delivering performance improvements not seen since the redesign of core processes in the 1990s,” as David Court, a director at consulting firm McKinsey, has put it.)
What really defines the recent data revolution in peace-building, however, is that it is transcending size and resource limitations. It is finding its way to small organizations operating at local levels, drawing on local knowledge and subject-matter experts to parse information from the ground. It is transforming the way peace-builders do business, delivering data-led programs and evidence-based decision-making not seen since the field’s inception in the latter half of the 20th century.
One of the most famous recent examples is the 2013 Kenyan presidential election.
In March 2013, the world was watching and waiting to see whether the vote would produce more of the violence that had left at least 1,300 people dead and 600,000 homeless during and after the 2007 elections. In the intervening years, a web of NGOs worked to set up early-warning and early-response mechanisms to defuse tribal rivalries, party passions, and rumor-mongering. Many of the projects were technology-based initiatives trying to leverage data sources in new ways — including a collaborative effort spearheaded and facilitated by a Kenyan nonprofit called Ushahidi (“witness” in Swahili) that designs open-source data collection and mapping software. The Umati (meaning “crowd”) project used an Ushahidi program to monitor media reports, tweets, and blog posts to detect rising tensions, frustration, calls to violence, and hate speech — and then sorted and categorized it all on one central platform. The information fed into election-monitoring maps built by the Ushahidi team, while mobile-phone provider Safaricom donated 50 million text messages to a local peace-building organization, Sisi ni Amani (“We are Peace”), so that it could act on the information by sending texts — which had been used to incite and fuel violence during the 2007 elections — aimed at preventing violence and quelling rumors.
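The core mechanics of a monitoring platform like this can be sketched in a few lines of code. The following is a deliberately simplified, hypothetical illustration — the category names, keywords, and locations are invented for the example and bear no relation to Umati’s actual taxonomy or software — but it shows the basic pattern: ingest short messages, flag and categorize potentially inflammatory content, and tally alerts by location so responders know where to look first.

```python
# Hypothetical sketch of a keyword-based message-monitoring pipeline:
# ingest (location, text) pairs, flag messages matching illustrative
# categories, and count flagged messages per location.
from collections import Counter

# Illustrative categories and trigger keywords (not Umati's real taxonomy).
CATEGORIES = {
    "call_to_violence": {"attack", "burn", "fight"},
    "rumor": {"heard", "they say", "rumor"},
    "hate_speech": {"vermin", "invaders"},
}

def categorize(message: str) -> list[str]:
    """Return every category whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)]

def alert_counts(messages: list[tuple[str, str]]) -> Counter:
    """Count flagged messages per location; messages are (location, text)."""
    counts = Counter()
    for location, text in messages:
        if categorize(text):
            counts[location] += 1
    return counts

# Invented sample feed for illustration.
feed = [
    ("Dandora", "They say outsiders will attack the station tonight"),
    ("Dandora", "Polling is calm and orderly here"),
    ("Kibera", "Rumor going around about stolen ballots"),
]
print(alert_counts(feed).most_common())
```

Real systems replace the keyword lists with human annotators or trained classifiers, but the flag-categorize-aggregate loop is the same.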
The first challenges came around 10 a.m. on the opening day of voting. “Rowdy youth overpowered police at a polling station in Dandora Phase 4,” one of the informal settlements in Nairobi that had been a site of violence in 2007, wrote Neelam Verjee, programs manager at Sisi ni Amani. The young men were blocking others from voting, and “the situation was tense.”
Sisi ni Amani sent a text blast to its subscribers: “When we maintain peace, we will have joy & be happy to spend time with friends & family but violence spoils all these good things. Tudumishe amani [“Maintain the peace”] Phase 4.” Meanwhile, security officers, who had been called separately, arrived at the scene and took control of the polling station. Voting resumed with little violence. According to interviews collected by Sisi ni Amani after the vote, the message “was sent at the right time” and “helped to calm down the situation.”
In many ways, Kenya’s experience is the story of peace-building today: Data is changing the way professionals in the field think about anticipating events, planning interventions, and assessing what worked and what didn’t. But it also underscores the possibility that we might be edging closer to a time when peace-builders at every level and in all sectors — international, state, and local, governmental and not — will have mechanisms both to know about brewing violence and to save lives by acting on that knowledge.
Three important trends underlie the optimism. The first is the sheer amount of data that we’re generating. In 2012, humans plugged into digital devices managed to generate more data in a single year than over the course of world history — and that rate more than doubles every year. As of 2012, 2.4 billion people — 34 percent of the world’s population — had a direct Internet connection. The growth is most stunning in regions like the Middle East and Africa where conflict abounds; access has grown 2,634 percent and 3,607 percent, respectively, in the last decade.
The growth of mobile-phone subscriptions, which allow their owners to be part of new data sources without a direct Internet connection, is also staggering. In 2013, there were almost as many cell-phone subscriptions in the world as there were people. In Africa, there were 63 subscriptions per 100 people, and there were 105 per 100 people in the Arab states.
The second trend has to do with our expanded capacity to collect and crunch data. Not only do we have more computing power enabling us to produce enormous new data sets — such as the Global Database of Events, Language, and Tone (GDELT) project, which tracks almost 300 million conflict-relevant events reported in the media between 1979 and today — but we are also developing more-sophisticated methodological approaches to using these data as raw material for conflict prediction. New machine-learning methodologies, which use algorithms to make predictions (like a spam filter, but much, much more advanced), can provide “substantial improvements in accuracy and performance” in anticipating violent outbreaks, according to Chris Perry, a data scientist at the International Peace Institute.
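The supervised-learning idea behind such forecasting models can be shown with a toy example. The sketch below is purely illustrative — the features (weekly protest reports and inflammatory posts) and the data are invented, and real GDELT-scale models use far richer inputs and methods — but it captures the shape of the approach: fit a classifier to historical weeks labeled by whether violence followed, then score new weeks for risk.

```python
# Illustrative sketch of supervised conflict forecasting: fit a
# logistic-regression classifier to labeled historical weeks, then
# score new weeks. All data here is invented for the example.
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights (plus bias) by gradient descent."""
    w = [0.0] * (len(X[0]) + 1)  # feature weights + trailing bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1 / (1 + math.exp(-z))   # predicted probability of violence
            err = p - yi                  # gradient of the log-loss w.r.t. z
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            w[-1] -= lr * err
    return w

def predict(w, xi):
    """Probability that a week with features xi precedes violence."""
    z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 / (1 + math.exp(-z))

# Toy history: [protest reports, inflammatory posts] -> violence followed?
X = [[1, 0], [2, 1], [8, 6], [9, 7], [0, 1], [7, 8]]
y = [0, 0, 1, 1, 0, 1]
w = train_logistic(X, y)
print(f"risk, quiet week: {predict(w, [1, 1]):.2f}")
print(f"risk, tense week: {predict(w, [8, 7]):.2f}")
```

The “substantial improvements” Perry describes come from far more capable versions of this loop — richer features, larger histories, and more sophisticated algorithms — but the logic of learning from labeled past events to score the present is the same.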
This brings us to the third trend: the nature of the data itself. When it comes to conflict prevention and peace-building, progress is not simply a question of “more” data, but also different data. For the first time, digital media — user-generated content and online social networks in particular — tell us not just what is going on, but also what people think about the things that are going on. Excitement in the peace-building field centers on the possibility that we can tap into data sets to understand, and preempt, the human sentiment that underlies violent conflict.
Realizing the full potential of these three trends means figuring out how to distinguish between the information, which abounds, and the insights, which are actionable. It is a distinction that is especially hard to make because it requires cross-discipline expertise that combines the wherewithal of data scientists with that of social scientists and the knowledge of technologists with the insights of conflict experts.