Rethinking Personal Data: A New Lens for Strengthening Trust


New report from the World Economic Forum: “As we look at the dynamic change shaping today’s data-driven world, one thing is becoming increasingly clear. We really do not know that much about it. Polarized along competing but fundamental principles, the global dialogue on personal data is inchoate and pulled in a variety of directions. It is complicated, conflated and often fueled by emotional reactions more than informed understandings.
The World Economic Forum’s global dialogue on personal data seeks to cut through this complexity. A multi-year initiative with global insights from the highest levels of leadership from industry, governments, civil society and academia, this work aims to articulate an ascendant vision of the value a balanced and human-centred personal data ecosystem can create.
Yet despite these aspirations, there is a crisis in trust. Concerns are voiced from a variety of viewpoints at a variety of scales. Industry, government and civil society are all uncertain how to create a personal data ecosystem that is adaptive, reliable, trustworthy and fair.
The shared anxieties stem from the overwhelming challenge of transitioning into a hyperconnected world. The growth of data, the sophistication of ubiquitous computing and the borderless flow of data are all outstripping the ability to effectively govern on a global basis. We need the means to effectively uphold fundamental principles in ways fit for today’s world.
Yet the size and scope of this complexity cannot become a reason for inaction. The need for pragmatic and scalable approaches that strengthen transparency, accountability and the empowerment of individuals has become a global priority.
Tools are needed to answer fundamental questions: Who has the data? Where is the data? What is being done with it? All of these uncertainties need to be addressed for meaningful progress to occur.
Objectives need to be set. The benefits and harms of using personal data need to be more precisely defined. The ambiguity surrounding privacy needs to be demystified and placed into a real-world context.
Individuals need to be meaningfully empowered. Better engagement over how data is used by third parties is one opportunity for strengthening trust. Supporting individuals’ ability to use personal data for their own purposes is another area for innovation and growth. For now, though, the overall lack of such engagement is undermining trust.
Collaboration is essential. Interdisciplinary collaboration among technologists, business leaders, social scientists, economists and policy-makers is vital. Delivering a sustainable and balanced personal data ecosystem requires that all of these multifaceted perspectives be taken into consideration.
With a new lens for using personal data, progress can occur.”
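To make the report’s three questions (who has the data, where is it, and what is being done with it) concrete, here is a minimal sketch of a machine-readable provenance record. It is our illustration, not the Forum’s proposal, and every name and field in it is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Hypothetical ledger entry answering: who has the data,
    where is it, and what is being done with it."""
    data_subject: str   # whose data this is
    holder: str         # who has the data
    location: str       # where the data is stored
    purpose: str        # what is being done with it
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# One entry per access or transfer builds an auditable trail.
ledger = [
    ProvenanceRecord("user-184", "AcmeAnalytics", "eu-west-1", "ad targeting"),
    ProvenanceRecord("user-184", "AcmeAnalytics", "us-east-2", "fraud detection"),
]
for rec in ledger:
    print(rec.holder, rec.location, rec.purpose)
```

Even a toy ledger like this illustrates the design choice at stake: transparency becomes a property of the data itself rather than of after-the-fact disclosure.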

Figure 1: A new lens for strengthening trust
Source: World Economic Forum

New crowdsourcing site like ‘Yelp’ for philanthropy


Vanessa Small in the Washington Post: “Billionaire investor Warren Buffett once said that there is no market test for philanthropy. Foundations with billions in assets often hand out giant grants to charity without critique. One watchdog group wants to change that.
The National Committee for Responsive Philanthropy has created a new Web site that posts public feedback about a foundation’s giving. Think Yelp for the philanthropy sector.
Along with public critiques, the new Web site, Philamplify.org, uploads a comprehensive assessment of a foundation conducted by researchers at the National Committee for Responsive Philanthropy.
The assessment includes a review of the foundation’s goals, strategies, partnerships with grantees, transparency, diversity in its board and how any investments support the mission.
The site also posts recommendations on what would make the foundation more effective in the community. The public can agree or disagree with each recommendation and then provide feedback about the grantmaker’s performance.
People who post to the site can remain anonymous.
NCRP officials hope the site will stir debate about the giving practices of foundations.
“Foundation leaders rarely get honest feedback because no one wants to get on the wrong side of a foundation,” said Lisa Ranghelli, a director at NCRP. “There’s so much we need to do as a society that we just want these philanthropic resources to be used as powerfully as possible and for everyone to feel like they have a voice in how philanthropy operates.”
With nonprofit rating sites such as GuideStar and Charity Navigator, Philamplify is just one more move to create more transparency in the nonprofit sector. But the site might be one of the first to force transparency and public commentary exclusively about the organizations that give grants.
Foundation leaders are open to the site, but say that some grantmakers already use various evaluation methods to improve their strategies.
Groups such as Grantmakers for Effective Organizations and the Center for Effective Philanthropy provide best practices for foundation giving.
The Council on Foundations, an Arlington-based membership organization of foundation groups, offers a list of tools and ideas for foundations to make their giving more effective.
“We will be paying close attention to Philamplify and new developments related to it as the project unfolds,” said Peter Panepento, senior vice president of community and knowledge at the Council on Foundations.
Currently there are three foundations up for review on the Web site: the William Penn Foundation in Philadelphia, which focuses on improving the Greater Philadelphia community; the Robert W. Woodruff Foundation in Atlanta, which gives grants in science and education; and the Lumina Foundation for Education in Indianapolis, which focuses on access to higher learning….”
Officials say Philamplify will focus at first on the 100 largest foundations, including groups such as the Bill and Melinda Gates Foundation, the Robert Wood Johnson Foundation and the Silicon Valley Community Foundation, as well as the foundations of companies such as Wal-Mart, Wells Fargo, Johnson & Johnson and GlaxoSmithKline.
Although there are concerns about the site’s ability to keep comments objective, grantees hope it will start a dialogue that has been absent in philanthropy.

Can Big Data Stop Wars Before They Happen?


Foreign Policy: “It has been almost two decades exactly since conflict prevention shot to the top of the peace-building agenda, as large-scale killings shifted from interstate wars to intrastate and intergroup conflicts. What could we have done to anticipate and prevent the 100 days of genocidal killing in Rwanda that began in April 1994 or the massacre of thousands of Bosnian Muslims at Srebrenica just over a year later? The international community recognized that conflict prevention could no longer be limited to diplomatic and military initiatives, but that it also requires earlier intervention to address the causes of violence between nonstate actors, including tribal, religious, economic, and resource-based tensions.
For years, even as it was pursued as doggedly as personnel and funding allowed, early intervention remained elusive, a kind of Holy Grail for peace-builders. This might finally be changing. The rise of data on social dynamics and what people think and feel — obtained through social media, SMS questionnaires, increasingly comprehensive satellite information, news-scraping apps, and more — has given the peace-building field hope of harnessing a new vision of the world. But to cash in on that hope, we first need to figure out how to understand all the numbers and charts and figures now available to us. Only then can we expect to predict and prevent events like the recent massacres in South Sudan or the ongoing violence in the Central African Republic.
A growing number of initiatives have tried to make it across the bridge between data and understanding. They’ve ranged from small nonprofit shops of a few people to massive government-funded institutions, and they’ve been moving forward in fits and starts. Few of these initiatives have been successful in documenting incidents of violence actually averted or stopped. Sometimes that’s simply because violence, or the absence of it, isn’t verifiable. The growing literature on big data and conflict prevention today is replete with caveats about “overpromising and underdelivering” and the persistent gap between early warning and early action. In the case of the Conflict Early Warning and Response Mechanism (CEWARN) system in the Horn of Africa — one of the earlier and most prominent attempts at early intervention — it is widely accepted that the project largely failed to use the data it retrieved for effective conflict management. It relied heavily on technology to produce large databases, while lacking the personnel to effectively analyze them or take meaningful early action.
To be sure, disappointments are to be expected when breaking new ground. But they don’t have to continue forever. This pioneering work demands not just data and technology expertise. Also critical is cross-discipline collaboration between the data experts and the conflict experts, who know intimately the social, political, and geographic terrain of different locations. What was once a clash of cultures over the value and meaning of metrics when it comes to complex human dynamics needs to morph into collaboration. This is still pretty rare, but if the past decade’s innovations are any prologue, we are hopefully headed in the right direction.
* * *
Over the last three years, the U.S. Defense Department, the United Nations, and the CIA have all launched programs to parse the masses of public data now available, scraping and analyzing details from social media, blogs, market data, and myriad other sources to achieve variations of the same goal: anticipating when and where conflict might arise. The Defense Department’s Information Volume and Velocity program is designed to use “pattern recognition to detect trends in a sea of unstructured data” that would point to growing instability. The U.N.’s Global Pulse initiative’s stated goal is to track “human well-being and emerging vulnerabilities in real-time, in order to better protect populations from shocks.” The Open Source Indicators program at the CIA’s Intelligence Advanced Research Projects Activity aims to anticipate “political crises, disease outbreaks, economic instability, resource shortages, and natural disasters.” Each looks to the growing stream of public data to detect significant population-level changes.
Large institutions with deep pockets have always been at the forefront of efforts in the international security field to design systems for improving data-driven decision-making. They’ve followed the lead of large private-sector organizations where data and analytics rose to the top of the corporate agenda. (In that sector, the data revolution is promising “to transform the way many companies do business, delivering performance improvements not seen since the redesign of core processes in the 1990s,” as David Court, a director at consulting firm McKinsey, has put it.)
What really defines the recent data revolution in peace-building, however, is that it is transcending size and resource limitations. It is finding its way to small organizations operating at local levels and using knowledge and subject experts to parse information from the ground. It is transforming the way peace-builders do business, delivering data-led programs and evidence-based decision-making not seen since the field’s inception in the latter half of the 20th century.
One of the most famous recent examples is the 2013 Kenyan presidential election.
In March 2013, the world was watching and waiting to see whether the vote would produce more of the violence that had left at least 1,300 people dead and 600,000 homeless during and after the 2007 elections. In the intervening years, a web of NGOs worked to set up early-warning and early-response mechanisms to defuse tribal rivalries, party passions, and rumor-mongering. Many of the projects were technology-based initiatives trying to leverage data sources in new ways — including a collaborative effort spearheaded and facilitated by a Kenyan nonprofit called Ushahidi (“witness” in Swahili) that designs open-source data collection and mapping software. The Umati (meaning “crowd”) project used an Ushahidi program to monitor media reports, tweets, and blog posts to detect rising tensions, frustration, calls to violence, and hate speech — and then sorted and categorized it all on one central platform. The information fed into election-monitoring maps built by the Ushahidi team, while mobile-phone provider Safaricom donated 50 million text messages to a local peace-building organization, Sisi ni Amani (“We are Peace”), so that it could act on the information by sending texts — which had been used to incite and fuel violence during the 2007 elections — aimed at preventing violence and quelling rumors.
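Umati’s actual workflow combined trained human monitors with Ushahidi’s platform, but as a toy illustration of the sorting-and-categorizing step, a first-pass keyword filter might look like the following sketch (the categories and terms here are invented, not Umati’s):

```python
# Toy illustration only: real projects like Umati relied on
# trained human monitors, not keyword matching alone.
CATEGORIES = {
    "hate_speech": ["eliminate them", "vermin"],
    "rumor": ["i heard", "they say"],
    "call_to_violence": ["take up arms", "attack them"],
}

def categorize(post: str) -> list[str]:
    """Return every category whose terms appear in the post."""
    text = post.lower()
    hits = [cat for cat, terms in CATEGORIES.items()
            if any(term in text for term in terms)]
    return hits or ["neutral"]

print(categorize("They say the results were rigged"))  # ['rumor']
```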
The first challenges came around 10 a.m. on the opening day of voting. “Rowdy youth overpowered police at a polling station in Dandora Phase 4,” one of the informal settlements in Nairobi that had been a site of violence in 2007, wrote Neelam Verjee, programs manager at Sisi ni Amani. The young men were blocking others from voting, and “the situation was tense.”
Sisi ni Amani sent a text blast to its subscribers: “When we maintain peace, we will have joy & be happy to spend time with friends & family but violence spoils all these good things. Tudumishe amani [“Maintain the peace”] Phase 4.” Meanwhile, security officers, who had been called separately, arrived at the scene and took control of the polling station. Voting resumed with little violence. According to interviews collected by Sisi ni Amani after the vote, the message “was sent at the right time” and “helped to calm down the situation.”
In many ways, Kenya’s experience is the story of peace-building today: Data is changing the way professionals in the field think about anticipating events, planning interventions, and assessing what worked and what didn’t. But it also underscores the possibility that we might be edging closer to a time when peace-builders at every level and in all sectors — international, state, and local, governmental and not — will have mechanisms both to know about brewing violence and to save lives by acting on that knowledge.
Three important trends underlie the optimism. The first is the sheer amount of data that we’re generating. In 2012, humans plugged into digital devices managed to generate more data in a single year than over the course of world history — and that rate more than doubles every year. As of 2012, 2.4 billion people — 34 percent of the world’s population — had a direct Internet connection. The growth is most stunning in regions like the Middle East and Africa where conflict abounds; access has grown 2,634 percent and 3,607 percent, respectively, in the last decade.
The growth of mobile-phone subscriptions, which allow their owners to be part of new data sources without a direct Internet connection, is also staggering. In 2013, there were almost as many cell-phone subscriptions in the world as there were people. In Africa, there were 63 subscriptions per 100 people, and there were 105 per 100 people in the Arab states.
The second trend has to do with our expanded capacity to collect and crunch data. Not only do we have more computing power enabling us to produce enormous new data sets — such as the Global Database of Events, Language, and Tone (GDELT) project, which tracks almost 300 million conflict-relevant events reported in the media between 1979 and today — but we are also developing more-sophisticated methodological approaches to using these data as raw material for conflict prediction. New machine-learning methodologies, which use algorithms to make predictions (like a spam filter, but much, much more advanced), can provide “substantial improvements in accuracy and performance” in anticipating violent outbreaks, according to Chris Perry, a data scientist at the International Peace Institute.
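To make the spam-filter analogy concrete, here is a minimal sketch, not any institute’s actual model, of the kind of supervised classifier Perry describes: it learns from labeled region-weeks and outputs a probability of violence. All features and data below are synthetic.

```python
# Minimal sketch, not any organization's actual model: a
# classifier that learns violence risk from simple weekly
# event-count features. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features per region-week: [protest count, media hostility
# score, prior-violence flag] -- invented for illustration.
X = rng.random((500, 3))
y = (0.8 * X[:, 0] + 1.2 * X[:, 2]
     + rng.normal(0, 0.2, 500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
new_week = np.array([[0.9, 0.4, 1.0]])
print("risk of violence:", model.predict_proba(new_week)[0, 1])
```

A real system would use far richer features and validation, but the shape of the task is exactly the spam-filter analogy: labeled history in, probability out.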
This brings us to the third trend: the nature of the data itself. When it comes to conflict prevention and peace-building, progress is not simply a question of “more” data, but also different data. For the first time, digital media — user-generated content and online social networks in particular — tell us not just what is going on, but also what people think about the things that are going on. Excitement in the peace-building field centers on the possibility that we can tap into data sets to understand, and preempt, the human sentiment that underlies violent conflict.
Realizing the full potential of these three trends means figuring out how to distinguish between the information, which abounds, and the insights, which are actionable. It is a distinction that is especially hard to make because it requires cross-discipline expertise that combines the wherewithal of data scientists with that of social scientists and the knowledge of technologists with the insights of conflict experts.

The false promise of the digital humanities


Adam Kirsch in the New Republic: “The humanities are in crisis again, or still. But there is one big exception: digital humanities, which is a growth industry. In 2009, the nascent field was the talk of the Modern Language Association (MLA) convention: “among all the contending subfields,” a reporter wrote about that year’s gathering, “the digital humanities seem like the first ‘next big thing’ in a long time.” Even earlier, the National Endowment for the Humanities created its Office of Digital Humanities to help fund projects. And digital humanities continues to go from strength to strength, thanks in part to the Mellon Foundation, which has seeded programs at a number of universities with large grants; most recently, $1 million to the University of Rochester to create a graduate fellowship.

Despite all this enthusiasm, the question of what the digital humanities is has yet to be given a satisfactory answer. Indeed, no one asks it more often than the digital humanists themselves. The recent proliferation of books on the subject, from sourcebooks and anthologies to critical manifestos, is a sign of a field suffering an identity crisis, trying to determine what, if anything, unites the disparate activities carried on under its banner. “Nowadays,” writes Stephen Ramsay in Defining Digital Humanities, “the term can mean anything from media studies to electronic art, from data mining to edutech, from scholarly editing to anarchic blogging, while inviting code junkies, digital artists, standards wonks, transhumanists, game theorists, free culture advocates, archivists, librarians, and edupunks under its capacious canvas.”

Within this range of approaches, we can distinguish a minimalist and a maximalist understanding of digital humanities. On the one hand, it can be simply the application of computer technology to traditional scholarly functions, such as the editing of texts. An exemplary project of this kind is the Rossetti Archive created by Jerome McGann, an online repository of texts and images related to the career of Dante Gabriel Rossetti: this is essentially an open-ended, universally accessible scholarly edition. To others, however, digital humanities represents a paradigm shift in the way we think about culture itself, spurring a change not just in the medium of humanistic work but also in its very substance. At their most starry-eyed, some digital humanists, such as the authors of the jargon-laden manifesto and handbook Digital_Humanities, want to suggest that the addition of the high-powered adjective to the long-suffering noun signals nothing less than an epoch in human history: “We live in one of those rare moments of opportunity for the humanities, not unlike other great eras of cultural-historical transformation such as the shift from the scroll to the codex, the invention of movable type, the encounter with the New World, and the Industrial Revolution.”

The language here is the language of scholarship, but the spirit is the spirit of salesmanship: the very same kind of hyperbolic, hard-sell approach we are so accustomed to hearing about the Internet, or about Apple’s latest utterly revolutionary product. Fundamental to this kind of persuasion is the undertone of menace, the threat of historical illegitimacy and obsolescence. Here is the future, we are made to understand: we can either get on board or stand athwart it and get run over. The same kind of revolutionary rhetoric appears again and again in the new books on the digital humanities, from writers with very different degrees of scholarly commitment and intellectual sophistication.

In Uncharted, Erez Aiden and Jean-Baptiste Michel, the creators of the Google Ngram Viewer, an online tool that allows you to map the frequency of words in all the printed matter digitized by Google, talk up the “big data revolution”: “Its consequences will transform how we look at ourselves…. Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.” These breathless prophecies are just hype. But at the other end of the spectrum, even McGann, one of the pioneers of what used to be called “humanities computing,” uses the high language of inevitability: “Here is surely a truth now universally acknowledged: that the whole of our cultural inheritance has to be recurated and reedited in digital forms and institutional structures.”
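For readers who have not used the tool, the Ngram Viewer’s core computation is ordinary relative word-frequency counting over a dated corpus. A toy version, with a two-entry corpus invented here purely for illustration, fits in a few lines:

```python
# Toy version of the Ngram Viewer's core computation: relative
# word frequency by year over a dated corpus (invented here).
from collections import Counter

corpus = {
    1900: "the soul of man under socialism feeds the soul",
    2000: "the data of man under capitalism feeds the data",
}

def frequency(word: str) -> dict[int, float]:
    """Share of each year's tokens that equal `word`."""
    result = {}
    for year, text in corpus.items():
        counts = Counter(text.split())
        result[year] = counts[word] / sum(counts.values())
    return result

print(frequency("soul"))  # {1900: 0.222..., 2000: 0.0}
```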

If ever there were a chance to see the ideological construction of reality at work, digital humanities is it. Right before our eyes, options are foreclosed and demands enforced; a future is constructed as though it were being discovered. By now we are used to this process, since over the last twenty years the proliferation of new technologies has totally discredited the idea of opting out of “the future.”…

A New Map Gives New Yorkers the Power to Report Traffic Hazards


Sarah Goodyear in the Atlantic/Cities: “Ask any New Yorker about unsafe conditions on the city’s streets. Go ahead, ask.
You might want to sit down. This is going to take a while.
New York City’s streets are some of the most heavily used public spaces in the nation. A lot of the time, the swirling mass of users share space remarkably well. Every second in New York, it sometimes seems, a thousand people just barely miss colliding, thanks to a finely honed sense of self-preservation and spatial awareness.
The dark side is that sometimes, they do collide. These famously chaotic and contested streets are often life-threatening. Drivers routinely exceed the 30 mph speed limit, run red lights, and fail to yield to pedestrians in crosswalks. Pedestrians step out into traffic, sometimes without looking at what’s coming their way. Bicyclists ride the wrong way up one-way streets.
In recent years, the city has begun to address the problem, mainly through design solutions like better bike infrastructure, pedestrian refuges, and crosswalk countdown clocks. Still, last year, 286 New Yorkers died in traffic crashes.
Mayor Bill de Blasio vowed almost as soon as he was sworn into office in January to pursue an initiative called Vision Zero, which aims to eliminate traffic fatalities through a combination of design, enforcement, and education.
A new tool in the Vision Zero effort was unveiled earlier this week: a map of the city on which people can log their observations and complaints about chronically unsafe conditions. The map offers a menu of icons including red-light running, double-parking, failure to yield, and speeding, and allows users to plot them on a map of the city’s streets. Sites where pedestrian fatalities have occurred since 2009 are marked, and the most dangerous streets in each borough for people on foot are colored red.

The map, a joint project of DOT, the NYPD, and the Taxi and Limousine Commission, has only been live for a couple of days. Already, it is speckled with dozens of multicolored dots indicating problem areas. (Full disclosure: The map was designed by OpenPlans, a nonprofit affiliated with Streetsblog, where I worked several years ago.)…”
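The article does not describe the map’s data model, but a crowdsourced hazard report of the kind described reduces to a small record type; the sketch below is our own guess at its shape, not OpenPlans’ actual schema:

```python
# Our own guess at a minimal report record, sketched from the
# article's description; not OpenPlans' actual schema.
from dataclasses import dataclass
from collections import Counter

@dataclass
class HazardReport:
    kind: str      # e.g. "speeding", "failure_to_yield"
    lat: float
    lon: float
    comment: str = ""

reports = [
    HazardReport("speeding", 40.7282, -73.7949),
    HazardReport("red_light_running", 40.7282, -73.7949),
    HazardReport("speeding", 40.6782, -73.9442),
]

# Tally the most-reported hazards to prioritize responses.
print(Counter(r.kind for r in reports).most_common())
```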

Is Participatory Budgeting Real Democracy?


Anna Clark in NextCity: “Drawing from a practice pioneered 25 years ago in Porto Alegre, Brazil, and imported to North America via progressive leaders in Toronto and Quebec, participatory budgeting cracks open the closed-door process of fiscal decision-making in cities, letting citizens vote on exactly how government money is spent in their community. It’s an auspicious departure from traditional ways of allocating tax dollars anywhere, let alone in Chicago, which has long been known for deeply entrenched machine politics. As Alderman Joe Moore puts it, in Chicago, “so many decisions are made from the top down.”
Participatory budgeting works pretty simply in the 49th Ward. Instead of Moore deciding how to spend $1.3 million in “menu money” that is allotted annually to each of Chicago’s 50 council members for capital improvements, the councilman opens up a public process to determine how to spend $1 million of the allotment. The remaining $300,000 is socked away in the bank for emergencies and cost overruns.
And the unusual vote on $1 million in menu money is open to a wider swath of the community than your standard Election Day: you don’t have to be a citizen to cast a ballot, and the voting age is sixteen.
Thanks to the process, Rogers Park can now boast of a new community garden, dozens of underpass murals, heating shelters at three transit stations, hundreds of tree plantings, an outdoor shower at Loyola Park, a $110,000 dog park, and eye-catching “You Are Here” neighborhood information boards at transit station entrances.

Another prominent supporter of participatory budgeting? The White House. In December—about eight months after Joe Moore met with President Barack Obama about bringing participatory budgeting to the federal level—PB became an option for determining how to spend community development block-grant money from the Department of Housing and Urban Development. The Obama administration also declared that, in a yet-to-be-detailed partnership, it will help create tools that can be used for participatory budgeting on a local level.
All this activity has so far added up to $45 million in tax dollars allocated to 203 voter-approved projects across the country. Some 46,000 people and 500 organizations nationwide have been part of the decision-making, according to the nonprofit Participatory Budgeting Project.
….
But to fulfill this vision, the process needs resources behind it—enough funds for projects to demonstrate a visible community benefit, and ample capacity from the facilitators of the process (whether it’s district officials or city hall) to truly reach out to the community. Without intention and capacity, PB risks duplicating the process of elections for ordinary representative democracy, where white middle- and upper-class voters are far more likely to vote and therefore enjoy an outsized influence on their neighborhood.

Participatory budgeting works differently for every city. In Porto Alegre, Brazil, where the process was created a generation ago by the Workers’ Party to give disadvantaged people a stronger voice in government, as many as 50,000 people vote on how to spend public money each year. More than $700 million has been funneled through the process since its inception. Vallejo, Calif., embraced participatory budgeting in 2012 after emerging from bankruptcy as part of its citywide reinvention. In its first PB vote in May 2013, 3,917 residents voted over the course of a week at 13 polling locations. That translated into four percent of the city’s eligible voters—a tiny number, but a much higher percentage than in previous PB processes in Chicago and New York.
But the 5th Ward in Hyde Park, a South Side neighborhood that’s home to the University of Chicago, dropped PB in December, citing low turnout in neighborhood assemblies and residents who felt the process was too much work to be worthwhile. “They said it was very time consuming, a lot of meetings, and that they thought the neighborhood groups that they had were active enough to do it without having all of the expenses that were associated with it,” Alderman Leslie Hairston told the Hyde Park Herald. In 2013, its first year with participatory budgeting, the 5th Ward held a PB vote that saw only 100 ballots cast.
Josh Lerner of the Participatory Budgeting Project says low turnout is a problem that can be solved through outreach and promotion. “It is challenging to do this without capacity,” he said. Internationally, according to Lerner, PB is part of a city administration, with a whole office coordinating the process. Without the backing from City Hall in Porto Alegre, participatory budgeting would have a hard time attracting the tens of thousands who now count themselves as part of the process. And even with that support, the 50,000 participants represent under four percent of the city’s population of 1.4 million.

So what’s next for participatory budgeting in Rogers Park and beyond?
Well, first off, Rahm Emanuel’s new Manager of Participatory Budgeting will be responsible for supporting council districts if and when they opt to go participatory. There won’t be a requirement to do so, but if a district wishes to follow the 49th, it will have high-level backup from City Hall.
But this new manager—as well as Chicago’s aldermen and engaged citizens—must understand that there is no one-size-fits-all formula for participatory budgeting. The process must be adapted to the unique needs and culture of each district if it is to resonate with locals. And timing is key for rolling out the process.
While still in the hazy early days, federal support through the new White House initiative may also prove crucial in streamlining the participatory budgeting process, easing the burden on local leaders and citizens, and ultimately generating better participation—and, therefore, better on-the-ground results in communities around the country.
One of the key lessons of participatory budgeting—as with democracy more broadly—is that efficiency is not the highest value in the public sphere. It would be much easier and more cost-effective for an alderman to return to the old days and simply check off the boxes for where he or she thinks menu money should be spent. “We could sign off on menu money in a couple hours, a couple days,” Vandercook said. By choosing the participatory path, aldermen effectively create more work for themselves. They risk low rates of participation and the possibility that winning projects may not be the most worthy. Scalability, too, is a problem — the larger the community served by the process, the more difficult it is to ensure that both the process and the resulting projects reflect the needs of the entire community.
Nonetheless, participatory budgeting serves a harder-to-measure purpose that may well be, in the final accounting, more important. It is a profound civic education for citizens, who dig into both the limits and possibilities of public money. They experience what their elected leaders must navigate every day. But it’s also a civic education for council members and city staff, who may find that they are engaging with those they represent more than they ever have before, learning about what they value most. Owen Burgh, chief of staff for Alderman John Arena in Chicago’s 45th Ward, told the Participatory Budgeting Project, “I was really surprised by the amazing knowledge base we have among our volunteers. So many of our volunteers came to the process with a background where they understood some principles of traffic management, community development and urban planning. It was very refreshing. Usually, in an alderman’s office, people contact us to fix an isolated problem. Through this process, we discussed not just what needed to be fixed but what we wanted our community to be.”
The participatory budgeting process expands the scope and depth of civic spaces in the community, where elected leaders work with—not for—residents. Even for those who do not show up to vote, there is an empowerment that comes simply in knowing that they could; the sincere invitation to participate matters, whether or not it is accepted…”

The Transformative Impact of Data and Communication on Governance


Steven Livingston at Brookings: “How do digital technologies affect governance in areas of limited statehood – places and circumstances characterized by the absence of state provisioning of public goods and the enforcement of binding rules with a monopoly of legitimate force?  In the first post in this series I introduced the limited statehood concept and then described the tremendous growth in mobile telephony, GIS, and other technologies in the developing world.   In the second post I offered examples of the use of ICT in initiatives intended to fill at least some of the governance vacuum created by limited statehood.  With mobile phones, for example, farmers are informed of market conditions, have access to liquidity through M-Pesa and similar mobile money platforms….
This brings to mind another type of ICT governance initiative. Rather than fill in for or even displace the state, some ICT initiatives can strengthen governance capacity. Digital government – the use of digital technology by the state itself – is one important possibility. Other initiatives strengthen the state by exerting pressure. Countries with weak governance sometimes take the form of extractive states, which cater to the needs of an elite, leaving the majority of the population in poverty and without basic public services. This is what Daron Acemoglu and James A. Robinson call extractive political and economic institutions. Inclusive states, on the other hand, are pluralistic, bound by the rule of law, respectful of property rights, and, in general, accountable. Accountability mechanisms such as a free press and competitive multiparty elections are instrumental in discouraging extractive institutions. What ICT-based initiatives might lend a hand in strengthening accountability? We can point to three examples.

Example One: Using ICT to Protect Human Rights

Nonstate actors now use commercial, high-resolution remote sensing satellites to monitor weapons programs and human rights violations.  Amnesty International’s Remote Sensing for Human Rights offers one example, and Satellite Sentinel offers another.  Both use imagery from DigitalGlobe, an American remote sensing and geospatial content company.   Other organizations have used commercially available remote sensing imagery to monitor weapons proliferation.  The Institute for Science and International Security, a Washington-based NGO, revealed the Iranian nuclear weapons program in 2003 using commercial satellite imagery…

Example Two: Crowdsourcing Election Observation

Others have used mobile phones and GIS to crowdsource election observation. For the 2011 elections in Nigeria, the Community Life Project, a civil society organization, created ReclaimNaija, an election-process monitoring system that relied on GIS and amateur observers with mobile phones. Each red dot on the map below represents an aggregation of geo-located incidents reported to the ReclaimNaija platform. On the live map, clicking on a dot disaggregates the reports, eventually taking the reader to individual reports. Rigorous statistical analysis of ReclaimNaija results and the elections suggests it contributed to the effectiveness of the election process.

ReclaimNaija: Election Incident Reporting System Map
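The description above implies a simple aggregation step: nearby geo-located reports are grouped into a single dot, and clicking the dot recovers the individual reports. A minimal sketch of that step, our own simplification rather than ReclaimNaija’s code, might look like this (incident descriptions invented):

```python
# Our own simplification of the aggregation the map implies,
# not ReclaimNaija's actual code: group nearby reports into
# one "dot" per grid cell, then disaggregate on demand.
from collections import defaultdict

reports = [
    ("ballot shortage", 6.52, 3.37),
    ("voter intimidation", 6.53, 3.38),
    ("late opening", 9.06, 7.49),
]

dots = defaultdict(list)
for description, lat, lon in reports:
    cell = (round(lat, 1), round(lon, 1))  # roughly 11 km squares
    dots[cell].append(description)

# Clicking a dot on the live map corresponds to this lookup.
for cell, incidents in dots.items():
    print(cell, len(incidents), incidents)
```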

Example Three: Using Genetic Analysis to Identify War Crimes

In recent years, more powerful computers have led to major breakthroughs in biomedical science.  The reduction in cost of analyzing the human genome has actually outpaced Moore’s Law.  This has opened up new possibilities for the use of genetic analysis in forensic anthropology.   In Guatemala, the Balkans, Argentina, Peru and in several other places where mass executions and genocides took place, forensic anthropologists are using genetic analysis to find evidence that is used to hold the killers – often state actors – accountable…”

How Civil Society Organizations Close the Gap between Transparency and Accountability


In a research note in the current issue of Governance, Albert Van Zyl poses “the most critical question for activists and scholars of accountability: How and when does transparency lead to greater accountability?”  Van Zyl’s note looks particularly at the role of civil society organizations (CSOs) in demanding and using government budget information, drawing on case studies of CSO activity in eleven countries in Africa, Latin America and South Asia.  Accountability is achieved, Van Zyl suggests, when CSOs are active and closely engaged with legislators, auditors, and other formal oversight institutions.  But research is still needed on the kinds of engagement that are most likely to enhance accountability.  Read the research note.

Public Procurement as a Means to Stimulate Innovation for a Better World: A Matter of Knowledge Management


Paper by Max Rolfstam: “Public procurement is the central sourcing mechanism evoked to directly secure the delivery of public services. It may however also be used to achieve certain social outcomes and secondary effects. This paper attempts to contribute with knowledge regarding a particular secondary effect, the role of public procurement as a means to stimulate innovation. The paper discusses public procurement of innovation as a knowledge policy in the learning economy, and scrutinizes it from a knowledge management perspective, eventually to connect aspects of learning to institutional levels.”

Rethinking Institutions and Organizations


Essay by Royston Greenwood, C.R. Hinings and Dave Whetten in the Journal of Management Studies: “In this essay we argue that institutional scholarship has become overly concerned with explaining institutions and institutional processes, notably at the level of the organization field, rather than with using them to explain and understand organizations. Especially missing is an attempt to gain a coherent, holistic account of how organizations are structured and managed. We also argue that when institutional theory does give attention to organizations it inappropriately treats them as though they are the same, or at least as though any differences are irrelevant for purposes of theory. We propose a return to the study of organizations with an emphasis upon comparative analysis, and suggest the institutional logics perspective as an appropriate means for doing so.”