Can Big Data Stop Wars Before They Happen?


Foreign Policy: “It has been almost exactly two decades since conflict prevention shot to the top of the peace-building agenda, as large-scale killings shifted from interstate wars to intrastate and intergroup conflicts. What could we have done to anticipate and prevent the 100 days of genocidal killing in Rwanda that began in April 1994 or the massacre of thousands of Bosnian Muslims at Srebrenica just over a year later? The international community recognized that conflict prevention could no longer be limited to diplomatic and military initiatives, but must also include earlier intervention to address the causes of violence between nonstate actors, including tribal, religious, economic, and resource-based tensions.
For years, even as it was pursued as doggedly as personnel and funding allowed, early intervention remained elusive, a kind of Holy Grail for peace-builders. This might finally be changing. The rise of data on social dynamics and what people think and feel — obtained through social media, SMS questionnaires, increasingly comprehensive satellite information, news-scraping apps, and more — has given the peace-building field hope of harnessing a new vision of the world. But to cash in on that hope, we first need to figure out how to understand all the numbers and charts and figures now available to us. Only then can we expect to predict and prevent events like the recent massacres in South Sudan or the ongoing violence in the Central African Republic.
A growing number of initiatives have tried to make it across the bridge between data and understanding. They’ve ranged from small nonprofit shops of a few people to massive government-funded institutions, and they’ve been moving forward in fits and starts. Few of these initiatives have been successful in documenting incidents of violence actually averted or stopped. Sometimes that’s simply because the absence of violence isn’t verifiable. The growing literature on big data and conflict prevention today is replete with caveats about “overpromising and underdelivering” and the persistent gap between early warning and early action. In the case of the Conflict Early Warning and Response Mechanism (CEWARN) in the Horn of Africa — one of the earliest and most prominent attempts at early intervention — it is widely accepted that the project largely failed to use the data it retrieved for effective conflict management. It relied heavily on technology to produce large databases, while lacking the personnel to effectively analyze them or take meaningful early action.
To be sure, disappointments are to be expected when breaking new ground. But they don’t have to continue forever. This pioneering work demands not just data and technology expertise. Also critical is cross-discipline collaboration between the data experts and the conflict experts, who know intimately the social, political, and geographic terrain of different locations. What was once a clash of cultures over the value and meaning of metrics for complex human dynamics needs to morph into collaboration. Such collaboration is still rare, but if the past decade’s innovations are any indication, we are headed in the right direction.
* * *
Over the last three years, the U.S. Defense Department, the United Nations, and the U.S. intelligence community have all launched programs to parse the masses of public data now available, scraping and analyzing details from social media, blogs, market data, and myriad other sources to achieve variations of the same goal: anticipating when and where conflict might arise. The Defense Department’s Information Volume and Velocity program is designed to use “pattern recognition to detect trends in a sea of unstructured data” that would point to growing instability. The U.N.’s Global Pulse initiative’s stated goal is to track “human well-being and emerging vulnerabilities in real-time, in order to better protect populations from shocks.” The Open Source Indicators program at the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence, aims to anticipate “political crises, disease outbreaks, economic instability, resource shortages, and natural disasters.” Each looks to the growing stream of public data to detect significant population-level changes.
Large institutions with deep pockets have always been at the forefront of efforts in the international security field to design systems for improving data-driven decision-making. They’ve followed the lead of large private-sector organizations where data and analytics rose to the top of the corporate agenda. (In that sector, the data revolution is promising “to transform the way many companies do business, delivering performance improvements not seen since the redesign of core processes in the 1990s,” as David Court, a director at consulting firm McKinsey, has put it.)
What really defines the recent data revolution in peace-building, however, is that it is transcending size and resource limitations. It is finding its way to small organizations operating at local levels and using knowledge and subject experts to parse information from the ground. It is transforming the way peace-builders do business, delivering data-led programs and evidence-based decision-making not seen since the field’s inception in the latter half of the 20th century.
One of the most famous recent examples is the 2013 Kenyan presidential election.
In March 2013, the world was watching and waiting to see whether the vote would produce more of the violence that had left at least 1,300 people dead and 600,000 homeless during and after the 2007 elections. In the intervening years, a web of NGOs worked to set up early-warning and early-response mechanisms to defuse tribal rivalries, party passions, and rumor-mongering. Many of the projects were technology-based initiatives trying to leverage data sources in new ways — including a collaborative effort spearheaded and facilitated by a Kenyan nonprofit called Ushahidi (“witness” in Swahili) that designs open-source data collection and mapping software. The Umati (meaning “crowd”) project used an Ushahidi program to monitor media reports, tweets, and blog posts to detect rising tensions, frustration, calls to violence, and hate speech — and then sorted and categorized it all on one central platform. The information fed into election-monitoring maps built by the Ushahidi team, while mobile-phone provider Safaricom donated 50 million text messages to a local peace-building organization, Sisi ni Amani (“We are Peace”), so that it could act on the information by sending texts — which had been used to incite and fuel violence during the 2007 elections — aimed at preventing violence and quelling rumors.
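To make the Umati-style monitoring step concrete, here is a toy sketch of flagging and categorizing messages from a stream. The category names, keyword lists, and sample messages are invented placeholders; Umati’s real process combined human monitors with software, not a bare keyword filter.

```python
# Toy illustration (not Umati's actual method): scan a message stream for
# invented trigger phrases and sort the hits into categories on one "platform".
FLAG_TERMS = {
    "incitement": ["attack them", "drive them out"],
    "hate speech": ["vermin", "cockroaches"],
}

def categorize(message):
    """Return the list of categories a message triggers (empty if none)."""
    text = message.lower()
    return [cat for cat, terms in FLAG_TERMS.items()
            if any(term in text for term in terms)]

stream = [
    "Long queues at the polling station, all calm",
    "They are vermin, drive them out of Phase 4",
]
# Keep only messages that trigger at least one category.
flagged = {msg: cats for msg in stream if (cats := categorize(msg))}
print(flagged)
```

Real systems add language handling, context, and human review on top of this skeleton, but the core sort-and-categorize loop looks much like the above.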
The first challenges came around 10 a.m. on the opening day of voting. “Rowdy youth overpowered police at a polling station in Dandora Phase 4,” one of the informal settlements in Nairobi that had been a site of violence in 2007, wrote Neelam Verjee, programs manager at Sisi ni Amani. The young men were blocking others from voting, and “the situation was tense.”
Sisi ni Amani sent a text blast to its subscribers: “When we maintain peace, we will have joy & be happy to spend time with friends & family but violence spoils all these good things. Tudumishe amani [“Maintain the peace”] Phase 4.” Meanwhile, security officers, who had been called separately, arrived at the scene and took control of the polling station. Voting resumed with little violence. According to interviews collected by Sisi ni Amani after the vote, the message “was sent at the right time” and “helped to calm down the situation.”
In many ways, Kenya’s experience is the story of peace-building today: Data is changing the way professionals in the field think about anticipating events, planning interventions, and assessing what worked and what didn’t. But it also underscores the possibility that we might be edging closer to a time when peace-builders at every level and in all sectors — international, state, and local, governmental and not — will have mechanisms both to know about brewing violence and to save lives by acting on that knowledge.
Three important trends underlie the optimism. The first is the sheer amount of data that we’re generating. In 2012, humans plugged into digital devices generated more data in that single year than over the entire prior course of history — and that rate more than doubles every year. As of 2012, 2.4 billion people — 34 percent of the world’s population — had a direct Internet connection. The growth is most stunning in regions like the Middle East and Africa where conflict abounds; access has grown 2,634 percent and 3,607 percent, respectively, in the last decade.
The growth of mobile-phone subscriptions, which allow their owners to be part of new data sources without a direct Internet connection, is also staggering. In 2013, there were almost as many cell-phone subscriptions in the world as there were people. In Africa, there were 63 subscriptions per 100 people, and there were 105 per 100 people in the Arab states.
The second trend has to do with our expanded capacity to collect and crunch data. Not only do we have more computing power enabling us to produce enormous new data sets — such as the Global Database of Events, Language, and Tone (GDELT) project, which tracks almost 300 million conflict-relevant events reported in the media between 1979 and today — but we are also developing more-sophisticated methodological approaches to using these data as raw material for conflict prediction. New machine-learning methodologies, which use algorithms to make predictions (like a spam filter, but much, much more advanced), can provide “substantial improvements in accuracy and performance” in anticipating violent outbreaks, according to Chris Perry, a data scientist at the International Peace Institute.
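As a concrete (and deliberately tiny) sketch of the kind of machine-learning methodology Perry describes, here is a logistic-regression classifier trained on synthetic weekly event counts. The features ("protest events" and "hate-speech mentions"), the data, and the decision threshold are all invented; real early-warning models built on sources like GDELT use far richer features and far more data.

```python
# Minimal logistic regression by stochastic gradient descent (pure stdlib),
# predicting whether violence follows a week of observed events.
# All data below are synthetic and for illustration only.
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Fit weights (bias first) on feature rows X and 0/1 labels y."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            z = max(-60.0, min(60.0, z))          # avoid overflow in exp
            p = 1 / (1 + math.exp(-z))            # predicted probability
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi, threshold=0.5):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    z = max(-60.0, min(60.0, z))
    return 1 / (1 + math.exp(-z)) > threshold

# Synthetic training data: [protest_events, hate_speech_mentions] per week;
# label 1 means violence followed that week, 0 means it did not.
X = [[2, 5], [1, 3], [3, 4], [9, 40], [8, 35], [12, 50], [2, 2], [10, 45]]
y = [0, 0, 0, 1, 1, 1, 0, 1]
w = train_logreg(X, y)
print(predict(w, [11, 48]))   # a tense week: expect a warning
print(predict(w, [1, 4]))     # a calm week: expect no warning
```

The point of the sketch is the workflow, not the model: label historical weeks, fit a classifier, then score incoming weeks to trigger early warnings.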
This brings us to the third trend: the nature of the data itself. When it comes to conflict prevention and peace-building, progress is not simply a question of “more” data, but also different data. For the first time, digital media — user-generated content and online social networks in particular — tell us not just what is going on, but also what people think about the things that are going on. Excitement in the peace-building field centers on the possibility that we can tap into data sets to understand, and preempt, the human sentiment that underlies violent conflict.
Realizing the full potential of these three trends means figuring out how to distinguish between the information, which abounds, and the insights, which are actionable. It is a distinction that is especially hard to make because it requires cross-discipline expertise that combines the wherewithal of data scientists with that of social scientists and the knowledge of technologists with the insights of conflict experts.

How Britain’s Getting Public Policy Down to a Science


in Governing: “Britain has a bold yet simple plan to do something few U.S. governments do: test the effectiveness of multiple policies before rolling them out. But are American lawmakers willing to listen to facts more than money or politics?

In medicine they do clinical trials to determine whether a new drug works. In business they use focus groups to help with product development. In Hollywood they field test various endings for movies in order to pick the one audiences like best. In the world of public policy? Well, to hear members of the United Kingdom’s Behavioural Insights Team (BIT) characterize it, those making laws and policies in the public sector tend to operate on some well-meaning mix of whim, hunch and dice roll, which all too often leads to expensive and ineffective (if not downright harmful) policy decisions.

….One of the prime BIT examples for why facts and not intuition ought to drive policy hails from the U.S. The much-vaunted “Scared Straight” program that swept the U.S. in the 1990s involved shepherding at-risk youth into maximum security prisons. There, they would be confronted by inmates who, presumably, would do the scaring while the visiting juveniles would do the straightening out. Scared Straight seemed like a good idea — let at-risk youth see up close and personal what was in store for them if they continued their wayward ways. Initially the results reported seemed not just good, but great. Programs were reporting “success rates” as high as 94 percent, which inspired other countries, including the U.K., to adopt Scared Straight-like programs.

The problem was that none of the program evaluations included a control group — a group of kids in similar circumstances with similar backgrounds who didn’t go through a Scared Straight program. There was no way to see how they would fare absent the experience. Eventually, a more scientific analysis of seven U.S. Scared Straight programs was conducted. Half of the at-risk youth in the study were left to their own devices and half were put through the program. This led to an alarming discovery: Kids who went through Scared Straight were more likely to offend than kids who skipped it — or, more precisely, who were spared it. The BIT concluded that “the costs associated with the programme (largely related to the increase in reoffending rates) were over 30 times higher than the benefits, meaning that ‘Scared Straight’ programmes cost the taxpayer a significant amount of money and actively increased crime.”

It was witnessing such random acts of policymaking that in 2010 inspired a small group of political and social scientists to set up the Behavioural Insights Team. Originally a small “skunk works” tucked away in the U.K. Cabinet Office, the team gained traction under Prime Minister David Cameron, who took office evincing a keen interest both in “nonregulatory solutions to policy problems” and in spending public money efficiently, says Owain Service, the team’s managing director. By way of example, Service points to a business support program in the U.K. that would give small and medium-sized businesses up to £3,000 to subsidize advice from professionals. “But there was no proven link between receiving that money and improving business. We thought, ‘Wouldn’t it be better if you could first test the efficacy of some million-pound program or other, rather than just roll it out?’”

The BIT was set up as something of a policy research lab that would scientifically test multiple approaches to a public policy problem on a limited, controlled basis through “randomized controlled trials.” That is, it would look at multiple ways to skin the cat before writing the final cat-skinning manual. By comparing the results of various approaches — efforts to boost tax compliance, say, or to move people from welfare to work — policymakers could use the results of the trials to home in on the most effective practices before full-scale rollout.

The various program and policy options that are field tested by the BIT aren’t pie-in-the-sky surmises, which is where the “behavioural” piece of the equation comes in. Before settling on what options to test, the BIT takes into account basic human behavior — what motivates us and what turns us off — and then develops several approaches to a policy problem based on actual social science and psychology.

The approach seems to work. Take, for example, the issue of recruiting organ donors. It can be a touchy topic, suggesting one’s own mortality while also conjuring up unsettling images of getting carved up and parceled out by surgeons. It’s no wonder, then, that while nine out of 10 people in England profess to support organ donations, fewer than one in three are officially registered as donors. To increase the U.K.’s ratio, the BIT decided to play around with the standard recruitment message posted on a high-traffic gov.uk website that encourages people to sign up with the national Organ Donor Register. Seven different messages that varied in approach and tone were tested, and at the end of the trial, one message emerged clearly as the most effective — so effective, in fact, that the BIT concluded that “if the best-performing message were to be used over the whole year, it would lead to approximately 96,000 extra registrations completed.”
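The arithmetic behind picking a winning message is easy to sketch: compare each variant’s sign-up rate against the control, take the best, and project the lift over a year of traffic. The variant names, page-view counts, and sign-up numbers below are invented (the excerpt does not publish the BIT’s raw figures), chosen only so the projection lands near the roughly 96,000 extra registrations quoted above.

```python
# Hypothetical multi-variant comparison; all figures are invented.
variants = {
    "control":      (1_000_000, 23_000),  # (page views, sign-ups)
    "reciprocity":  (1_000_000, 26_200),  # e.g. "If you needed an organ..."
    "social_norm":  (1_000_000, 24_200),
    "loss_framing": (1_000_000, 24_900),
}

def rate(views_signups):
    views, signups = views_signups
    return signups / views

# Pick the variant with the highest conversion rate.
best = max(variants, key=lambda name: rate(variants[name]))
lift = rate(variants[best]) - rate(variants["control"])

annual_views = 30_000_000          # assumed yearly traffic to the page
extra = round(lift * annual_views)  # projected extra registrations per year
print(best, extra)
```

In a real trial the comparison would also ask whether each lift is statistically distinguishable from the control before declaring a winner.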

According to the BIT there are nine key steps to a defensible randomized controlled trial, the first and second — and the two most obvious — being that there must be at least two policy interventions to compare and that the outcome the policies are meant to influence must be clearly defined. But the “randomized” factor in the equation is critical, and it’s not necessarily easy to achieve.

In BIT-speak, “randomization units” can range from individuals (randomly chosen clients) entering the same welfare office but experiencing different interventions, to different groups of clientele or even different institutions like schools or congregate care facilities. The important point is to be sure that the groups or institutions chosen for comparison are operating in circumstances and with clientele similar enough so that researchers can confidently say that any differences in outcomes are due to different policy interventions and not other socioeconomic or cultural exigencies. There are also minimum sampling sizes that ensure legitimacy — essentially, the more the merrier.
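The mechanics just described can be sketched in a few lines: randomize units into two arms, then check whether the difference in outcomes is larger than chance alone would produce. The compliance counts below are simulated, loosely echoing the roughly 3-percentage-point tax-compliance lift the article quotes elsewhere; they are not BIT data.

```python
# Sketch of an RCT analysis: random assignment plus a two-proportion z-test.
import math
import random

def assign(units, seed=42):
    """Randomize units into two equal-sized arms (control, treatment)."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: both arms share the same underlying rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

control, treatment = assign(list(range(10_000)))

# Simulated outcomes: 33% pay on time after a social-norm letter,
# 30% after the standard letter (invented counts, 5,000 per arm).
z = two_proportion_z(1650, 5000, 1500, 5000)
print(round(z, 2))  # above 1.96, so the lift is unlikely to be chance
```

The “more the merrier” point about sample size shows up directly in the standard-error term: halving `n_a` and `n_b` inflates `se`, and a real 3-point lift can disappear into noise.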

As a matter of popular political culture, the BIT’s approach is known as “nudge theory,” a strand of behavioral economics based on the notion that the economic decisions that human beings make are just that — human — and that by tuning into what motivates and appeals to people we can much better understand why those economic decisions are made. In market economics, of course, nudge theory helps businesses tune into customer motivation. In public policy, nudge theory involves figuring out ways to motivate people to do what’s best for themselves, their families, their neighborhoods and society.

When the BIT started playing around with ways to improve tax compliance, for example, the group discovered a range of strategies to do that, from the very obvious approach — make compliance easy — to the more behaviorally complex. The idea was to key in on the sorts of messages to send to taxpayers that will resonate and improve voluntary compliance. The results can be impressive. “If you just tell taxpayers that the majority of folks in their area pay their taxes on time [versus sending out dunning letters],” says the BIT’s Service, “that adds 3 percent more people who pay, bringing in millions of pounds.” Another randomized controlled trial showed that in pestering citizens to pay various fines, personal text messages were more effective than letters.

There has been pushback on using randomized controlled trials to develop policy. Some see it as a nefarious attempt at mind control on the part of government. “Nudge” to some seems to mean “manipulate.” Service bridles at the criticism. “We’re sometimes referred to as ‘the Nudge Team,’ but we’re the ‘Behavioural Insights Team’ because we’re interested in human behavior, not mind control.”

The essence of the philosophy, Service adds, is “leading people to do the right thing.” For those interested in launching BIT-like efforts without engendering immediate ideological resistance, he suggests focusing first on “non-headline-grabbing” policy areas such as tax collection or organ donation that can be launched through administrative fiat.”

United States federal government use of crowdsourcing grows six-fold since 2011


at E Pluribus Unum: “Citizensourcing and open innovation can work in the public sector, just as crowdsourcing can in the private sector. Around the world, the use of prizes to spur innovation has been booming for years. The United States has been significantly scaling up its use of prizes and challenges to solve grand national challenges since January 2011, when President Obama signed an updated version of the America COMPETES Act into law.
According to the third congressionally mandated report released by the Obama administration today (PDF/Text), the number of prizes and challenges conducted under the America COMPETES Act has increased by 50% since 2012, 85% since 2011, and nearly six-fold overall since 2011. Twenty-five different federal agencies offered prizes under COMPETES in fiscal year 2013, with 87 prize competitions in total. The size of the prize purses has also grown, with 11 challenges offering more than $100,000 in 2013. Nearly half of the prizes conducted in FY 2013 were focused on software, including applications, data visualization tools, and predictive algorithms. Challenge.gov, the award-winning online platform for crowdsourcing national challenges, now has tens of thousands of users who have participated in more than 300 public-sector prize competitions. Beyond the growth in prize numbers and amounts, the Obama administration highlighted four trends in public-sector prize competitions:

  • New models for public engagement and community building during competitions
  • Growth in software and information technology challenges, with nearly 50% of the total prizes in this category
  • More emphasis on sustainability and “creating a post-competition path to success”
  • Increased focus on identifying novel approaches to solving problems

The growth of open innovation in and by the public sector was directly enabled by Congress and the White House, working together for the common good. Congress reauthorized COMPETES in 2010 with an amendment to Section 105 of the act that added a Section 24 on “Prize Competitions,” providing all agencies with the authority to conduct prizes and challenges that only NASA and DARPA had previously enjoyed. The White House Office of Science and Technology Policy (OSTP) has since been guiding the law’s implementation and providing guidance on the use of challenges and prizes to promote open government.
“This progress is due to important steps that the Obama Administration has taken to make prizes a standard tool in every agency’s toolbox,” wrote Cristin Dorgelo, assistant director for grand challenges in OSTP, in a WhiteHouse.gov blog post on engaging citizen solvers with prizes:

In his September 2009 Strategy for American Innovation, President Obama called on all Federal agencies to increase their use of prizes to address some of our Nation’s most pressing challenges. Those efforts have expanded since the signing of the America COMPETES Reauthorization Act of 2010, which provided all agencies with expanded authority to pursue ambitious prizes with robust incentives.
To support these ongoing efforts, OSTP and the General Services Administration have trained over 1,200 agency staff through workshops, online resources, and an active community of practice. And NASA’s Center of Excellence for Collaborative Innovation (COECI) provides a full suite of prize implementation services, allowing agencies to experiment with these new methods before standing up their own capabilities.

Sun Microsystems co-founder Bill Joy famously said that “no matter who you are, most of the smartest people work for someone else.” This rings true in and outside of government. The idea of governments using prizes like this to inspire technological innovation, however, was not born from Web services, social media, or the fertile mind of a Silicon Valley entrepreneur. As the introduction to the third White House prize report notes:

“One of the most famous scientific achievements in nautical history was spurred by a grand challenge issued in the 18th Century. The issue of safe, long distance sea travel in the Age of Sail was of such great importance that the British government offered a cash award of £20,000 to anyone who could invent a way of precisely determining a ship’s longitude. The Longitude Prize, enacted by the British Parliament in 1714, would be worth some £30 million today, but even by that measure the marine chronometer invented by British clockmaker John Harrison might be considered a bargain.”

Centuries later, the Internet, World Wide Web, mobile devices and social media offer the best platforms in history for this kind of approach to solving grand challenges and catalyzing civic innovation, helping public officials and businesses find new ways to solve old problems. When a new idea, technology or methodology challenges and improves upon existing processes and systems, it can improve the lives of citizens or the function of the society that they live within….”

Sharing in a Changing Climate


Helen Goulden in the Huffington Post: “Every month, a social research agency conducts a public opinion survey of 30,000 UK households. As part of this, households are asked what issues they think are the most important: things such as crime, unemployment, inequality, public health etc. Climate change has ranked so consistently low on these surveys that they don’t bother asking any more.
On first glance, it would appear that most people don’t care about a changing climate.
Yet, that’s simply not true. Many people care deeply, but fleetingly – in the same way they may consider their own mortality before getting back to thinking about what to have for tea. And others care, but fail to change their behaviour in a way that’s proportionate to their concerns. Certainly that’s my unhappy stomping ground.
Besides, what choices do we really have? Even the most progressive, large organisations have been glacial in moving towards any real form of sustainability. For many years we have struggled with the Frankenstein-like task of stitching ‘sustainability’ onto existing business and economic models, and the results, I think, speak for themselves.
That the Collaborative Economy presents us with an opportunity – in Napster-like ways – to disrupt and evolve toward something more sustainable is a compelling idea, looking out to a future filled with opportunities to reconfigure how we produce, consume and dispose of the things we want and need to live, work and play.
Whether the journey toward sustainability is short or long, it will be punctuated with a good degree of turbulence, disruption and some largely unpredictable events. How we deal with those events and what role communities, collaboration and technology play may set the framework and tone for how that future evolves. Crises and disruption to our entrenched living patterns present ripe opportunities for innovation and space for adopting new behaviours and practices.
No-one is immune from the impact of erratic and extreme weather events. And if we accept that these events are going to increase in frequency, we must draw the conclusion that emergency state and government resources may be drawn more thinly over time.
Across the world, there is a fairly well organised state and international infrastructure for dealing with emergencies, involving everyone from the Disaster Emergency Committee, the UN, central and local government and municipalities, not for profit organisations and of course, the military. There is a clear reason why we need this kind of state emergency response; I’m not suggesting that we don’t.
But through the rise of open data and mass participation in platforms that share location, identity and inventory, we are creating a new kind of mesh; a social and technological infrastructure that could considerably strengthen our ability to respond to unpredictable events.
In the last few years we have seen a sharp rise in the number of tools and crowdsourcing platforms and open source sensor networks that are focused on observing, predicting or responding to extreme events:
• Apps like ShakeAlert, which emits a warning up to a minute before an earthquake strikes
• Rio’s sensor network, which measures rainfall outside the city and can predict flooding
• The open-source hardware platform Arduino, which is being used to crowdsource weather and pollution data
• Propeller Health, which is using sensors on asthma inhalers to crowdsource pollution hotspots
• Safecast, which was developed for crowdsourcing radiation levels in Japan
Increasingly we have the ability to deploy open source, distributed and networked sensors and devices for capturing and aggregating data that can help us manage our responses to extreme weather (and indeed, other kinds of) events.
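The aggregation step that such sensor networks depend on is simple to illustrate. The sketch below pools crowd-sourced rainfall readings by area and flags areas above an alert threshold; the areas, readings, and 50 mm threshold are invented, and a real system like Rio’s would add timestamps, sensor trust scores, and forecasting.

```python
# Illustrative aggregation of crowd-sourced sensor readings (invented data).
from collections import defaultdict

def aggregate(readings, alert_mm=50.0):
    """readings: (area, millimetres) pairs -> sorted list of areas whose
    average reading exceeds the alert level."""
    by_area = defaultdict(list)
    for area, mm in readings:
        by_area[area].append(mm)
    averages = {area: sum(vals) / len(vals) for area, vals in by_area.items()}
    return sorted(area for area, avg in averages.items() if avg > alert_mm)

readings = [
    ("north", 62.0), ("north", 71.0),   # heavy rain upstream
    ("centre", 12.0), ("centre", 9.0),
    ("south", 55.0), ("south", 30.0),   # average 42.5, below threshold
]
print(aggregate(readings))  # only 'north' crosses the alert level
```

Averaging per area is the crudest possible fusion rule; its virtue here is showing how many independent, noisy reports become one actionable signal.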
Look at platforms like LocalMind and Foursquare. Today, I might be using them to find out whether there’s a free table at a bar or finding out what restaurant my friends are in. But these kinds of social locative platforms present an infrastructure that could be life-saving in any situation where you need to know quickly where to go to get out of trouble. We know that in the wake of disruptive events and disasters, like bombings and riots, people now intuitively and instinctively take to technology to find out what’s happening, where to go and how to co-ordinate response efforts.
During the 2013 BART strike in San Francisco, ventures like LiquidSpace and SideCar enabled people to quickly find alternative places to work, or alternatives to public transport, to mitigate the inconvenience of the strike. The strike was a minor inconvenience compared with the impact of a hurricane and flood, but nevertheless, in both those instances, ventures decided to waive their fees; as did Airbnb when 1,400 New York Airbnb hosts opened their doors to people who had been left homeless by Hurricane Sandy in 2012.
The impulse to help is not new. The matching of people’s offers of help and resources to on-the-ground need, in real time, is.”

Sammies finalists are harnessing technology to help the public


Lisa Rein in the Washington Post: “One team of federal agents led Medicare investigations that resulted in more than 600 convictions in South Florida, recovering hundreds of millions of dollars. Another official boosted access to burial sites for veterans across the country. And one guided an initiative to provide safe drinking water to 5 million people in Uganda and Kenya. These are some of the 33 individuals and teams of federal employees nominated for the 13th annual Samuel J. Heyman Service to America Medals, among the highest honors in government. The 2014 finalists reflect the achievements of public servants in fields from housing to climate change, their work conducted in Washington and locations as far-flung as Antarctica and Alabama…
Many of them have excelled in harnessing new technology in ways that are pushing the limits of what government thought was possible even a few years ago. Michael Byrne of the Federal Communications Commission, for example, put detailed data about broadband availability in the hands of citizens and policymakers using interactive online maps and other visualizations. At the Environmental Protection Agency, Douglas James Norton made water quality data that had never been public available on the Web for citizens, scientists and state agencies.”

Out in the Open: An Open Source Website That Gives Voters a Platform to Influence Politicians


Klint Finley in Wired: “This is the decade of the protest. The Arab Spring. The Occupy Movement. And now the student demonstrations in Taiwan.
Argentine political scientist Pia Mancini says we’re caught in a “crisis of representation.” Most of these protests have popped up in countries that are at least nominally democratic, but so many people are still unhappy with their elected leaders. The problem, Mancini says, is that elected officials have drifted so far from the people they represent that it’s too hard for the average person to be heard.
“If you want to participate in the political system as it is, it’s really costly,” she says. “You need to study politics in university, and become a party member and work your way up. But not every citizen can devote their lives to politics.”

That’s why Mancini started the Net Democracy foundation, a not-for-profit that explores ways of improving civic engagement through technology. The foundation’s first project is something called Democracy OS, an online platform for debating and voting on political issues, and it’s already finding a place in the world. The federal government in Mexico is using this open-source tool to gather feedback on a proposed public data policy, and in Tunisia, a non-government organization called iWatch has adopted it in an effort to give the people a stronger voice.
Mancini’s dissatisfaction with electoral politics stems from her experience working for the Argentine political party Unión Celeste y Blanco from 2010 until 2012. “I saw some practices that I thought were harmful to societies,” she says. Parties were too interested in the appearances of the candidates, and not interested enough in their ideas. Worse, citizens were only consulted for their opinions once every two to four years, meaning politicians could get away with quite a bit in the meantime.
Democracy OS is designed to address that problem by getting citizens directly involved in debating specific proposals when their representatives are actually voting on them. It operates on three levels: one for gathering information about political issues, one for public debate about those issues, and one for actually voting on specific proposals.
Various communities now use a tool called Madison to discuss policy documents, and many activists and community organizations have adopted Loomio to make decisions internally. But Democracy OS aims higher: to provide a common platform for any city, state, or government to actually put proposals to a vote. “We’re able to actually overthrow governments, but we’re not using technology to decide what to do next,” Mancini says. “So the risk is that we create power vacuums that get filled with groups that are already very well organized. So now we need to take it a bit further. We need to decide what democracy for the internet era looks like.”
Image: Courtesy of Net Democracy

Software Shop as Political Party

Today Net Democracy is more than just a software development shop. It’s also a local political party based in Buenos Aires. Two years ago, the foundation started pitching the first prototype of the software to existing political parties as a way for them to gather feedback from constituents, but it didn’t go over well. “They said: ‘Thank you, this is cool, but we’re not interested,’” Mancini remembers. “So we decided to start our own political party.”
The Net Democracy Party hasn’t won any seats yet, but it promises that if it does, it will use Democracy OS to enable any local registered voter to tell party representatives how to vote. Mancini says the party representatives will always vote the way constituents tell them to vote through the software.

‘We’re not saying everyone should vote on every issue all the time. What we’re saying is that issues should be open for everyone to participate.’

She also uses the term “net democracy” to refer to the type of democracy that the party advocates, a form of delegative democracy that attempts to strike a balance between representative democracy and direct democracy. “We’re not saying everyone should vote on every issue all the time,” Mancini explains. “What we’re saying is that issues should be open for everyone to participate.”
Individuals will also be able to delegate their votes to other people. “So, if you’re not comfortable voting on health issues, you can delegate to someone else to vote for you in that area,” she says. “That way people with a lot of experience in an issue, like a community leader who doesn’t have lobbyist access to the system, can build more political capital.”
She envisions a future where decisions are made on two levels. Decisions that involve specific knowledge — macroeconomics, tax reforms, judiciary regulations, penal code, etc. — or that affect human rights are delegated “upwards” to representatives. But then decisions related to local issues — transport, urban development, city codes, etc. — can be delegated “downwards” to the citizens.
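The per-issue delegation model Mancini describes can be sketched in a few lines of Python. Everything here (the function names, the data layout) is an illustration of the idea, not Democracy OS’s actual code:

```python
# Minimal sketch of per-topic vote delegation ("liquid democracy"):
# a direct vote always wins; otherwise the delegation chain is followed.

def resolve_vote(voter, topic, votes, delegations, seen=None):
    """Follow a voter's delegation chain for a topic until a direct vote is found."""
    seen = seen or set()
    if voter in seen:                      # delegation cycle: nobody voted directly
        return None
    seen.add(voter)
    if (voter, topic) in votes:            # direct vote takes precedence
        return votes[(voter, topic)]
    delegate = delegations.get((voter, topic))
    if delegate is None:                   # abstained and did not delegate
        return None
    return resolve_vote(delegate, topic, votes, delegations, seen)

def tally(voters, topic, votes, delegations):
    """Count the resolved choice of every voter who voted or delegated."""
    counts = {}
    for v in voters:
        choice = resolve_vote(v, topic, votes, delegations)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts
```

With `votes = {("ana", "health"): "yes"}` and `delegations = {("ben", "health"): "ana"}`, tallying over `["ana", "ben", "carla"]` yields `{"yes": 2}`: Ben’s vote follows the chain to Ana, and Carla, who neither voted nor delegated, abstains.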

The Secret Ballot Conundrum

Ensuring the integrity of the votes gathered via Democracy OS will be a real challenge. The U.S. non-profit organization Black Box Voting has long criticized electronic voting schemes as inherently flawed. “Our criticism of internet voting is that it is not transparent and cannot be made publicly transparent,” says Black Box Voting founder Bev Harris. “With transparency for election integrity defined as public ability to see and authenticate four things: who can vote, who did vote, vote count, and chain of custody.”
In short, there’s no known way to do a secret ballot online because any system for verifying that the votes were counted properly will inevitably reveal who voted for what.
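Harris’s point can be made concrete with a toy example. A hash-chained public ledger lets anyone recount the votes and detect tampering, but only because every ballot is published next to the voter’s identity. This is a sketch of the trade-off, not of any real voting system:

```python
# Sketch of a publicly auditable vote ledger: anyone can verify and recount,
# but the very data that makes it verifiable reveals who voted for what.

import hashlib

def record_vote(ledger, voter_id, choice):
    """Append a ballot plus a digest chaining it to the previous entry."""
    prev = ledger[-1]["digest"] if ledger else ""
    entry = {"voter": voter_id, "choice": choice}
    entry["digest"] = hashlib.sha256(
        (prev + voter_id + choice).encode()
    ).hexdigest()
    ledger.append(entry)

def audit(ledger):
    """Recompute the chain and recount; returns None if any entry was altered."""
    prev = ""
    counts = {}
    for e in ledger:
        expected = hashlib.sha256((prev + e["voter"] + e["choice"]).encode()).hexdigest()
        if e["digest"] != expected:
            return None                    # tampering detected
        counts[e["choice"]] = counts.get(e["choice"], 0) + 1
        prev = e["digest"]
    return counts
```

Note that `audit` only works because each entry pairs a voter with a choice in the clear; hiding that pairing would break the recount.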
Democracy OS deals with that by simply doing away with secret ballots. For now, the Net Democracy party will have people sign up for Democracy OS accounts in person with their government-issued ID cards. “There is a lot to be said about how anonymity allows you to speak more freely,” Mancini says. “But in the end, we decided to prioritize the reliability, accountability and transparency of the system. We believe that by making our arguments and decisions public we are fostering a civic culture. We will be more responsible for what we say and do if it’s public.”
But making binding decisions based on these online discussions would be problematic, since they would skew not just towards those tech savvy enough to use the software, but also towards those willing to have their names attached to their votes publicly. Fortunately, the software isn’t yet being used to gather real votes, just to gather public feedback….”

The Universe Is Programmable. We Need an API for Everything


Keith Axline in Wired: “Think about it like this: In the Book of Genesis, God is the ultimate programmer, creating all of existence in a monster six-day hackathon.
Or, if you don’t like Biblical metaphors, you can think about it in simpler terms. Robert Moses was a programmer, shaping and re-shaping the layout of New York City for more than 50 years. Drug developers are programmers, twiddling enzymes to cure what ails us. Even pickup artists and conmen are programmers, running social scripts on people to elicit certain emotional results.

Everyone is becoming a programmer. The next step is to realize that everything is a program.

The point is that, much like the computer on your desk or the iPhone in your hand, the entire Universe is programmable. Just as you can build apps for your smartphones and new services for the internet, so can you shape and re-shape almost anything in this world, from landscapes and buildings to medicines and surgeries to, well, ideas — as long as you know the code.
That may sound like little more than an exercise in semantics. But it’s actually a meaningful shift in thinking. If we look at the Universe as programmable, we can start treating it like software. In short, we can improve almost everything we do with the same simple techniques that have remade the creation of software in recent years, things like APIs, open source code, and the massively popular code-sharing service GitHub.
The great thing about the modern software world is that you don’t have to build everything from scratch. Apple provides APIs, or application programming interfaces, that can help you build apps on their devices. And though Tim Cook and company only give you part of what you need, you can find all sorts of other helpful tools elsewhere, thanks to the open source software community.
The same is true if you’re building, say, an online social network. There are countless open source software tools you can use as the basic building blocks — many of them open sourced by Facebook. If you’re creating almost any piece of software, you can find tools and documentation that will help you fashion at least a small part of it. Chances are, someone has been there before, and they’ve left some instructions for you.
Now we need to discover and document the APIs for the Universe. We need a standard way of organizing our knowledge and sharing it with the world at large, a problem for which programmers already have good solutions. We need to give everyone a way of handling tasks the way we build software. Such a system, if it can ever exist, is still years away — decades at the very least — and the average Joe is hardly ready for it. But this is changing. Nowadays, programming skills and the DIY ethos are slowly spreading throughout the population. Everyone is becoming a programmer. The next step is to realize that everything is a program.

What Is an API?

The API may sound like just another arcane computer acronym. But it’s really one of the most profound metaphors of our time, an idea hiding beneath the surface of each piece of tech we use every day, from iPhone apps to Facebook. To understand what APIs are and why they’re useful, let’s look at how programmers operate.
If I’m building a smartphone app, I’m gonna need — among so many other things — a way of validating a signup form on a webpage to make sure a user doesn’t, say, mistype their email address. That validation has nothing to do with the guts of my app, and it’s surprisingly complicated, so I don’t really want to build it from scratch. Apple doesn’t help me with that, so I start looking on the web for software frameworks, plugins, Software Developer Kits (SDKs) — anything that will help me build my signup tool.
Hopefully, I’ll find one. And if I do, chances are it will include some sort of documentation or “Readme file” explaining how this piece of code is supposed to be used so that I can tailor it to my app. This Readme file should contain installation instructions as well as the API for the code. Basically, an API lays out the code’s inputs and outputs. It shows me what I have to send the code and what it will spit back out. It shows how I bolt it onto my signup form. So the name is actually quite explanatory: Application Programming Interface. An API is essentially an instruction manual for a piece of software.
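A toy version of that signup validator makes the idea concrete. The docstring plays the role of the Readme: it states the input and the output, which is all an API really is. The function name and the deliberately crude email pattern are illustrative assumptions, not any real framework’s code:

```python
# A toy signup validator whose docstring doubles as its "API":
# it names the input it expects and the output it returns.

import re

def validate_signup(form):
    """Input:  a dict with an 'email' key.
    Output: (True, "") if the address looks valid,
            (False, <error message>) otherwise."""
    email = form.get("email", "")
    # A deliberately simple pattern; real validators are far more thorough.
    if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return True, ""
    return False, "Please enter a valid email address."
```

A caller never needs to read the function body: knowing what goes in and what comes out is enough to bolt it onto a signup form.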
Now, let’s combine this with the idea that everything is an application: molecules, galaxies, dogs, people, emotional states, abstract concepts like chaos. If you do something to any of these things, they’ll respond in some way. Like software, they have inputs and outputs. What we need to do is discover and document their APIs.
We aren’t dealing with software code here. Inputs and outputs can themselves be anything. But we can closely document these inputs and their outputs — take what we know about how we interface with something and record it in a standard way so that it can be used over and over again. We can create a Readme file for everything.
We can start by doing this in small, relatively easy ways. How about APIs for our cities? New Zealand just open sourced aerial images of about 95 percent of its land. We could write APIs for what we know about building in those areas, from properties of the soil to seasonal weather patterns to zoning laws. All this knowledge exists but it hasn’t been organized and packaged for use by anyone who is interested. And we could go still further — much further.
For example, between the science community, the medical industry and the billions of human experiences, we could probably have a pretty extensive API mapped out of the human stomach — one that I’d love to access when I’m up at 3am with abdominal pains. Maybe my microbiome is out of whack and there’s something I have on-hand that I could ingest to make it better. Or what if we cracked the API for the signals between our eyes and our brain? We wouldn’t need to worry about looking like Glassholes to get access to always-on augmented reality. We could just get an implant. Yes, these APIs will be slightly different for everyone, but that brings me to the next thing we need.

A GitHub for Everything

We don’t just need a Readme for the Universe. We need a way of sharing this Readme and changing it as need be. In short, we need a system like GitHub, the popular online service that lets people share and collaborate on software code.
Let’s go back to the form validator I found earlier. Say I made some modifications to it that I think other programmers would find useful. If the validator is on GitHub, I can create a separate but related version — a fork — that people can find and contribute to, in the same way I first did with the original software.

This creates a tree of knowledge, with giant groups of people creating and merging branches, working on their small section and then giving it back to the whole.

GitHub not only enables this collaboration, but every change is logged into separate versions. If someone were so inclined, they could go back and replay the building of the validator, from the very first save all the way up to my changes and whoever changes it after me. This creates a tree of knowledge, with giant groups of people creating and merging branches, working on their small section and then giving it back to the whole.
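The replay idea is easy to sketch: if every change is recorded, any past state can be rebuilt by applying the changes in order and stopping early. This toy log of line-level edits is an illustration of the principle, not how git actually stores history:

```python
# Minimal sketch of version replay: a history is an ordered list of changes,
# and any past state is recovered by applying them in order and stopping early.

def apply_change(lines, change):
    """change is ("insert", index, line) or ("delete", index)."""
    lines = list(lines)                    # never mutate the caller's copy
    if change[0] == "insert":
        lines.insert(change[1], change[2])
    elif change[0] == "delete":
        del lines[change[1]]
    return lines

def replay(changes, up_to=None):
    """Rebuild the file from an empty state; up_to limits how many changes apply."""
    lines = []
    for i, change in enumerate(changes):
        lines = apply_change(lines, change)
        if up_to is not None and i + 1 == up_to:
            break
    return lines
```

Replaying the full log gives the current file; replaying a prefix of it gives any earlier version, which is exactly the "from the very first save" walk described above.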
We should be able to funnel all existing knowledge of how things work — not just software code — into a similar system. That way, if my brain-eye interface needs to be different, I (or my personal eye technician) can “fork” the API. In a way, this sort of thing is already starting to happen. People are using GitHub to share government laws, policy documents, Gregorian chants, and the list goes on. The ultimate goal should be to share everything.
Yes, this idea is similar to what you see on sites like Wikipedia, but the stuff that’s shared on Wikipedia doesn’t let you build much more than another piece of text. We don’t just need to know what things are. We need to know how they work in ways that let us operate on them.

The Open Source Epiphany

If you’ve never programmed, all this can sound a bit, well, abstract. But once you enter the coding world, getting a loose grasp on the fundamentals of programming, you instantly see the utility of open source software. “Oooohhh, I don’t have to build this all myself,” you say. “Thank God for the open source community.” Because so many smart people contribute to open source, it helps get the less knowledgeable up to speed quickly. Those acolytes then pay it forward with their own contributions once they’ve learned enough.
Today, more and more people are jumping on this train. More and more people are becoming programmers of some shape or form. It wasn’t so long ago that basic knowledge of HTML was considered specialized geek speak. But now, it’s a common requirement for almost any desk job. Gone are the days when kids made fun of their parents for not being able to set the clock on the VCR. Now they get mocked for mis-cropping their Facebook profile photos.
These changes are all part of the tech takeover of our lives that is trickling down to the masses. It’s like how the widespread use of cars brought a general mechanical understanding of engines to dads everywhere. And this general increase in aptitude is accelerating along with the technology itself.
Steps are being taken to make programming a skill that most kids get early in school along with general reading, writing, and math. In the not too distant future, people will need to program in some form for their daily lives. Imagine the world before the average person knew how to write a letter, or divide two numbers, compared to now. A similar leap is around the corner…”

EU: Have your say on Future and Emerging Technologies!


European Commission: “Do you have a great idea for a new technology that is not possible yet? Do you think it can become realistic by putting Europe’s best minds on the task? Share your view and the European Commission – via the Future and Emerging Technologies (FET) programme (@fet_eu, #FET_eu) – can make it happen. The consultation is open until 15 June 2014.

The aim of the public consultation launched today is to identify promising and potentially game-changing directions for future research in any technological domain.

Vice-President of the European Commission Neelie Kroes (@NeelieKroesEU), responsible for the Digital Agenda, said: “From protecting the environment to curing disease – the choices and investments we make today will make a difference to the jobs and lives we enjoy tomorrow. Researchers and entrepreneurs, innovators, creators or interested bystanders – whoever you are, I hope you will take this opportunity to take part in determining Europe’s future”.

The consultation is organised as a series of discussions, in which contributors can suggest ideas for a new FET Proactive initiative or discuss the 9 research topics identified in the previous consultation to determine whether they are still relevant today.

The ideas collected via the public consultation will contribute to future FET work programmes, notably the next one (2016-17). This participative process has already been used to draft the current work programme (2014-15).

Background

€2.7 billion will be invested in Future and Emerging Technologies (FET) under the new research programme Horizon 2020 (#H2020, 2014-2020). This represents a nearly threefold increase in budget compared to the previous research programme, FP7. FET actions are part of the Excellent Science pillar of Horizon 2020.

The objective of FET is to foster radical new technologies by exploring novel and high-risk ideas building on scientific foundations. By providing flexible support to goal-oriented and interdisciplinary collaborative research, and by adopting innovative research practices, FET research seizes the opportunities that will deliver long-term benefit for our society and economy.

FET Proactive initiatives aim to mobilise interdisciplinary communities around promising long-term technological visions. They build up the necessary base of knowledge and know-how for kick-starting a future technology line that will benefit Europe’s future industries and citizens in the decades to come. FET Proactive initiatives complement the FET Open scheme, which funds small-scale projects on future technology, and FET Flagships, which are large-scale initiatives to tackle ambitious interdisciplinary science and technology goals.

FET previously launched an online consultation (2012-13) to identify research topics for the current work programme. Around 160 ideas were submitted. The European Commission did an exhaustive analysis and produced an informal clustering of these ideas into broad topics. 9 topics were identified as candidates for a FET Proactive initiative. Three are included in the current programme, namely Global Systems Science; Knowing, Doing, Being; and Quantum Simulation.”

The false promise of the digital humanities


Adam Kirsch in the New Republic: “The humanities are in crisis again, or still. But there is one big exception: digital humanities, which is a growth industry. In 2009, the nascent field was the talk of the Modern Language Association (MLA) convention: “among all the contending subfields,” a reporter wrote about that year’s gathering, “the digital humanities seem like the first ‘next big thing’ in a long time.” Even earlier, the National Endowment for the Humanities created its Office of Digital Humanities to help fund projects. And digital humanities continues to go from strength to strength, thanks in part to the Mellon Foundation, which has seeded programs at a number of universities with large grants; most recently, $1 million to the University of Rochester to create a graduate fellowship…

Despite all this enthusiasm, the question of what the digital humanities is has yet to be given a satisfactory answer. Indeed, no one asks it more often than the digital humanists themselves. The recent proliferation of books on the subject, from sourcebooks and anthologies to critical manifestos, is a sign of a field suffering an identity crisis, trying to determine what, if anything, unites the disparate activities carried on under its banner. “Nowadays,” writes Stephen Ramsay in Defining Digital Humanities, “the term can mean anything from media studies to electronic art, from data mining to edutech, from scholarly editing to anarchic blogging, while inviting code junkies, digital artists, standards wonks, transhumanists, game theorists, free culture advocates, archivists, librarians, and edupunks under its capacious canvas.”

Within this range of approaches, we can distinguish a minimalist and a maximalist understanding of digital humanities. On the one hand, it can be simply the application of computer technology to traditional scholarly functions, such as the editing of texts. An exemplary project of this kind is the Rossetti Archive created by Jerome McGann, an online repository of texts and images related to the career of Dante Gabriel Rossetti: this is essentially an open-ended, universally accessible scholarly edition. To others, however, digital humanities represents a paradigm shift in the way we think about culture itself, spurring a change not just in the medium of humanistic work but also in its very substance. At their most starry-eyed, some digital humanists, such as the authors of the jargon-laden manifesto and handbook Digital_Humanities, want to suggest that the addition of the high-powered adjective to the long-suffering noun signals nothing less than an epoch in human history: “We live in one of those rare moments of opportunity for the humanities, not unlike other great eras of cultural-historical transformation such as the shift from the scroll to the codex, the invention of movable type, the encounter with the New World, and the Industrial Revolution.”

The language here is the language of scholarship, but the spirit is the spirit of salesmanship: the very same kind of hyperbolic, hard-sell approach we are so accustomed to hearing about the Internet, or about Apple’s latest utterly revolutionary product. Fundamental to this kind of persuasion is the undertone of menace, the threat of historical illegitimacy and obsolescence. Here is the future, we are made to understand: we can either get on board or stand athwart it and get run over. The same kind of revolutionary rhetoric appears again and again in the new books on the digital humanities, from writers with very different degrees of scholarly commitment and intellectual sophistication.

In Uncharted, Erez Aiden and Jean-Baptiste Michel, the creators of the Google Ngram Viewer, an online tool that allows you to map the frequency of words in all the printed matter digitized by Google, talk up the “big data revolution”: “Its consequences will transform how we look at ourselves…. Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.” These breathless prophecies are just hype. But at the other end of the spectrum, even McGann, one of the pioneers of what used to be called “humanities computing,” uses the high language of inevitability: “Here is surely a truth now universally acknowledged: that the whole of our cultural inheritance has to be recurated and reedited in digital forms and institutional structures.”
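Whatever one makes of the rhetoric, what the Ngram Viewer computes is simple in principle: the relative frequency of a word per year across a corpus. A toy sketch of that computation (the real tool runs over millions of scanned books; the function name and data shape here are assumptions for illustration):

```python
# Toy sketch of an ngram-style query: for each year, the fraction of
# corpus words (case-insensitive) that match the queried word.

from collections import Counter

def ngram_frequency(corpus_by_year, word):
    """corpus_by_year: {year: list of words}. Returns {year: relative frequency}."""
    target = word.lower()
    freqs = {}
    for year, words in corpus_by_year.items():
        counts = Counter(w.lower() for w in words)
        freqs[year] = counts[target] / len(words) if words else 0.0
    return freqs
```

Plotting those per-year fractions over time is, in essence, the chart the Ngram Viewer draws.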

If ever there were a chance to see the ideological construction of reality at work, digital humanities is it. Right before our eyes, options are foreclosed and demands enforced; a future is constructed as though it were being discovered. By now we are used to this process, since over the last twenty years the proliferation of new technologies has totally discredited the idea of opting out of “the future.”…

The promise and perils of giving the public a policy ‘nudge’


Nicholas Biddle and Katherine Curchin at the Conversation: “…These behavioural insights are more than just intellectual curiosities. They are increasingly being used by policymakers inspired by Richard Thaler and Cass Sunstein’s bestselling manifesto for libertarian paternalism, Nudge.
The British and New South Wales governments have set up behavioural insights units. Many other governments around Australia are following their lead.
Most of the attention so far has been on how behavioural insights could be employed to make people slimmer, greener, more altruistic or better savers. However, it’s time we started thinking and talking about the impact these ideas could have on social policy – programs and payments that aim to reduce disadvantage and narrow divergence in opportunity.
While applying behavioural insights can potentially improve the efficiency and effectiveness of social policy, unscrupulous or poorly thought-through applications could be disturbing and damaging. It would appear behavioural insights inspired the UK government’s so-called “Nudge Unit” to force job seekers to undergo bogus personality tests – on pain of losing benefits if they refused.
The idea seemed to be that because people readily believe that any vaguely worded combination of character traits applies to them – which is why people connect with their star sign – the results of a fake psychometric test can dupe them into believing they have a go-getting personality.
In our view, this is not how behavioural insights should be applied. This UK example seems to be a particularly troubling case of the use of “nudges” in conjunction with, rather than instead of, coercion. This is the worst of both worlds: not libertarian paternalism, but authoritarian paternalism.
Ironically, this instance betrays a questionable understanding of behavioural insights or at the very least a very short-term focus. Research tells us that co-operative behaviour depends on the perception of fairness and successful framing requires trust.
Dishonest interventions, which make the government seem both unfair and untrustworthy, should have the longer-term effect of undermining its ability to elicit cooperation and successfully frame information.
Some critics have assumed nudge is inherently conservative or neoliberal. Yet these insights could inform progressive reform in many ways.
For example, taking behavioural insights seriously would encourage a redesign of employment services. There is plenty of scope for thinking more rigorously about how job seekers’ interactions with employment services unintentionally inhibit their motivation to search for work.

Beware accidental nudges

More than just a nudge here or there, behavioural insights can be used to reflect on almost all government decisions. Too often governments accidentally nudge citizens in the opposite direction to where they want them to go.
Take the disappointing take-up of the Matched Savings Scheme, which is part of New Income Management in the Northern Territory. It matches welfare recipients’ savings dollar-for-dollar up to a maximum of A$500 and is meant to get people into the habit of saving regularly.
No doubt saving is extremely hard for people on very low incomes. But another reason so few people embraced the savings program may be a quirk in its design: people had to save money out of their non-income-managed funds, but the $500 reward they received from the government went into their income-managed account.
To some people this appears to have signalled the government’s bad faith. It said to them: even if you demonstrate your responsibility with money, we still won’t trust you.
The Matched Savings Scheme was intended to be a carrot, not a stick. It was supposed to complement the coercive element of income management by giving welfare recipients an incentive to improve their budgeting. Instead it was perceived as an invitation to welfare recipients to be complicit in their own humiliation.
The promise of an extra $500 would have been a strong lure for Homo economicus, but it wasn’t for Homo sapiens. People out of work or on income support are no more or less rational than merchant bankers or economics professors. Their circumstances and choices are different though.
The idiosyncrasies of human decision-making don’t mean that the human brain is fundamentally flawed. Most of the biases that we mentioned earlier are adaptive. But they do mean that policy makers need to appreciate how we differ from rational utility maximisers. Real humans are not worse than economic man. We’re just different and we deserve policies made for Homo sapiens, not Homo economicus.”