Soft Data and Public Policy: Can Social Media Offer Alternatives to Official Statistics in Urban Policymaking?


Marta Severo, Amel Feredj and Alberto Romele in Policy & Internet: “In recent years, decision makers have reported difficulties in the use of official statistics in public policy: excessively long publication delays, insufficient coverage of topics of interest, and the top-down process of data creation. The deluge of data available online represents a potential answer to these problems, with social media data in particular as a possible alternative to traditional data. In this article, we propose a definition of “Soft Data” to indicate data that are freely available on the Internet, and that are not controlled by a public administration but rather by public or private actors. The term Soft Data is not intended to replace those of “Big Data” and “Open Data,” but rather to highlight specific properties and research methods required to convert them into information of interest for decision makers. The analysis is based on a case study of Twitter data for urban policymaking carried out for a European research program aimed at enhancing the effectiveness of European cohesion policy. The article explores methodological issues and the possible impact of “Soft Data” on public policy, reporting on semistructured interviews carried out with nine European policymakers….(More)”

Two Laws On Expertise That Make Government Dumber


Beth Noveck in Forbes: “With the announcement of Microsoft’s acquisition of LinkedIn last week comes the prospect of new tech products that can help us visualize more than ever before about what we know and can do. But the buzz about what this might mean for our ability to find a job in the 21st century (and for privacy) obscures a tantalizing possibility for improving government.

Imagine if the Department of Health and Human Services needed to craft a new policy on hospitals. With better tools for automating the identification of expertise from our calendar, email, and document data (Microsoft), our education history and credentials (LinkedIn), and skills acquired from training (Lynda), it might become possible to match the demand for know-how about healthcare to the supply of those people who have worked in the sector, have degrees in public health, or have demonstrated passion and know-how through their volunteer experience.

The technological possibility of matching people to public opportunities to participate in the life of our democracy in ways that relate to our competencies and interests is impeded, however, by two decades-old statutes that prohibit the federal government from taking advantage of the possibilities of technology to tap into the expertise of the American people to solve our hardest problems.

The Federal Advisory Committee Act of 1972 (FACA) and the Paperwork Reduction Act of 1980 (PRA) entrench the committee and consultation practices of an era before the Internet. They make it illegal for wider networks of more diverse people with innovative ideas to convene to help solve public problems, and they need to be updated for the 21st century….(More)”

Is internet freedom a tool for democracy or authoritarianism?


 and  in the Conversation: “The irony of internet freedom was on full display shortly after midnight July 16 in Turkey when President Erdogan used FaceTime and independent TV news to call for public resistance against the military coup that aimed to depose him.

In response, thousands of citizens took to the streets and aided the government in beating back the coup. The military plotters had taken over state TV. In this digital age they apparently didn’t realize television was no longer sufficient to ensure control over the message.

This story may appear like a triumphant example of the internet promoting democracy over authoritarianism.

Not so fast…. This duality of the internet, as a tool to promote democracy or authoritarianism, or simultaneously both, is a complex puzzle.

The U.S. has made increasing internet access around the world a foreign policy priority. This policy was supported by both Secretaries of State John Kerry and Hillary Clinton.

The U.S. State Department has allocated tens of millions of dollars to promote internet freedom, primarily in the area of censorship circumvention. And just this month, the United Nations Human Rights Council passed a resolution declaring internet freedom a fundamental human right. The resolution condemns internet shutdowns by national governments, an act that has become increasingly common in a variety of countries across the globe, including Turkey, Brazil, India and Uganda.

On the surface, this policy makes sense. The internet is an intuitive boon for democracy. It provides citizens around the world with greater freedom of expression, opportunities for civil society, education and political participation. And previous research, including our own, has been optimistic about the internet’s democratic potential.

However, this optimism is based on the assumption that citizens who gain internet access use it to expose themselves to new information, engage in political discussions, join social media groups that advocate for worthy causes and read news stories that change their outlook on the world.

And some do.

But others watch Netflix. They use the internet to post selfies to an intimate group of friends. They gain access to an infinite stream of music, movies and television shows. They spend hours playing video games.

However, our recent research shows that tuning out from politics and immersing oneself in online spectacle has political consequences for the health of democracy….Political use of the internet ranks very low globally, compared to other uses. Research has found that just 9 percent of internet users posted links to political news and only 10 percent posted their own thoughts about political or social issues. In contrast, almost three-quarters (72 percent) say they post about movies and music, and over half (54 percent) also say they post about sports online.

This inspired our study, which sought to show how the internet does not necessarily serve as democracy’s magical solution. Instead, its democratic potential is highly dependent on how citizens choose to use it….

Ensuring citizens have access to the internet is not sufficient to ensure democracy and human rights. In fact, internet access may negatively impact democracy if exploited for authoritarian gain.

The U.S. government, NGOs and other democracy advocates have invested a great deal of time and resources toward promoting internet access, fighting overt online censorship and creating circumvention technologies. Yet their success, at best, has been limited.

The reason is twofold. First, authoritarian governments have adapted their own strategies in response. Second, the “if we build it, they will come” philosophy underlying a great deal of internet freedom promotion doesn’t take into account basic human psychology in which entertainment choices are preferred over news and attitudes toward the internet determine its use, not the technology itself.

Allies in the internet freedom fight should realize that the locus of the fight has shifted. Greater efforts must be put toward tearing down “psychological firewalls,” building demand for internet freedom and influencing citizens to employ the internet’s democratic potential.

Doing so ensures that the democratic online toolkit is a match for the authoritarian one….(More)”

Power to the people: how cities can use digital technology to engage and empower citizens


Tom Saunders at NESTA: “You’re sitting in city hall one day and you decide it would be a good idea to engage residents in whatever it is you’re working on – next year’s budget, for example, or the redevelopment of a run-down shopping mall. How do you go about it?

In the past, you might have held resident meetings and exhibitions where people could view proposed designs or talk to city government employees. You can still do that today, but now there’s digital: apps, websites and social media. So you decide on a digital engagement strategy: you build a website or you run a social media campaign inviting feedback on your proposals. What happens next?

Two scenarios: 1) You get 50 responses, mostly from campaign groups and local political activists; or 2) you receive such a huge number of responses that you don’t know what to do with them. Besides which, you don’t have the power or budget to implement 90 per cent of the suggestions and neither do you have the time to tell people why their proposals will be ignored. The main outcome of your citizen engagement exercise seems to be that you have annoyed the very people you were trying to get buy in from. What went wrong?

Four tips for digital engagement

With all the apps and platforms out there, it’s hard to make sense of what is going on in the world of digital tools for citizen engagement. It seems there are three distinct activities that digital tools enable: delivering council services online – say, applying for a parking permit; using citizen-generated data to optimise city government processes; and engaging citizens in democratic exercises. In Connected Councils, Nesta sets out what future models of online service delivery could look like. Here I want to focus on the ways that engaging citizens with digital technology can help city governments deliver services more efficiently and improve engagement in democratic processes.

  1. Resist the temptation to build an app…

  2. Think about what you want to engage citizens for…

Sometimes engagement is statutory: communities have to be shown new plans for their area. Beyond this, there are a number of activities that citizen engagement is useful for. When designing a citizen engagement exercise it may help to think which of the following you are trying to achieve (note: they aren’t mutually exclusive):

  • Better understanding of the facts

If you want to use digital technologies to collect more data about what is happening in your city, you can buy a large number of sensors and install them across the city, to track everything from the movement of people to how full bins are. A cheaper and possibly more efficient way for cities to do this might involve working with people to collect this data – making use of the smartphones that an increasing number of your residents carry around with them. Prominent examples of this include flood mapping in Jakarta using geolocated tweets and pothole mapping in Boston using a mobile app.
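
The Jakarta-style approach – turning a stream of geolocated, keyword-matching posts into a map of hotspots – can be sketched very simply. The snippet below is a minimal illustration, not any city's actual pipeline: the coordinates, keyword, and grid size are all assumptions chosen for the example.

```python
from collections import Counter

def bin_reports(tweets, keyword="flood", cell_deg=0.05):
    """Count keyword-matching geolocated tweets per grid cell.

    Each tweet is a (lat, lon, text) tuple; cell_deg is the grid
    cell size in degrees (0.05 is roughly 5.5 km at the equator).
    """
    counts = Counter()
    for lat, lon, text in tweets:
        if keyword in text.lower():
            # Snap the coordinates to a coarse grid cell.
            cell = (int(lat // cell_deg), int(lon // cell_deg))
            counts[cell] += 1
    return counts

# Hypothetical reports: two flood mentions near one Jakarta location,
# one unrelated tweet elsewhere.
reports = [
    (-6.21, 106.85, "Flood on our street again"),
    (-6.21, 106.85, "flood water rising fast"),
    (-6.90, 107.61, "lovely sunny day"),
]
hotspots = bin_reports(reports)
```

A real deployment would add deduplication, time windows, and verification against official gauges, but the core aggregation is this cheap.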

For developed world cities, the thought of outsourcing flood mapping to citizens might fill government employees with horror. But for cities in developing countries, these technologies present an opportunity, potentially, for them to leapfrog their peers – to reach a level of coverage now that would normally require decades of investment in infrastructure to achieve. This is currently a hypothetical situation: cities around the world are only just starting to pilot these ideas and technologies and it will take a number of years before we know how useful they are to city governments.

  • Generating better ideas and options

The examples above involve passive data collection. Moving beyond this to more active contributions, city governments can engage citizens to generate better ideas and options. There are numerous examples of this in urban planning – the use of Minecraft by the UN in Nairobi to collect and visualise ideas for the future development of the community, or the Carticipe platform in France, which residents can use to indicate changes they would like to see in their city on a map.

It’s all very well to create a digital suggestion box, but there is a lot of evidence that deliberation and debate lead to much better ideas. Platforms like Better Reykjavík include a debate function for any idea that is proposed. Based on feedback, the person who submitted the idea can then edit it, before putting it to a public vote – only then, if the proposal gets the required number of votes, is it sent to the city council for debate.

  • Better decision making

As well as enabling better decision making by giving city government employees better data and better ideas, digital technologies can give the power to make decisions directly to citizens. This is best encapsulated by participatory budgeting – which involves allowing citizens to decide how a percentage of the city budget is spent. Participatory budgeting emerged in Brazil in the 1980s, but digital technologies help city governments reach a much larger audience. ‘Madame Mayor, I have an idea’ is a participatory budgeting process that lets citizens propose and vote on ideas for projects in Paris. Over 20,000 people have registered on the platform and the pilot phase of the project received over 5,000 submissions.

  3. Remember that there’s a world beyond the internet…

  4. Pick the right question for the right crowd…

When we talk to city governments and local authorities, they express a number of fears about citizen engagement: fear of relying on the public for the delivery of critical services, fear of being drowned in feedback and fear of not being inclusive – only engaging with those who are online and motivated. Hopefully, thinking through the issues discussed above may help alleviate some of these fears and make city governments more enthusiastic about digital engagement….(More)

How Twitter gives scientists a window into human happiness and health


 at the Conversation: “Since its public launch 10 years ago, Twitter has been used as a social networking platform among friends, an instant messaging service for smartphone users and a promotional tool for corporations and politicians.

But it’s also been an invaluable source of data for researchers and scientists – like myself – who want to study how humans feel and function within complex social systems.

By analyzing tweets, we’ve been able to observe and collect data on the social interactions of millions of people “in the wild,” outside of controlled laboratory experiments.

It’s enabled us to develop tools for monitoring the collective emotions of large populations, find the happiest places in the United States and much more.

So how, exactly, did Twitter become such a unique resource for computational social scientists? And what has it allowed us to discover?

Twitter’s biggest gift to researchers

On July 15, 2006, Twittr (as it was then known) publicly launched as a “mobile service that helps groups of friends bounce random thoughts around with SMS.” The ability to send free 140-character group texts drove many early adopters (myself included) to use the platform.

With time, the number of users exploded: from 20 million in 2009 to 200 million in 2012 and 310 million today. Rather than communicating directly with friends, users would simply tell their followers how they felt, respond to news positively or negatively, or crack jokes.

For researchers, Twitter’s biggest gift has been the provision of large quantities of open data. Twitter was one of the first major social networks to provide data samples through something called Application Programming Interfaces (APIs), which enable researchers to query Twitter for specific types of tweets (e.g., tweets that contain certain words), as well as information on users.
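
The kind of keyword filtering such an API performs can be illustrated offline. The sketch below is not Twitter's actual API – endpoint names and authentication are omitted entirely – it simply shows the case-insensitive term matching a researcher would request when asking for "tweets that contain certain words", applied to hypothetical tweet records.

```python
def match_query(tweets, terms):
    """Return tweets whose text contains any query term,
    mimicking a keyword-filtered API request (case-insensitive)."""
    lowered = [t.lower() for t in terms]
    return [tw for tw in tweets
            if any(t in tw["text"].lower() for t in lowered)]

# Hypothetical sample of tweet records.
sample = [
    {"user": "a", "text": "Watching the election debate tonight"},
    {"user": "b", "text": "New pizza place downtown is great"},
    {"user": "c", "text": "Who else is voting tomorrow?"},
]
political = match_query(sample, ["election", "voting"])
```

In practice researchers stream matches continuously and also pull user metadata, but the conceptual operation is this filter applied at scale.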

This led to an explosion of research projects exploiting this data. Today, a Google Scholar search for “Twitter” produces six million hits, compared with five million for “Facebook.” The difference is especially striking given that Facebook has roughly five times as many users as Twitter (and is two years older).

Twitter’s generous data policy undoubtedly led to some excellent free publicity for the company, as interesting scientific studies got picked up by the mainstream media.

Studying happiness and health

With traditional census data slow and expensive to collect, open data feeds like Twitter have the potential to provide a real-time window to see changes in large populations.

The University of Vermont’s Computational Story Lab was founded in 2006 and studies problems across applied mathematics, sociology and physics. Since 2008, the Story Lab has collected billions of tweets through Twitter’s “Gardenhose” feed, an API that streams a random sample of 10 percent of all public tweets in real time.

I spent three years at the Computational Story Lab and was lucky to be a part of many interesting studies using this data. For example, we developed a hedonometer that measures the happiness of the Twittersphere in real time. By focusing on geolocated tweets sent from smartphones, we were able to map the happiest places in the United States. Perhaps unsurprisingly, we found Hawaii to be the happiest state and wine-growing Napa the happiest city for 2013.
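
The hedonometer's core computation is a word-level average. The real instrument uses the labMT lexicon of roughly 10,000 words, each rated for happiness on a 1–9 scale by crowdsourced judgments; the tiny lexicon below is purely illustrative, with made-up but plausible values.

```python
# Toy happiness lexicon (illustrative values only; the published
# hedonometer uses the ~10,000-word labMT list, rated 1-9).
HAPPINESS = {
    "love": 8.4, "beach": 7.9, "happy": 8.3,
    "rain": 4.8, "traffic": 3.6, "hate": 2.3,
}

def hedonometer(texts):
    """Average happiness over all scored words in a batch of texts;
    words absent from the lexicon are ignored, as in the original."""
    scores = [HAPPINESS[w]
              for text in texts
              for w in text.lower().split()
              if w in HAPPINESS]
    return sum(scores) / len(scores) if scores else None

avg = hedonometer(["love the beach", "stuck in traffic"])
```

Run over millions of geolocated tweets per day and grouped by state or city, this single number is what produced the Hawaii and Napa rankings.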

A map of 13 million geolocated U.S. tweets from 2013, colored by happiness, with red indicating happiness and blue indicating sadness. PLOS ONE, Author provided

These studies had deeper applications: Correlating Twitter word usage with demographics helped us understand underlying socioeconomic patterns in cities. For example, we could link word usage with health factors like obesity, so we built a lexicocalorimeter to measure the “caloric content” of social media posts. Tweets from a particular region that mentioned high-calorie foods increased the “caloric content” of that region, while tweets that mentioned exercise activities decreased our metric. We found that this simple measure correlates with other health and well-being metrics. In other words, tweets were able to give us a snapshot, at a specific moment in time, of the overall health of a city or region.
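
The lexicocalorimeter logic described above – food mentions push the metric up, exercise mentions pull it down – can be sketched as follows. This is a simplified toy: the calorie values are invented for illustration, and the published instrument uses phrase lexicons and a ratio of intake to expenditure rather than a plain difference.

```python
# Invented illustrative values (per serving / per hour).
CALORIES_IN = {"pizza": 285, "donut": 240, "soda": 150}
CALORIES_OUT = {"running": 600, "yoga": 180, "cycling": 500}

def caloric_content(texts):
    """Net 'caloric content' of a batch of posts: calories from
    food mentions minus calories from activity mentions."""
    net = 0
    for text in texts:
        for w in text.lower().split():
            net += CALORIES_IN.get(w, 0)   # food words add calories
            net -= CALORIES_OUT.get(w, 0)  # exercise words subtract
    return net

net = caloric_content(["pizza and soda tonight",
                       "went running then yoga"])
```

Aggregated by region, a score like this is what the Story Lab correlated with obesity and other well-being metrics.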

Using the richness of Twitter data, we’ve also been able to see people’s daily movement patterns in unprecedented detail. Understanding human mobility patterns, in turn, has the capacity to transform disease modeling, opening up the new field of digital epidemiology….(More)”

How technology disrupted the truth


 in The Guardian: “Social media has swallowed the news – threatening the funding of public-interest reporting and ushering in an era when everyone has their own facts. But the consequences go far beyond journalism…

When a fact begins to resemble whatever you feel is true, it becomes very difficult for anyone to tell the difference between facts that are true and “facts” that are not. The leave campaign was well aware of this – and took full advantage, safe in the knowledge that the Advertising Standards Authority has no power to police political claims. A few days after the vote, Arron Banks, Ukip’s largest donor and the main funder of the Leave.EU campaign, told the Guardian that his side knew all along that facts would not win the day. “It was taking an American-style media approach,” said Banks. “What they said early on was ‘Facts don’t work’, and that’s it. The remain campaign featured fact, fact, fact, fact, fact. It just doesn’t work. You have got to connect with people emotionally. It’s the Trump success.”

It was little surprise that some people were shocked after the result to discover that Brexit might have serious consequences and few of the promised benefits. When “facts don’t work” and voters don’t trust the media, everyone believes in their own “truth” – and the results, as we have just seen, can be devastating.

How did we end up here? And how do we fix it?

Twenty-five years after the first website went online, it is clear that we are living through a period of dizzying transition. For 500 years after Gutenberg, the dominant form of information was the printed page: knowledge was primarily delivered in a fixed format, one that encouraged readers to believe in stable and settled truths.

Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumour, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.

What is common to these struggles – and what makes their resolution an urgent matter – is that they all involve the diminishing status of truth. This does not mean that there are no truths. It simply means, as this year has made very clear, that we cannot agree on what those truths are, and when there is no consensus about the truth and no way to achieve it, chaos soon follows.

Increasingly, what counts as a fact is merely a view that someone feels to be true – and technology has made it very easy for these “facts” to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago). A dubious story about Cameron and a pig appears in a tabloid one morning, and by noon, it has flown around the world on social media and turned up in trusted news sources everywhere. This may seem like a small matter, but its consequences are enormous.

In the digital age, it is easier than ever to publish false information, which is quickly shared and taken to be true. “The Truth”, as Peter Chippindale and Chris Horrie wrote in Stick It Up Your Punter!, their history of the Sun newspaper, is a “bald statement which every newspaper prints at its peril”. There are usually several conflicting truths on any given subject, but in the era of the printing press, words on a page nailed things down, whether they turned out to be true or not. The information felt like the truth, at least until the next day brought another update or a correction, and we all shared a common set of facts.

This settled “truth” was usually handed down from above: an established truth, often fixed in place by an establishment. This arrangement was not without flaws: too much of the press often exhibited a bias towards the status quo and a deference to authority, and it was prohibitively difficult for ordinary people to challenge the power of the press. Now, people distrust much of what is presented as fact – particularly if the facts in question are uncomfortable, or out of sync with their own views – and while some of that distrust is misplaced, some of it is not.

In the digital age, it is easier than ever to publish false information, which is quickly shared and taken to be true – as we often see in emergency situations, when news is breaking in real time. To pick one example among many, during the November 2015 Paris terror attacks, rumours quickly spread on social media that the Louvre and Pompidou Centre had been hit, and that François Hollande had suffered a stroke. Trusted news organisations are needed to debunk such tall tales.

Sometimes rumours like these spread out of panic, sometimes out of malice, and sometimes out of deliberate manipulation, in which a corporation or regime pays people to convey its message. Whatever the motive, falsehoods and facts now spread the same way, through what academics call an “information cascade”. As the legal scholar and online-harassment expert Danielle Citron describes it, “people forward on what others think, even if the information is false, misleading or incomplete, because they think they have learned something valuable.” This cycle repeats itself, and before you know it, the cascade has unstoppable momentum. You share a friend’s post on Facebook, perhaps to show kinship or agreement or that you’re “in the know”, and thus you increase the visibility of their post to others.

Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want – which means that the version of the world we encounter every day in our own personal stream has been invisibly curated to reinforce our pre-existing beliefs. When Eli Pariser, the co-founder of Upworthy, coined the term “filter bubble” in 2011, he was talking about how the personalised web – and in particular Google’s personalised search function, which means that no two people’s Google searches are the same – means that we are less likely to be exposed to information that challenges us or broadens our worldview, and less likely to encounter facts that disprove false information that others have shared.

Pariser’s plea, at the time, was that those running social media platforms should ensure that “their algorithms prioritise countervailing views and news that’s important, not just the stuff that’s most popular or most self-validating”. But in less than five years, thanks to the incredible power of a few social platforms, the filter bubble that Pariser described has become much more extreme.

On the day after the EU referendum, in a Facebook post, the British internet activist and mySociety founder, Tom Steinberg, provided a vivid illustration of the power of the filter bubble – and the serious civic consequences for a world where information flows largely through social networks:

I am actively searching through Facebook for people celebrating the Brexit leave victory, but the filter bubble is SO strong, and extends SO far into things like Facebook’s custom search that I can’t find anyone who is happy *despite the fact that over half the country is clearly jubilant today* and despite the fact that I’m *actively* looking to hear what they are saying.

This echo-chamber problem is now SO severe and SO chronic that I can only beg any friends I have who actually work for Facebook and other major social media and technology to urgently tell their leaders that to not act on this problem now is tantamount to actively supporting and funding the tearing apart of the fabric of our societies … We’re getting countries where one half just doesn’t know anything at all about the other.

But asking technology companies to “do something” about the filter bubble presumes that this is a problem that can be easily fixed – rather than one baked into the very idea of social networks that are designed to give you what you and your friends want to see….(More)”

There aren’t any rules on how social scientists use private data. Here’s why we need them.


 at SSRC: “The politics of social science access to data are shifting rapidly in the United States as in other developed countries. It used to be that states were the most important source of data on their citizens, economy, and society. States needed to collect and aggregate large amounts of information for their own purposes. They gathered this directly—e.g., through censuses of individuals and firms—and also constructed relevant indicators. Sometimes state agencies helped to fund social science projects in data gathering, such as the National Science Foundation’s funding of the American National Election Survey over decades. While scholars such as James Scott and John Brewer disagreed about the benefits of state data gathering, they recognized the state’s primary role.

In this world, the politics of access to data were often the politics of engaging with the state. Sometimes the state was reluctant to provide information, either for ethical reasons (e.g. the privacy of its citizens) or self-interest. However, democratic states did typically provide access to standard statistical series and the like, and where they did not, scholars could bring pressure to bear on them. This led to well-understood rules about the common availability of standard data for many research questions and built the foundations for standard academic practices. It was relatively easy for scholars to criticize each other’s work when they were drawing on common sources. This had costs—scholars tended to ask the kinds of questions that readily available data allowed them to ask—but also significant benefits. In particular, it made research more easily reproducible.

We are now moving to a very different world. On the one hand, open data initiatives in government are making more data available than in the past (albeit often without much in the way of background resources or documentation). On the other, for many research purposes, large firms such as Google or Facebook (or even Apple) have much better data than the government. The new universe of private data is reshaping social science research in some ways that are still poorly understood. Here are some of the issues that we need to think about:…(More)”

What is Artificial Intelligence?


Report by Mike Loukides and Ben Lorica: “Defining artificial intelligence isn’t just difficult; it’s impossible, not the least because we don’t really understand human intelligence. Paradoxically, advances in AI will help more to define what human intelligence isn’t than what artificial intelligence is.

But whatever AI is, we’ve clearly made a lot of progress in the past few years, in areas ranging from computer vision to game playing. AI is making the transition from a research topic to the early stages of enterprise adoption. Companies such as Google and Facebook have placed huge bets on AI and are already using it in their products. But Google and Facebook are only the beginning: over the next decade, we’ll see AI steadily creep into one product after another. We’ll be communicating with bots, rather than scripted robo-dialers, and not realizing that they aren’t human. We’ll be relying on cars to plan routes and respond to road hazards. It’s a good bet that in the next decades, some features of AI will be incorporated into every application that we touch and that we won’t be able to do anything without touching an application.

Given that our future will inevitably be tied up with AI, it’s imperative that we ask: Where are we now? What is the state of AI? And where are we heading?

Capabilities and Limitations Today

Descriptions of AI span several axes: strength (how intelligent is it?), breadth (does it solve a narrowly defined problem, or is it general?), training (how does it learn?), capabilities (what kinds of problems are we asking it to solve?), and autonomy (are AIs assistive technologies, or do they act on their own?). Each of these axes is a spectrum, and each point in this many-dimensional space represents a different way of understanding the goals and capabilities of an AI system.

On the strength axis, it’s very easy to look at the results of the last 20 years and realize that we’ve made some extremely powerful programs. Deep Blue beat Garry Kasparov in chess; Watson beat the best Jeopardy champions of all time; AlphaGo beat Lee Sedol, arguably the world’s best Go player. But all of these successes are limited. Deep Blue, Watson, and AlphaGo were all highly specialized, single-purpose machines that did one thing extremely well. Deep Blue and Watson can’t play Go, and AlphaGo can’t play chess or Jeopardy, even on a basic level. Their intelligence is very narrow, and can’t be generalized. A lot of work has gone into using Watson for applications such as medical diagnosis, but it’s still fundamentally a question-and-answer machine that must be tuned for a specific domain. Deep Blue has a lot of specialized knowledge about chess strategy and an encyclopedic knowledge of openings. AlphaGo was built with a more general architecture, but a lot of hand-crafted knowledge still made its way into the code. I don’t mean to trivialize or undervalue their accomplishments, but it’s important to realize what they haven’t done.

We haven’t yet created an artificial general intelligence that can solve a multiplicity of different kinds of problems. We still don’t have a machine that can listen to recordings of humans for a year or two, and start speaking. While AlphaGo “learned” to play Go by analyzing thousands of games, and then playing thousands more against itself, the same software couldn’t be used to master chess. The same general approach? Probably. But our best current efforts are far from a general intelligence that is flexible enough to learn without supervision, or flexible enough to choose what it wants to learn, whether that’s playing board games or designing PC boards.

Toward General Intelligence

How do we get from narrow, domain-specific intelligence to more general intelligence? By “general intelligence,” we don’t necessarily mean human intelligence; but we do want machines that can solve different kinds of problems without being programmed with domain-specific knowledge. We want machines that can make human judgments and decisions. That doesn’t necessarily mean that AI systems will implement concepts like creativity, intuition, or instinct, which may have no digital analogs. A general intelligence would have the ability to follow multiple pursuits and to adapt to unexpected situations. And a general AI would undoubtedly implement concepts like “justice” and “fairness”: we’re already talking about the impact of AI on the legal system….

It’s easier to think of super-intelligence as a matter of scale. If we can create “general intelligence,” it’s easy to assume that it could quickly become thousands of times more powerful than human intelligence. Or, more precisely: either general intelligence will be significantly slower than human thought, and it will be difficult to speed it up either through hardware or software; or it will speed up quickly, through massive parallelism and hardware improvements. We’ll go from thousand-core GPUs to trillions of cores on thousands of chips, with data streaming in from billions of sensors. In the first case, when speedups are slow, general intelligence might not be all that interesting (though it will have been a great ride for the researchers). In the second case, the ramp-up will be very steep and very fast….(More) (Full Report)”

Solving All the Wrong Problems


Allison Arieff in the New York Times: “Every day, innovative companies promise to make the world a better place. Are they succeeding? Here is just a sampling of the products, apps and services that have come across my radar in the last few weeks:

A service that sends someone to fill your car with gas.

A service that sends a valet on a scooter to you, wherever you are, to park your car.

A service that will film anything you desire with a drone….

We are overloaded daily with new discoveries, patents and inventions all promising a better life, but that better life has not been forthcoming for most. In fact, the bulk of the above list targets a very specific (and tiny!) slice of the population. As one colleague in tech explained it to me recently, for most people working on such projects, the goal is basically to provide for themselves everything that their mothers no longer do….When everything is characterized as “world-changing,” is anything?

Clay Tarver, a writer and producer for the painfully on-point HBO comedy “Silicon Valley,” said in a recent New Yorker article: “I’ve been told that, at some of the big companies, the P.R. departments have ordered their employees to stop saying ‘We’re making the world a better place,’ specifically because we have made fun of that phrase so mercilessly. So I guess, at the very least, we’re making the world a better place by making these people stop saying they’re making the world a better place.”

O.K., that’s a start. But the impulse to conflate toothbrush delivery with Nobel Prize-worthy good works is not just a bit cultish, it’s currently a wildfire burning through the so-called innovation sector. Products and services are designed to “disrupt” market sectors (a.k.a. bringing to market things no one really needs) more than to solve actual problems, especially those problems experienced by what the writer C. Z. Nnaemeka has described as “the unexotic underclass” — single mothers, the white rural poor, veterans, out-of-work Americans over 50 — who, she explains, have the “misfortune of being insufficiently interesting.”

If the most fundamental definition of design is to solve problems, why are so many people devoting so much energy to solving problems that don’t really exist? How can we get more people to look beyond their own lived experience?

In “Design: The Invention of Desire,” a thoughtful and necessary new book by the designer and theorist Jessica Helfand, the author brings to light an amazing kernel: “hack,” a term so beloved in Silicon Valley that it’s painted on the courtyard of the Facebook campus and is visible from planes flying overhead, is also prison slang for “horse’s ass carrying keys.”

To “hack” is to cut, to gash, to break. It proceeds from the belief that nothing is worth saving, that everything needs fixing. But is that really the case? Are we fixing the right things? Are we breaking the wrong ones? Is it necessary to start from scratch every time?…

Ms. Helfand calls for a deeper embrace of personal vigilance: “Design may provide the map,” she writes, “but the moral compass that guides our personal choices resides permanently within us all.”

Can we reset that moral compass? Maybe we can start by not being a bunch of hacks….(More)”

Bridging data gaps for policymaking: crowdsourcing and big data for development


From the DevPolicyBlog: “…By far the biggest innovation in data collection is the ability to access and analyse (in a meaningful way) user-generated data. This is data that is generated from forums, blogs, and social networking sites, where users purposefully contribute information and content in a public way, but also from everyday activities that inadvertently or passively provide data to those that are able to collect it.

User-generated data can help identify user views and behaviour to inform policy in a timely way rather than just relying on traditional data collection techniques (census, household surveys, stakeholder forums, focus groups, etc.), which are often cumbersome, very costly, untimely, and in many cases require some form of approval or support by government.

It might seem at first that user-generated data has limited usefulness in a development context due to the importance of the internet in generating this data combined with limited internet availability in many places. However, U-Report is one example of being able to access user-generated data independent of the internet.

U-Report was initiated by UNICEF Uganda in 2011 and is a free SMS-based platform where Ugandans are able to register as “U-Reporters” and on a weekly basis give their views on topical issues (mostly related to health, education, and access to social services) or participate in opinion polls. As an example, Figure 1 shows the result from a U-Report poll on whether polio vaccinators came to U-Reporter houses to immunise all children under 5 in Uganda, broken down by district. Presently, there are more than 300,000 U-Reporters in Uganda and more than one million U-Reporters across the 24 countries that now have U-Report. As an indication of its potential impact on policymaking, UNICEF claims that every Member of Parliament in Uganda is signed up to receive U-Report statistics.

Figure 1: U-Report Uganda poll results
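The district-level breakdown behind a poll like Figure 1 amounts to a simple aggregation of response records. The sketch below illustrates the idea; the field names and records are invented for illustration and are not U-Report’s actual data model.

```python
# Hypothetical sketch: tally SMS poll answers by district, as in the
# Figure 1 breakdown. Districts and answers here are made-up examples.
from collections import defaultdict

responses = [
    {"district": "Gulu", "answer": "yes"},
    {"district": "Gulu", "answer": "no"},
    {"district": "Kampala", "answer": "yes"},
    {"district": "Kampala", "answer": "yes"},
]

def poll_by_district(records):
    """Return {district: {answer: count}} for a single poll question."""
    tally = defaultdict(lambda: defaultdict(int))
    for r in records:
        tally[r["district"]][r["answer"]] += 1
    return {d: dict(counts) for d, counts in tally.items()}

print(poll_by_district(responses))
```

Summaries like this can then be pushed back to subscribers (or MPs) over the same SMS channel, which is what makes the platform usable without internet access.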

U-Report and other platforms such as Ushahidi (which supports, for example, I PAID A BRIBE, Watertracker, election monitoring, and crowdmapping) facilitate crowdsourcing of data where users contribute data for a specific purpose. In contrast, “big data” is a broader concept because the purpose of using the data is generally independent of the reasons why the data was generated in the first place.

Big data for development is a new phrase that we will probably hear a lot more (see here [pdf] and here). The United Nations Global Pulse, for example, supports a number of innovation labs which work on projects that aim to discover new ways in which data can help better decision-making. Many forms of “big data” are unstructured (free-form and text-based rather than table- or spreadsheet-based) and so a number of analytical techniques are required to make sense of the data before it can be used.

Measures of Twitter activity, for example, can be a real-time indicator of food price crises in Indonesia [pdf] (see Figure 2 below, which shows the relationship between food-related tweet volume and food inflation: note that the large volume of tweets in the grey highlighted area is associated with policy debate on cutting the fuel subsidy rate) or provide a better understanding of the drivers of immunisation awareness. In these examples, researchers “text-mine” Twitter feeds by extracting tweets related to topics of interest and categorising text based on measures of sentiment (positive, negative, anger, joy, confusion, etc.) to better understand opinions and how they relate to the topic of interest. For example, Figure 3 shows the sentiment of tweets related to vaccination in Kenya over time, alongside the dates of important vaccination-related events.

Figure 2: Plot of monthly food-related tweet volume and official food price statistics

Figure 3: Sentiment of vaccine-related tweets in Kenya
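The sentiment categorisation described above can be sketched with a minimal lexicon-based classifier. This is only an illustration of the general technique: the keyword lists and example tweets below are invented, and real studies use large curated lexicons or trained models rather than a handful of words.

```python
# Minimal sketch of lexicon-based sentiment categorisation of tweets.
# The lexicons and tweets are illustrative assumptions, not real data.
from collections import Counter

LEXICON = {
    "positive": {"good", "great", "safe", "effective", "happy"},
    "negative": {"bad", "unsafe", "dangerous", "sad", "expensive"},
    "confusion": {"why", "unsure", "confused", "how"},
}

def classify(tweet: str) -> str:
    """Assign the sentiment category with the most keyword hits."""
    words = set(tweet.lower().split())
    scores = {cat: len(words & keywords) for cat, keywords in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def sentiment_counts(tweets):
    """Aggregate per-category counts, e.g. for a time-series plot."""
    return Counter(classify(t) for t in tweets)

tweets = [
    "the new vaccine is safe and effective",
    "prices are bad and rice is expensive again",
    "why is the clinic closed I am confused",
]
print(sentiment_counts(tweets))
```

Binning such counts by week produces exactly the kind of sentiment time series plotted in Figure 3.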

Another big data example is the use of mobile phone usage to monitor the movement of populations in Senegal in 2013. The data can help to identify changes in the mobility patterns of vulnerable population groups and thereby provide an early warning system to inform humanitarian response efforts.

The development of mobile banking too offers the potential for the generation of a staggering amount of data relevant for development research and informing policy decisions. However, it also highlights the public good nature of data collected by public and private sector institutions and the reliance that researchers have on them to access the data. Building trust and a reputation for being able to manage privacy and commercial issues will be a major challenge for researchers in this regard….(More)”