The big health data sale


Philip Hunter at the EMBO Journal: “Personal health and medical data are a valuable commodity for a number of sectors, from public health agencies to academic researchers to pharmaceutical companies. Moreover, “big data” companies are increasingly interested in tapping into this resource. One such firm is Google, whose subsidiary DeepMind was granted access to medical records on 1.6 million patients who had been treated at some time by three major hospitals in London, UK, in order to develop a diagnostic app. The public discussion this raised was just another sign of the long-running tensions between drug companies, privacy advocates, regulators, legislators, insurers and patients over privacy, consent, rights of access and ownership of the medical data generated in pharmacies, hospitals and doctors’ surgeries. In addition, the rapid growth of eHealth will generate even more health data from mobile phones, portable diagnostic devices and other sources.

These developments are driving efforts to create a legal framework for protecting confidentiality, controlling communication and governing access rights to data. Existing data protection and human rights laws are being modified to account for personal medical and health data in parallel to the campaign for greater transparency and access to clinical trial data. Healthcare agencies in particular will have to revise their procedures for handling medical or research data that is associated with patients.

Google’s foray into medical data demonstrates the key role of health agencies, in this case the Royal Free NHS Trust, which operates the three London hospitals that granted DeepMind access to patient data. Royal Free approached DeepMind with a request to develop an app for detecting acute kidney injury, which, according to the Trust, affects more than one in six inpatients….(More)”

Participatory Budgeting — Not A One-Size-Fits-All Approach


Alexandra Flynn at Osgoode Digital Commons: “Municipal staff and politicians are moving aside to let someone else make budget decisions – community residents. This practice, known as participatory budgeting or PB, is a completely different way of managing public money. It allows the public both to identify the projects and programs they want to see in their neighbourhoods and to vote on which ones to fund. The process was developed twenty-five years ago and there are now over 1,500 participatory budgets around the world …

There is no one-size-fits-all model for participatory budgeting. UN-Habitat suggests that the following are essential pieces for the introduction of a participatory budgeting process: the will of the mayor, public interest, clarity on administration and the decision-making process, education tools on the budgeting process, widely distributed information on the participatory budgeting process through all possible means, and information on infrastructure and public service shortfalls. UN-Habitat recommends that participatory budgeting should not be used if honesty and transparency are lacking in local administration. Municipal governments should be clear that the final decision rests with the elected representatives of the local authority and that the process does not replace representative democracy with direct referendums.

Municipalities may want to consider the following issues when implementing participatory budgeting in their communities….(More)”

Understanding Institutions: The Science and Philosophy of Living Together


New book by Francesco Guala: “Understanding Institutions proposes a new unified theory of social institutions that combines the best insights of philosophers and social scientists who have written on this topic. Francesco Guala presents a theory that combines the features of three influential views of institutions: as equilibria of strategic games, as regulative rules, and as constitutive rules.

Guala explains key institutions like money, private property, and marriage, and develops a much-needed unification of equilibrium- and rules-based approaches. Although he uses game theory concepts, the theory is presented in a simple, clear style that is accessible to a wide audience of scholars working in different fields. Outlining and discussing various implications of the unified theory, Guala addresses venerable issues such as reflexivity, realism, Verstehen, and fallibilism in the social sciences. He also critically analyses the theory of “looping effects” and “interactive kinds” defended by Ian Hacking, and asks whether it is possible to draw a demarcation between social and natural science using the criteria of causal and ontological dependence. Focusing on current debates about the definition of marriage, Guala shows how these abstract philosophical issues have important practical and political consequences.

Moving beyond specific cases to general models and principles, Understanding Institutions offers new perspectives on what institutions are, how they work, and what they can do for us….(More)”

Building a Civic Tech Sector to Last: Design Principles to Generate a Civic Tech Movement


Stefaan G. Verhulst at Positive Returns (Medium): “Over the last few years we have seen growing recognition of the potential of “civic tech,” or the use of technology that “empowers citizens to make government more accessible, efficient and effective” (the definition provided in “Engines of Change”). One commentator recently described “civic tech as the next big thing.” At the same time, we have yet to witness a true tech-enabled transformation of how government works and how citizens engage with institutions and with each other to solve societal problems. In many ways, civic tech still operates under the radar and often lacks broad acceptance. So how do we accelerate and expand the civic tech sector? How can we build a civic tech field that can last and stand the test of time?

The “Engines of Change” report written for Omidyar Network by Purpose seeks to provide an answer to these questions in the context of the United States….

Given the new insights gained from the report, how to move forward? How to translate its findings into a strategy that seeks to improve people’s lives and addresses societal problems by leveraging technology? What emerges from reading the report, and reflecting on how fields and movements have been built in other areas (e.g., the digital learning movement by the MacArthur Foundation or the Hewlett Foundation’s efforts to build a conflict resolution field), is a set of design principles that, when applied consistently, may generate a true lasting civic tech movement. These principles include:

  • Define a common problem that matters enough to work on collectively and identify a unique opportunity to solve it. Most successful movements seek to solve hard problems. So what is the problem that civic tech seeks to address? …
  • Encourage experimentation. As it stands, there is no shortage of experimentation with new platforms and tools in the civic tech space. What is missing, however, is the type of assessment that uncovers whether or not such efforts are actually working, and why or why not. Rather than viewing experimentation as simply “trying new things,” the field could embrace “fast-cycle action research” to understand both more quickly, and more precisely, when an innovation works, for whom, and under what conditions.
  • Establish an evidence base and a common set of metrics. While there is good reason to believe that breakthrough solutions may come from using technology, there are still too few studies measuring exactly how impactful civic tech is. Without a deeper understanding of whether, when, why and to what extent an intervention has made an impact, the civic tech movement will lack credibility. To accelerate the rate of experimentation and create more agile institutions capable of piloting civic tech solutions, we need research that will enable the sector to move away from “faith-based” initiatives toward “evidence-based” ones. The TICTeC conference, the Opening Governance Research Network and the recently launched Open Governance Research Exchange are some initiatives that seek to address this shortcoming. Yet more analysis and translation of current findings into clear baselines of impact against common metrics is needed to make the sector more reliable.
  • Develop a Network Infrastructure…
  • Identify the signal…

As every engineer knows, building engines requires a set of basic design principles. Similarly, transforming the civic tech sector into a sustainable engine of change may require the implementation of the principles outlined above. Let’s build a civic tech sector to last….(More)”

What Governments Can Learn From Airbnb And the Sharing Economy


 in Fortune: “….Despite some regulators’ fears, the sharing economy may not result in the decline of regulation but rather in its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforce the regulation, is also bound to change.

There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, of delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to cities and state regulators.

Consider a different problem — of collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.

There is yet another advantage, and the one I believe will be the most significant in the long run. It asks a platform to leverage its data to ensure compliance with a set of laws in a manner geared towards delegating responsibility to the platform. You might say that the task in question here — computing tax owed, collecting it, and remitting it — is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.

For a couple of decades now, companies of different kinds have been mining the large sets of “data trails” customers provide through their digital interactions. This generates insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank’s security team. Sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. It saves billions of dollars in taxpayer and corporate funds by detecting and blocking fraudulent activity swiftly.
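To make the pattern-mining idea concrete, here is a minimal, hypothetical sketch of how unusual card transactions might be flagged with an off-the-shelf anomaly detector; the features, figures and threshold are invented for illustration and bear no relation to any real bank’s system.

```python
# Hypothetical sketch: flagging unusual card transactions with an
# off-the-shelf anomaly detector. All features and figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated purchase history: [amount_usd, hour_of_day, distance_from_home_km]
history = np.column_stack([
    rng.gamma(shape=2.0, scale=30.0, size=5000),  # typical amounts
    rng.integers(7, 23, size=5000),               # daytime purchases
    rng.exponential(scale=10.0, size=5000),       # mostly close to home
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(history)

# Score two new transactions: one ordinary, one suspicious
new_transactions = np.array([
    [45.0, 13, 3.0],       # lunch near the office
    [2500.0, 4, 8200.0],   # large 4am purchase far from home
])
flags = detector.predict(new_transactions)  # -1 = anomaly, 1 = looks normal
for tx, flag in zip(new_transactions, flags):
    print(tx, "FLAGGED" if flag == -1 else "ok")
```

Real systems are of course far more elaborate, combining many behavioural features with feedback from confirmed fraud cases, but the basic move is the same: learn what normal activity looks like and flag departures from it.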

A more recent visible example of the power of mining large data sets of customer interaction came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches, and track the spread of flu outbreaks in real time, providing information that was well ahead of what was available from the Centers for Disease Control and Prevention’s (CDC) own tracking systems. The Google system’s performance deteriorated after a couple of years, but its impact on public perception of what might be possible using “big data” was immense.

It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would have probably been widespread public backlash to this on privacy grounds. Besides, the reason why this capability emerged organically from within Google is partly as a consequence of Google having one of the highest concentrations of computer science and machine learning talent in the world.

Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.

For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.

One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems….(More)”
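As a sketch of what such pattern analysis could look like in practice, the snippet below fits a simple logistic regression to entirely simulated booking-request data to test whether acceptance rates differ across guest-name groups once listing characteristics are controlled for. Every column name and coefficient here is invented for illustration; a real audit would be far more careful about confounders and measurement.

```python
# Hypothetical sketch: auditing simulated booking-request data for a
# group-level disparity in acceptance rates. Everything here is invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

guest_group = rng.choice(["a", "b"], size=n)        # e.g. inferred name group
listing_price = rng.uniform(50, 250, size=n)
host_is_superhost = rng.integers(0, 2, size=n)

# Simulate acceptance with a built-in gap for group "b" (illustration only)
logit_p = (1.0 - 0.005 * listing_price + 0.5 * host_is_superhost
           - 0.8 * (guest_group == "b"))
accepted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

df = pd.DataFrame({
    "accepted": accepted,
    "guest_group": guest_group,
    "listing_price": listing_price,
    "host_is_superhost": host_is_superhost,
})

# Does acceptance differ by group after controlling for listing traits?
model = smf.logit(
    "accepted ~ C(guest_group) + listing_price + host_is_superhost", data=df
).fit(disp=False)
print(model.summary())
```

The interesting question for a platform is not the regression itself but what it does with the result, such as triggering closer review of hosts or markets where a flagged pattern persists.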

Power to the people: how cities can use digital technology to engage and empower citizens


Tom Saunders at NESTA: “You’re sat in city hall one day and you decide it would be a good idea to engage residents in whatever it is you’re working on – next year’s budget, for example, or the redevelopment of a run-down shopping mall. How do you go about it?

In the past, you might have held resident meetings and exhibitions where people could view proposed designs or talk to city government employees. You can still do that today, but now there’s digital: apps, websites and social media. So you decide on a digital engagement strategy: you build a website or you run a social media campaign inviting feedback on your proposals. What happens next?

Two scenarios: 1) You get 50 responses, mostly from campaign groups and local political activists; or 2) you receive such a huge number of responses that you don’t know what to do with them. Besides which, you don’t have the power or budget to implement 90 per cent of the suggestions and neither do you have the time to tell people why their proposals will be ignored. The main outcome of your citizen engagement exercise seems to be that you have annoyed the very people you were trying to get buy-in from. What went wrong?

Four tips for digital engagement

With all the apps and platforms out there, it’s hard to make sense of what is going on in the world of digital tools for citizen engagement. It seems there are three distinct activities that digital tools enable: delivering council services online – say, applying for a parking permit; using citizen-generated data to optimise city government processes; and engaging citizens in democratic exercises. In Connected Councils, Nesta sets out what future models of online service delivery could look like. Here I want to focus on the ways that engaging citizens with digital technology can help city governments deliver services more efficiently and improve engagement in democratic processes.

  1. Resist the temptation to build an app…

  2. Think about what you want to engage citizens for…

Sometimes engagement is statutory: communities have to be shown new plans for their area. Beyond this, there are a number of activities that citizen engagement is useful for. When designing a citizen engagement exercise it may help to think which of the following you are trying to achieve (note: they aren’t mutually exclusive):

  • Better understanding of the facts

If you want to use digital technologies to collect more data about what is happening in your city, you can buy a large number of sensors and install them across the city, to track everything from the movement of people to how full the bins are. A cheaper and possibly more efficient way for cities to do this might involve working with people to collect this data – making use of the smartphones that an increasing number of your residents carry around with them. Prominent examples of this include flood mapping in Jakarta using geolocated tweets and pothole mapping in Boston using a mobile app.
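As a rough illustration of the citizen-sensing idea, here is a small hypothetical sketch that filters keyword-matched, geolocated tweets and counts reports per grid cell of a city; the tweets, keywords and coordinates are invented stand-ins for what a streaming feed would actually supply.

```python
# Hypothetical sketch: aggregating geolocated, flood-related tweets onto a
# coarse city grid. The tweets and keywords below are invented stand-ins.
from collections import Counter

FLOOD_KEYWORDS = {"banjir", "flood", "flooded", "flooding"}  # "banjir": Indonesian for flood

tweets = [
    {"text": "Banjir lagi di Kemang, hati-hati!", "lat": -6.260, "lon": 106.815},
    {"text": "Traffic is terrible this morning", "lat": -6.200, "lon": 106.845},
    {"text": "Street completely flooded near the station", "lat": -6.175, "lon": 106.865},
]

def is_flood_report(text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLOOD_KEYWORDS)

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    # Snap coordinates to a roughly 1 km grid cell
    return (round(lat / cell_deg), round(lon / cell_deg))

reports = Counter(
    grid_cell(t["lat"], t["lon"]) for t in tweets if is_flood_report(t["text"])
)
for cell, count in reports.most_common():
    print(f"cell {cell}: {count} flood report(s)")
```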

For developed world cities, the thought of outsourcing flood mapping to citizens might fill government employees with horror. But for cities in developing countries, these technologies present an opportunity, potentially, for them to leapfrog their peers – to reach a level of coverage now that would normally require decades of investment in infrastructure to achieve. This is currently a hypothetical situation: cities around the world are only just starting to pilot these ideas and technologies and it will take a number of years before we know how useful they are to city governments.

  • Generating better ideas and options

The examples above involve passive data collection. Moving beyond this to more active contributions, city governments can engage citizens to generate better ideas and options. There are numerous examples of this in urban planning – the use of Minecraft by the UN in Nairobi to collect and visualise ideas for the future development of the community, or the Carticipe platform in France, which residents can use to indicate changes they would like to see in their city on a map.

It’s all very well to create a digital suggestion box, but there is a lot of evidence that deliberation and debate lead to much better ideas. Platforms like Better Reykjavik include a debate function for any idea that is proposed. Based on feedback, the person who submitted the idea can then edit it, before putting it to a public vote – only then, if the proposal gets the required number of votes, is it sent to the city council for debate.

  • Better decision making

As well as enabling better decision making by giving city government employees better data and better ideas, digital technologies can give the power to make decisions directly to citizens. This is best encapsulated by participatory budgeting – which involves allowing citizens to decide how a percentage of the city budget is spent. Participatory budgeting emerged in Brazil in the 1980s, but digital technologies help city governments reach a much larger audience. ‘Madame Mayor, I have an idea’ is a participatory budgeting process that lets citizens propose and vote on ideas for projects in Paris. Over 20,000 people have registered on the platform and the pilot phase of the project received over 5,000 submissions.

  3. Remember that there’s a world beyond the internet…

  4. Pick the right question for the right crowd…

When we talk to city governments and local authorities, they express a number of fears about citizen engagement: fear of relying on the public for the delivery of critical services, fear of being drowned in feedback, and fear of not being inclusive – only engaging with those who are online and motivated. Hopefully, thinking through the issues discussed above may help alleviate some of these fears and make city governments more enthusiastic about digital engagement….(More)

How Twitter gives scientists a window into human happiness and health


 at the Conversation: “Since its public launch 10 years ago, Twitter has been used as a social networking platform among friends, an instant messaging service for smartphone users and a promotional tool for corporations and politicians.

But it’s also been an invaluable source of data for researchers and scientists – like myself – who want to study how humans feel and function within complex social systems.

By analyzing tweets, we’ve been able to observe and collect data on the social interactions of millions of people “in the wild,” outside of controlled laboratory experiments.

It’s enabled us to develop tools for monitoring the collective emotions of large populations, find the happiest places in the United States and much more.

So how, exactly, did Twitter become such a unique resource for computational social scientists? And what has it allowed us to discover?

Twitter’s biggest gift to researchers

On July 15, 2006, Twttr (as it was then known) publicly launched as a “mobile service that helps groups of friends bounce random thoughts around with SMS.” The ability to send free 140-character group texts drove many early adopters (myself included) to use the platform.

With time, the number of users exploded: from 20 million in 2009 to 200 million in 2012 and 310 million today. Rather than communicating directly with friends, users would simply tell their followers how they felt, respond to news positively or negatively, or crack jokes.

For researchers, Twitter’s biggest gift has been the provision of large quantities of open data. Twitter was one of the first major social networks to provide data samples through something called Application Programming Interfaces (APIs), which enable researchers to query Twitter for specific types of tweets (e.g., tweets that contain certain words), as well as information on users.
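To give a sense of what that querying looked like in practice, here is a minimal sketch of a keyword search against the v1.1-era REST API. The bearer token is a placeholder, and Twitter’s API and access terms have changed substantially since, so the endpoint and parameters should be read as a historical illustration rather than a current recipe.

```python
# Illustrative sketch of a keyword query against the (v1.1-era) Twitter
# search API. The token is a placeholder; the API has since changed.
import requests

BEARER_TOKEN = "YOUR_TOKEN_HERE"  # placeholder credential

response = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={"q": "flu OR influenza", "lang": "en", "count": 100},
    timeout=10,
)
response.raise_for_status()

for tweet in response.json().get("statuses", []):
    print(tweet["created_at"], tweet["text"][:80])
```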

This led to an explosion of research projects exploiting this data. Today, a Google Scholar search for “Twitter” produces six million hits, compared with five million for “Facebook.” The difference is especially striking given that Facebook has roughly five times as many users as Twitter (and is two years older).

Twitter’s generous data policy undoubtedly led to some excellent free publicity for the company, as interesting scientific studies got picked up by the mainstream media.

Studying happiness and health

With traditional census data slow and expensive to collect, open data feeds like Twitter have the potential to provide a real-time window to see changes in large populations.

The University of Vermont’s Computational Story Lab was founded in 2006 and studies problems across applied mathematics, sociology and physics. Since 2008, the Story Lab has collected billions of tweets through Twitter’s “Gardenhose” feed, an API that streams a random sample of 10 percent of all public tweets in real time.

I spent three years at the Computational Story Lab and was lucky to be a part of many interesting studies using this data. For example, we developed a hedonometer that measures the happiness of the Twittersphere in real time. By focusing on geolocated tweets sent from smartphones, we were able to map the happiest places in the United States. Perhaps unsurprisingly, we found Hawaii to be the happiest state and wine-growing Napa the happiest city for 2013.

[Figure: A map of 13 million geolocated U.S. tweets from 2013, colored by happiness, with red indicating happiness and blue indicating sadness. Source: PLOS ONE.]
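The hedonometer rests on a simple idea: average crowd-rated happiness scores of individual words over large volumes of text. Below is a toy sketch of that scoring step, with a tiny invented lexicon standing in for the roughly 10,000-word labMT list (scored on a 1–9 scale) that the real instrument uses.

```python
# Toy sketch of hedonometer-style scoring: average per-word happiness over
# a text. The mini-lexicon is invented; the real labMT list is ~10,000 words.
TINY_LEXICON = {
    "happy": 8.3, "love": 8.4, "beach": 7.0, "sunshine": 7.9,
    "traffic": 3.2, "sick": 2.5, "hate": 2.3, "rain": 4.4,
}

def happiness(text, lexicon=TINY_LEXICON):
    """Average lexicon score of the scored words in a text (None if no match)."""
    words = [w.strip(".,!?#@").lower() for w in text.split()]
    scores = [lexicon[w] for w in words if w in lexicon]
    return sum(scores) / len(scores) if scores else None

tweets = [
    "Sunshine and the beach, love it here!",
    "Stuck in traffic again and feeling sick",
]
for t in tweets:
    print(f"{happiness(t):.2f}  {t}")
```

Averaged over millions of tweets per day, and restricted to geolocated ones, this kind of score is what lets researchers rank states and cities by happiness.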

These studies had deeper applications: Correlating Twitter word usage with demographics helped us understand underlying socioeconomic patterns in cities. For example, we could link word usage with health factors like obesity, so we built a lexicocalorimeter to measure the “caloric content” of social media posts. Tweets from a particular region that mentioned high-calorie foods increased the “caloric content” of that region, while tweets that mentioned exercise activities decreased our metric. We found that this simple measure correlates with other health and well-being metrics. In other words, tweets were able to give us a snapshot, at a specific moment in time, of the overall health of a city or region.

Using the richness of Twitter data, we’ve also been able to see people’s daily movement patterns in unprecedented detail. Understanding human mobility patterns, in turn, has the capacity to transform disease modeling, opening up the new field of digital epidemiology….(More)”

How technology disrupted the truth


 in The Guardian: “Social media has swallowed the news – threatening the funding of public-interest reporting and ushering in an era when everyone has their own facts. But the consequences go far beyond journalism…

When a fact begins to resemble whatever you feel is true, it becomes very difficult for anyone to tell the difference between facts that are true and “facts” that are not. The leave campaign was well aware of this – and took full advantage, safe in the knowledge that the Advertising Standards Authority has no power to police political claims. A few days after the vote, Arron Banks, Ukip’s largest donor and the main funder of the Leave.EU campaign, told the Guardian that his side knew all along that facts would not win the day. “It was taking an American-style media approach,” said Banks. “What they said early on was ‘Facts don’t work’, and that’s it. The remain campaign featured fact, fact, fact, fact, fact. It just doesn’t work. You have got to connect with people emotionally. It’s the Trump success.”

It was little surprise that some people were shocked after the result to discover that Brexit might have serious consequences and few of the promised benefits. When “facts don’t work” and voters don’t trust the media, everyone believes in their own “truth” – and the results, as we have just seen, can be devastating.

How did we end up here? And how do we fix it?

Twenty-five years after the first website went online, it is clear that we are living through a period of dizzying transition. For 500 years after Gutenberg, the dominant form of information was the printed page: knowledge was primarily delivered in a fixed format, one that encouraged readers to believe in stable and settled truths.

Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumour, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.

What is common to these struggles – and what makes their resolution an urgent matter – is that they all involve the diminishing status of truth. This does not mean that there are no truths. It simply means, as this year has made very clear, that we cannot agree on what those truths are, and when there is no consensus about the truth and no way to achieve it, chaos soon follows.

Increasingly, what counts as a fact is merely a view that someone feels to be true – and technology has made it very easy for these “facts” to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago). A dubious story about Cameron and a pig appears in a tabloid one morning, and by noon, it has flown around the world on social media and turned up in trusted news sources everywhere. This may seem like a small matter, but its consequences are enormous.

“The Truth”, as Peter Chippindale and Chris Horrie wrote in Stick It Up Your Punter!, their history of the Sun newspaper, is a “bald statement which every newspaper prints at its peril”. There are usually several conflicting truths on any given subject, but in the era of the printing press, words on a page nailed things down, whether they turned out to be true or not. The information felt like the truth, at least until the next day brought another update or a correction, and we all shared a common set of facts.

This settled “truth” was usually handed down from above: an established truth, often fixed in place by an establishment. This arrangement was not without flaws: too much of the press often exhibited a bias towards the status quo and a deference to authority, and it was prohibitively difficult for ordinary people to challenge the power of the press. Now, people distrust much of what is presented as fact – particularly if the facts in question are uncomfortable, or out of sync with their own views – and while some of that distrust is misplaced, some of it is not.

In the digital age, it is easier than ever to publish false information, which is quickly shared and taken to be true – as we often see in emergency situations, when news is breaking in real time. To pick one example among many, during the November 2015 Paris terror attacks, rumours quickly spread on social media that the Louvre and Pompidou Centre had been hit, and that François Hollande had suffered a stroke. Trusted news organisations are needed to debunk such tall tales.

Sometimes rumours like these spread out of panic, sometimes out of malice, and sometimes out of deliberate manipulation, in which a corporation or regime pays people to convey their message. Whatever the motive, falsehoods and facts now spread the same way, through what academics call an “information cascade”. As the legal scholar and online-harassment expert Danielle Citron describes it, “people forward on what others think, even if the information is false, misleading or incomplete, because they think they have learned something valuable.” This cycle repeats itself, and before you know it, the cascade has unstoppable momentum. You share a friend’s post on Facebook, perhaps to show kinship or agreement or that you’re “in the know”, and thus you increase the visibility of their post to others.

Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want – which means that the version of the world we encounter every day in our own personal stream has been invisibly curated to reinforce our pre-existing beliefs. When Eli Pariser, the co-founder of Upworthy, coined the term “filter bubble” in 2011, he was talking about how the personalised web – and in particular Google’s personalised search function, which means that no two people’s Google searches are the same – means that we are less likely to be exposed to information that challenges us or broadens our worldview, and less likely to encounter facts that disprove false information that others have shared.

Pariser’s plea, at the time, was that those running social media platforms should ensure that “their algorithms prioritise countervailing views and news that’s important, not just the stuff that’s most popular or most self-validating”. But in less than five years, thanks to the incredible power of a few social platforms, the filter bubble that Pariser described has become much more extreme.

On the day after the EU referendum, in a Facebook post, the British internet activist and mySociety founder, Tom Steinberg, provided a vivid illustration of the power of the filter bubble – and the serious civic consequences for a world where information flows largely through social networks:

I am actively searching through Facebook for people celebrating the Brexit leave victory, but the filter bubble is SO strong, and extends SO far into things like Facebook’s custom search that I can’t find anyone who is happy *despite the fact that over half the country is clearly jubilant today* and despite the fact that I’m *actively* looking to hear what they are saying.

This echo-chamber problem is now SO severe and SO chronic that I can only beg any friends I have who actually work for Facebook and other major social media and technology to urgently tell their leaders that to not act on this problem now is tantamount to actively supporting and funding the tearing apart of the fabric of our societies … We’re getting countries where one half just doesn’t know anything at all about the other.

But asking technology companies to “do something” about the filter bubble presumes that this is a problem that can be easily fixed – rather than one baked into the very idea of social networks that are designed to give you what you and your friends want to see….(More)”

There aren’t any rules on how social scientists use private data. Here’s why we need them.


 at SSRC: “The politics of social science access to data are shifting rapidly in the United States as in other developed countries. It used to be that states were the most important source of data on their citizens, economy, and society. States needed to collect and aggregate large amounts of information for their own purposes. They gathered this directly—e.g., through censuses of individuals and firms—and also constructed relevant indicators. Sometimes state agencies helped to fund social science projects in data gathering, such as the National Science Foundation’s funding of the American National Election Survey over decades. While scholars such as James Scott and John Brewer disagreed about the benefits of state data gathering, they recognized the state’s primary role.

In this world, the politics of access to data were often the politics of engaging with the state. Sometimes the state was reluctant to provide information, either for ethical reasons (e.g. the privacy of its citizens) or self-interest. However, democratic states did typically provide access to standard statistical series and the like, and where they did not, scholars could bring pressure to bear on them. This led to well-understood rules about the common availability of standard data for many research questions and built the foundations for standard academic practices. It was relatively easy for scholars to criticize each other’s work when they were drawing on common sources. This had costs—scholars tended to ask the kinds of questions that readily available data allowed them to ask—but also significant benefits. In particular, it made research more easily reproducible.

We are now moving to a very different world. On the one hand, open data initiatives in government are making more data available than in the past (albeit often without much in the way of background resources or documentation). On the other, for many research purposes, large firms such as Google or Facebook (or even Apple) have much better data than the government. The new universe of private data is reshaping social science research in some ways that are still poorly understood. Here are some of the issues that we need to think about:…(More)”

Democracy Does Not Cause Growth: The Importance of Endogeneity Arguments


IADB Working Paper: ”This article challenges recent findings that democracy has sizable effects on economic growth. As extensive political science research indicates that economic turmoil is responsible for causing or facilitating many democratic transitions, the paper focuses on this endogeneity concern. Using a worldwide survey of 165 country-specific democracy experts conducted for this study, the paper separates democratic transitions into those occurring for reasons related to economic turmoil, here called endogenous, and those grounded in reasons more exogenous to economic growth. The behavior of economic growth following these more exogenous democratizations strongly indicates that democracy does not cause growth. Consequently, the common positive association between democracy and economic growth is driven by endogenous democratization episodes (i.e., due to faulty identification)….(More)”