Improving Crowdsourcing and Citizen Science as a Policy Mechanism for NASA


Paper by Brittany Balcom: “This article examines citizen science projects, defined as “a form of open collaboration where members of the public participate in the scientific process, including identifying research questions, collecting and analyzing the data, interpreting the results, and problem solving,” as an effective and innovative tool for National Aeronautics and Space Administration (NASA) science in line with the Obama Administration’s Open Government Directive. Citizen science projects allow volunteers with no technical training to participate in analysis of large sets of data that would otherwise constitute prohibitively tedious and lengthy work for research scientists. Zooniverse.com hosts a multitude of popular space-focused citizen science projects, many of which have been extraordinarily successful and have enabled new research publications and major discoveries. This article takes a multifaceted look at such projects by examining the benefits of citizen science, effective game design, and current desktop computer and mobile device usage trends. It offers suggestions of potential research topics to be studied with emerging technologies, policy considerations, and opportunities for outreach. This analysis includes an overview of other crowdsourced research methods such as distributed computing and contests. New research and data analysis of mobile phone usage, scientific curiosity, and political engagement among Zooniverse.com project participants has been conducted for this study…(More)”

Rethinking Smart Cities From The Ground Up


New report by Tom Saunders and Peter Baeck (NESTA): “This report tells the stories of cities around the world – from Beijing to Amsterdam, and from London to Jakarta – that are addressing urban challenges by using digital technologies to engage and enable citizens.

Key findings

  • Many ‘top down’ smart city ideas have failed to deliver on their promise, combining high costs and low returns.
  • ‘Collaborative technologies’ offer cities another way to make smarter use of resources, smarter ways of collecting data and smarter ways to make decisions.
  • Collaborative technologies can also help citizens themselves shape the future of their cities.
  • We offer five recommendations for city governments that want to make their cities smarter.

As cities bring people together to live, work and play, they amplify their ability to create wealth and ideas. But scale and density also bring acute challenges: how to move around people and things; how to provide energy; how to keep people safe.

‘Smart cities’ offer sensors, ‘big data’ and advanced computing as answers to these challenges, but they have often faced criticism for being too concerned with hardware rather than with people.

In this report we argue that successful smart cities of the future will combine the best aspects of technology infrastructure while making the most of the growing potential of ‘collaborative technologies’, technologies that enable greater collaboration between urban communities and between citizens and city governments.

How will this work in practice? Drawing on examples from all around the world we investigate four emerging methods which are helping city governments engage and enable citizens: the collaborative economy, crowdsourcing data, collective intelligence and crowdfunding.

Policy recommendations

  1. Set up a civic innovation lab to drive innovation in collaborative technologies.
  2. Use open data and open platforms to mobilise collective knowledge.
  3. Take human behaviour as seriously as technology.
  4. Invest in smart people, not just smart technology.
  5. Spread the potential of collaborative technologies to all parts of society….(More)”

Please, Corporations, Experiment on Us


Michelle N. Meyer and Christopher Chabris in the New York Times: “Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?

The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise.

But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.

Companies — and other powerful actors, including lawmakers, educators and doctors — “experiment” on us without our consent every time they implement a new policy, practice or product without knowing its consequences. When Facebook started, it created a radical new way for people to share emotionally laden information, with unknown effects on their moods. And when OkCupid started, it advised users to go on dates based on an algorithm without knowing whether it worked.

Why does one “experiment” (i.e., introducing a new product) fail to raise ethical concerns, whereas a true scientific experiment (i.e., introducing a variation of the product to determine the comparative safety or efficacy of the original) sets off ethical alarms?

In a forthcoming article in the Colorado Technology Law Journal, one of us (Professor Meyer) calls this the “A/B illusion” — the human tendency to focus on the risk, uncertainty and power asymmetries of running a test that compares A to B, while ignoring those factors when A is simply imposed by itself.

Consider a hypothetical example. A chief executive is concerned that her employees are taking insufficient advantage of the company’s policy of matching contributions to retirement savings accounts. She suspects that telling her workers how many others their age are making the maximum contribution would nudge them to save more, so she includes this information in personalized letters to them.

If contributions go up, maybe the new policy worked. But perhaps contributions would have gone up anyhow (say, because of an improving economy). If contributions go down, it might be because the policy failed. Or perhaps a declining economy is to blame, and contributions would have gone down even more without the letter.

You can’t answer these questions without doing a true scientific experiment — in technology jargon, an “A/B test.” The company could randomly assign its employees to receive either the old enrollment packet or the new one that includes the peer contribution information, and then statistically compare the two groups of employees to see which saved more.
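
The randomized comparison described above can be sketched in a few lines of Python. This is a toy simulation with invented contribution figures and an invented effect size, not data from any real company:

```python
import random
import statistics

def run_ab_test(employees, treatment_effect=200.0, seed=42):
    """Randomly assign each employee to the old letter (A) or the new
    peer-information letter (B), then compare mean retirement contributions.
    All dollar figures here are simulated, purely for illustration."""
    rng = random.Random(seed)
    group_a, group_b = [], []
    for _ in range(employees):
        base = rng.gauss(3000, 500)    # baseline annual contribution ($)
        if rng.random() < 0.5:         # coin-flip assignment: old letter
            group_a.append(base)
        else:                          # new letter, with a simulated effect
            group_b.append(base + treatment_effect)
    return statistics.mean(group_a), statistics.mean(group_b)

mean_a, mean_b = run_ab_test(10_000)
print(f"old letter: ${mean_a:,.0f}   new letter: ${mean_b:,.0f}")
```

Here the treatment effect is baked into the simulation; in a real A/B test it is exactly the unknown quantity that randomization lets you estimate, since chance assignment washes out confounders like an improving economy.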

Let’s be clear: This is experimenting on people without their consent, and the absence of consent is essential to the validity of the entire endeavor. If the C.E.O. were to tell the workers that they had been randomly assigned to receive one of two different letters, and why, that information would be likely to distort their choices.

Our chief executive isn’t so hypothetical. Economists do help corporations run such experiments, but many managers chafe at debriefing their employees afterward, fearing that they will be outraged that they were experimented on without their consent. A company’s unwillingness to debrief, in turn, can be a deal-breaker for the ethics boards that authorize research. So those C.E.O.s do what powerful people usually do: Pick the policy that their intuition tells them will work best, and apply it to everyone….(More)”

Secrecy and Publicity in Votes and Debates


Book edited by Jon Elster: “In the spirit of Jeremy Bentham’s Political Tactics, this volume offers the first comprehensive discussion of the effects of secrecy and publicity on debates and votes in committees and assemblies. The contributors – sociologists, political scientists, historians, and legal scholars – consider the micro-technology of voting (the devil is in the detail), the historical relations between the secret ballot and universal suffrage, the use and abolition of secret voting in parliamentary decisions, and the sometimes perverse effects of the drive for greater openness and transparency in public affairs. The authors also discuss the normative questions of secret versus public voting in national elections and of optimal mixes of secrecy and publicity, as well as the opportunities for strategic behavior created by different voting systems. Together with two previous volumes on Collective Wisdom (Cambridge, 2012) and Majority Decisions (Cambridge, 2014), the book sets a new standard for interdisciplinary work on collective decision-making….(More)”

Forging Trust Communities: How Technology Changes Politics


Book by Irene S. Wu: “Bloggers in India used social media and wikis to broadcast news and bring humanitarian aid to tsunami victims in South Asia. Terrorist groups like ISIS pour out messages and recruit new members on websites. The Internet is the new public square, bringing to politics a platform on which to create community at both the grassroots and bureaucratic level. Drawing on historical and contemporary case studies from more than ten countries, Irene S. Wu’s Forging Trust Communities argues that the Internet, and the technologies that predate it, catalyze political change by creating new opportunities for cooperation. The Internet does not simply enable faster and easier communication, but makes it possible for people around the world to interact closely, reciprocate favors, and build trust. The information and ideas exchanged by members of these cooperative communities become key sources of political power akin to military might and economic strength.

Wu illustrates the rich world history of citizens and leaders exercising political power through communications technology. People in nineteenth-century China, for example, used the telegraph and newspapers to mobilize against the emperor. In 1970, Taiwanese cable television gave voice to a political opposition demanding democracy. Both Qatar (in the 1990s) and Great Britain (in the 1930s) relied on public broadcasters to enhance their influence abroad. Additional case studies from Brazil, Egypt, the United States, Russia, India, the Philippines, and Tunisia reveal how various technologies function to create new political energy, enabling activists to challenge institutions while allowing governments to increase their power at home and abroad.

Forging Trust Communities demonstrates that the way people receive and share information through network communities reveals as much about their political identity as their socioeconomic class, ethnicity, or religion. Scholars and students in political science, public administration, international studies, sociology, and the history of science and technology will find this to be an insightful and indispensable work…(More)”

Introducing the Governance Data Alliance


“The overall assumption of the Governance Data Alliance is that governance data can contribute to improved sustainable economic and human development outcomes and democratic accountability in all countries. The contribution that governance data will make to those outcomes will of course depend on a whole range of issues that will vary across contexts; development processes, policy processes, and the role that data plays vary considerably. Nevertheless, there are some core requirements that need to be met if data is to make a difference, and articulating them can provide a framework to help us understand and improve the impact that data has on development and accountability across different contexts.

We also collectively make another implicit (and important) assumption: that the current state of affairs is vastly insufficient when it comes to the production and usage of high-quality governance data. In other words, the status quo needs to be significantly improved upon. Data gathered from participants in the April 2014 design session help to paint that picture in granular terms. Data production remains highly irregular and ad hoc; data usage does not match data production in many cases (e.g. users want data that don’t exist and do not use data that is currently produced); production costs remain high and inconsistent across producers despite possibilities for economies of scale; and feedback loops between governance data producers and governance data users are either non-existent or rarely employed. We direct readers to http://dataalliance.globalintegrity.org for a fuller treatment of those findings.

Three requirements need to be met if governance data is to lead to better development and accountability outcomes, whether those outcomes are about core “governance” issues such as levels of inclusion, or about service delivery and human development outcomes that may be shaped by the quality of governance. Those requirements are:

  • The availability of governance data.
  • The quality of governance data, including its usability and salience.
  • The informed use of governance data.

(Or to use the metaphor of markets, we face a series of market failures: supply of data is inconsistent and not uniform; user demand cannot be efficiently channeled to suppliers to redirect their production to address those deficiencies; and transaction costs abound through non-existent data standards and lack of predictability.)

If data are not available about those aspects of governance that are expected to have an impact on development outcomes and democratic accountability, no progress will be made. The risk is that data about key issues will be lacking, or that there will be gaps in coverage, whether country coverage, time periods covered, or sectors, or that data sets produced by different actors may not be comparable. This might come about for reasons including the following: a lack of knowledge – amongst producers, and amongst producers and users – about what data is needed and what data is available; high costs, and limited resources to invest in generating data; and, institutional incentives and structures (e.g. lack of autonomy, inappropriate mandate, political suppression of sensitive data, organizational dysfunction – relating, for instance, to National Statistical Offices) that limit the production of governance data….

What A Governance Data Alliance Should Do (Or, Making the Market Work)

During the several months of creative exploration around possibilities for a Governance Data Alliance, dozens of activities were identified as possible solutions (in whole or in part) to the challenges identified above. This note identifies what we believe to be the most important and immediate activities that an Alliance should undertake, knowing that other activities can and should be rolled into an Alliance work plan in the out years as the initiative matures and early successes (and failures) are achieved and digested.

A brief summary of the proposals that follow:

  1. Design and implement a peer-to-peer training program between governance data producers to improve the quality and salience of existing data.
  2. Develop a lightweight data standard to be adopted by producer organizations to make it easier for users to consume governance data.
  3. Mine the 2014 Reform Efforts Survey to understand who actually uses which governance data, currently, around the world.
  4. Leverage the 2014 Reform Efforts Survey “plumbing” to field customized follow-up surveys to better assess what data users seek in future governance data.
  5. Pilot (on a regional basis) coordinated data production amongst producer organizations to fill coverage gaps, reduce redundancies, and respond to actual usage and user preferences….(More)”
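
The “lightweight data standard” of proposal 2 is not specified further in the excerpt, but its general shape can be pictured as a required-metadata checklist that producers validate against before publishing. Everything below — the field names, types, and validator — is a hypothetical illustration, not an actual Alliance specification:

```python
# Invented sketch of a minimal metadata standard for governance datasets.
# Field names are illustrative assumptions, not taken from any real spec.
REQUIRED_FIELDS = {
    "title": str,        # human-readable dataset name
    "producer": str,     # publishing organization
    "country": str,      # country covered, e.g. "BR"
    "period": str,       # time period covered, e.g. "2014"
    "methodology": str,  # how the data were generated
    "license": str,      # terms of reuse
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type for field: {field}")
    return problems

record = {"title": "Budget Transparency Index", "producer": "Example NGO",
          "country": "BR", "period": "2014", "methodology": "expert survey",
          "license": "CC-BY"}
print(validate(record))   # an empty list: the record conforms
```

Even a checklist this small would address the transaction-cost problem the excerpt describes: users could consume any conforming dataset without renegotiating formats producer by producer.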

Big Data’s Impact on Public Transportation


InnovationEnterprise: “Getting around any big city can be a real pain. Traffic jams seem to be a constant complaint, and simply getting to work can turn into a chore, even on the best of days. With more people than ever before flocking to the world’s major metropolitan areas, the issues of crowding and inefficient transportation only stand to get much worse. Luckily, the traditional methods of managing public transportation could be on the verge of changing thanks to advances in big data. While big data use cases have been a part of the business world for years now, city planners and transportation experts are quickly realizing how valuable it can be when making improvements to city transportation. That hour-long commute may no longer be something travelers will have to worry about in the future.

In much the same way that big data has transformed businesses around the world by offering greater insight into the behavior of their customers, it can also provide a deeper look at travellers. Like retail customers, commuters have certain patterns they like to keep to when on the road or riding the rails. Travellers also have their own motivations and desires, and getting to the heart of their actions is all part of what big data analytics is about. By analyzing these actions and the factors that go into them, transportation experts can gain a better understanding of why people choose certain routes or why they prefer one method of transportation over another. Based on these findings, planners can then figure out where to focus their efforts and respond to the needs of millions of commuters.

Gathering the accurate data needed to make knowledgeable decisions regarding city transportation can be a challenge in itself, especially considering how many people commute to work in a major city. New methods of data collection have made that effort easier and a lot less costly. One way that’s been implemented is through the gathering of call data records (CDR). From regular transactions made from mobile devices, information about location, time, and duration of an action (like a phone call) can give data scientists the necessary details on where people are traveling to, how long it takes them to get to their destination, and other useful statistics. The valuable part of this data is the sample size, which provides a much bigger picture of the transportation patterns of travellers.
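
The CDR approach described above can be illustrated with a toy sketch. The record format, zone names, and trip heuristic below are invented for illustration — real CDR schemas and trip-inference methods vary by carrier and study:

```python
from collections import Counter
from datetime import datetime

# Toy call-data records: (user_id, timestamp, zone). Mapping cell towers
# to city zones is an illustrative assumption.
records = [
    ("u1", "2015-06-01 08:05", "suburb-N"),
    ("u1", "2015-06-01 08:55", "downtown"),
    ("u2", "2015-06-01 07:40", "suburb-E"),
    ("u2", "2015-06-01 08:35", "downtown"),
    ("u1", "2015-06-01 17:30", "downtown"),
    ("u1", "2015-06-01 18:20", "suburb-N"),
]

def od_flows(records):
    """Count origin->destination movements and trip durations (minutes)
    from each user's consecutive pings."""
    by_user = {}
    for user, ts, zone in sorted(records):   # sort by user, then time
        by_user.setdefault(user, []).append(
            (datetime.strptime(ts, "%Y-%m-%d %H:%M"), zone))
    flows, durations = Counter(), []
    for pings in by_user.values():
        for (t0, z0), (t1, z1) in zip(pings, pings[1:]):
            if z0 != z1:                     # zone changed => infer a trip
                flows[(z0, z1)] += 1
                durations.append((t1 - t0).total_seconds() / 60)
    return flows, durations

flows, durations = od_flows(records)
print(flows.most_common(1))   # busiest origin-destination pair
```

At real scale the same grouping logic runs over millions of records, which is precisely the sample-size advantage the article points to: the aggregate flows reveal where people travel and how long it takes them, without surveying anyone.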

That’s not the only way cities are using big data to improve public transportation though. Melbourne in Australia has long been considered one of the world’s best cities for public transit, and much of that is thanks to big data. With big data and ad hoc analysis, Melbourne’s acclaimed tram system can automatically reconfigure routes in response to sudden problems or challenges, such as a major city event or natural disaster. Data is also used in this system to fix problems before they turn serious. Sensors located in equipment like tram cars and tracks can detect when maintenance is needed on a specific part. Crews are quickly dispatched to repair what needs fixing, and the tram system continues to run smoothly. This is similar to the idea of the Internet of Things, wherein embedded sensors collect data that is then analyzed to identify problems and improve efficiency.

Sao Paulo, Brazil, is another city that sees the value of using big data for its public transportation. The city’s efforts concentrate on improving the management of its bus fleet. With big data collected in real time, the city can get a more accurate picture of just how many people are riding the buses, which routes are on time, how drivers respond to changing conditions, and many other factors. Based on this information, Sao Paulo can optimize its operations, providing added vehicles where demand is genuine whilst finding which routes are the most efficient. Without big data analytics, this process would have taken a very long time and would likely be hit-or-miss in terms of accuracy, but now, big data provides more certainty in a shorter amount of time….(More)”

Want to fix the world? Start by making clean energy a default setting


Chris Mooney in the Washington Post: “In recent years, psychologists and behavioral scientists have begun to decipher why we make the choices that we do when it comes to using energy. And the bottom line is that it’s hard to characterize those choices as fully “rational.”

Rather than acting like perfect homo economicuses, they’ve found, we’re highly swayed by the energy use of our neighbors and friends — peer pressure, basically. At the same time, we’re also heavily biased by the status quo — we delay in switching to new energy choices, even when they make a great deal of economic sense.

All of which has led to the popular idea of “nudging,” or the notion that you can subtly sway people to change their behavior by altering, say, the environment in which they make choices, or the kinds of information they receive. Not in a coercive way, but rather, through gentle tweaks and prompts. And now, a major study in Nature Climate Change demonstrates that one very popular form of energy-use nudging that might be called “default switching,” or the “default effect,” does indeed work — and indeed, could possibly work at a very large scale.

“This is the first demonstration of a large-scale nudging effect using defaults in the domain of energy choices,” says Sebastian Lotz of Stanford University and the University of Lausanne in Switzerland, who conducted the research with Felix Ebeling of the University of Cologne in Germany….(More)”

Flawed Humans, Flawed Justice


Adam Benforado in the New York Times on using “lessons from behavioral science to make police and courts more fair”: WHAT would it take to achieve true criminal justice in America?

Imagine that we got rid of all of the cops who cracked racist jokes and prosecutors blinded by a thirst for power. Imagine that we cleansed our courtrooms of lying witnesses and foolish jurors. Imagine that we removed every judge who thought the law should bend to her own personal agenda and every sadistic prison guard.

We would certainly feel just then. But we would be wrong.

We would still have unarmed kids shot in the back and innocent men and women sentenced to death. We would still have unequal treatment, disregarded rights and profound mistreatment.

The reason is simple and almost entirely overlooked: Our legal system is based on an inaccurate model of human behavior. Until recently, we had no way of understanding what was driving people’s thoughts, perceptions and actions in the criminal arena. So, we built our institutions on what we had: untested assumptions about what deceit looks like, how memories work and when punishment is merited.

But we now have tools — from experimental methods and data collection approaches to brain-imaging technologies — that provide an incredible opportunity to establish a new and robust foundation.

Our justice system must be reconstructed upon scientific fact. We can start by acknowledging what the data says about the fundamental flaws in our current legal processes and structures.

Consider the evidence that we treat as nearly unassailable proof of guilt at trial — an unwavering eyewitness, a suspect’s signed confession or a forensic match to the crime scene.

While we charge tens of thousands of people with crimes each year after they are identified in police lineups, research shows that eyewitnesses choose an innocent person roughly one-third of the time. Our memories can fail us because we’re frightened. They can be altered by the word choice of a detective. They can be corrupted by previously seeing someone’s image on a social media site.

Picking out lying suspects from their body language is ineffective. And trying then to gain a confession by exaggerating the strength of the evidence and playing down the seriousness of the offense can encourage people to admit to terrible things they didn’t do.

Even seemingly objective forensic analysis is far from incorruptible. Recent data shows that fingerprint — and even DNA — matches are significantly more likely when the forensic expert is aware that the sample comes from someone the police believe is guilty.

With the aid of psychology, we see there’s a whole host of seemingly extraneous forces influencing behavior and producing systematic distortions. But they remain hidden because they don’t fit into our familiar legal narratives.

We assume that the specific text of the law is critical to whether someone is convicted of rape, but research shows that the details of the criminal code — whether it includes a “force” requirement or excuses a “reasonably mistaken” belief in consent — can be irrelevant. What matters are the backgrounds and identities of the jurors.

When a black teenager is shot by a police officer, we expect to find a bigot at the trigger.

But studies suggest that implicit bias, rather than explicit racism, is behind many recent tragedies. Indeed, simulator experiments show that the biggest danger posed to young African-American men may not be hate-filled cops, but well-intentioned police officers exposed to pervasive, damaging stereotypes that link the concepts of blackness and violence.

Likewise, Americans have been sold a myth that there are two kinds of judges — umpires and activists — and that being unbiased is a choice that a person makes. But the truth is that all judges are swayed by countless forces beyond their conscious awareness or control. It should have no impact on your case, for instance, whether your parole hearing is scheduled first thing in the morning or right before lunch, but when scientists looked at real parole boards, they found that judges were far more likely to grant petitions at the beginning of the day than they were midmorning.

The choice of where to place the camera in an interrogation room may seem immaterial, yet experiments show that it can affect whether a confession is determined to be coerced. When people watch a recording with the camera behind the detective, they are far more likely to find that the confession was voluntary than when watching the interactions from the perspective of the suspect.

With such challenges to our criminal justice system, what can possibly be done? The good news is that an evidence-based approach also illuminates the path forward.

Once we have clear data that something causes a bias, we can then figure out how to remove that influence….(More)”

The Civic Organization and the Digital Citizen


New book by Chris Wells: “The powerful potential of digital media to engage citizens in political actions has now crossed our news screens many times. But scholarly focus has tended to be on “networked,” anti-institutional forms of collective action, to the neglect of advocacy and service organizations. This book investigates the changing fortunes of the citizen-civil society relationship by exploring how social changes and innovations in communication technology are transforming the information expectations and preferences of many citizens, especially young citizens. In doing so, it is the first work to bring together theories of civic identity change with research on civic organizations. Specifically, it argues that a shift in “information styles” may help to explain the disjuncture felt by many young people when it comes to institutional participation and politics.

The book theorizes two paradigms of information style: a dutiful style, which was rooted in the society, communication system and citizen norms of the modern era, and an actualizing style, which constitutes the set of information practices and expectations of the young citizens of late modernity for whom interactive digital media are the norm. Hypothesizing that civil society institutions have difficulty adapting to the norms and practices of the actualizing information style, two empirical studies apply the dutiful/actualizing framework to innovative content analyses of organizations’ online communications-on their websites, and through Facebook. Results demonstrate that with intriguing exceptions, most major civil society organizations use digital media more in line with dutiful information norms than actualizing ones: they tend to broadcast strategic messages to an audience of receivers, rather than encouraging participation or exchange among an active set of participants. The book concludes with a discussion of the tensions inherent in bureaucratic organizations trying to adapt to an actualizing information style, and recommendations for how they may more successfully do so….(More)”