Tanja Aitamurto & Kaiping Chen in The Theory and Practice of Legislation: “While national and local governments increasingly deploy crowdsourcing in lawmaking as an open government practice, it remains unclear how crowdsourcing creates value when it is applied in policymaking. Therefore, in this article, we examine value creation in crowdsourcing for public policymaking. We introduce a framework for analysing value creation in public policymaking in the following three dimensions: democratic, epistemic and economic. Democratic value is created by increasing transparency, accountability, inclusiveness and deliberation in crowdsourced policymaking. Epistemic value is developed when crowdsourcing serves as a knowledge search mechanism and a learning context. Economic value is created when crowdsourcing makes knowledge search in policymaking more efficient and enables government to produce policies that better address citizens’ needs and societal issues. We show how these tenets of value creation are manifest in crowdsourced policymaking by drawing on instances of crowdsourced lawmaking, and we also discuss the contingencies and challenges preventing value creation…(More)”
Information for accountability: Transparency and citizen engagement for improved service delivery in education systems
Lindsay Read and Tamar Manuelyan Atinc at Brookings: “There is a wide consensus among policymakers and practitioners that while access to education has improved significantly for many children in low- and middle-income countries, learning has not kept pace. A large amount of research that has attempted to pinpoint the reasons behind this quality deficit in education has revealed that providing extra resources such as textbooks, learning materials, and infrastructure is largely ineffective in improving learning outcomes at the system level without accompanying changes to the underlying structures of education service delivery and associated systems of accountability.
Information is a key building block of a wide range of strategies that attempt to tackle weaknesses in service delivery and accountability at the school level, even where political systems disappoint at the national level. The dissemination of more and better-quality information is expected to empower parents and communities to make better decisions about their children’s schooling and to put pressure on school administrators and public officials to make changes that improve learning and learning environments. This theory of change underpins both social accountability and open data initiatives, which are designed to use information to enhance accountability and thereby influence education delivery.
This report seeks to extract insight into the nuanced relationship between information and accountability, drawing upon a vast literature on bottom-up efforts to improve service delivery, increase citizen engagement, and promote transparency, as well as case studies in Australia, Moldova, Pakistan, and the Philippines. In an effort to clarify processes and mechanisms behind information-based reforms in the education sector, this report also categorizes and evaluates recent impact evaluations according to the intensity of interventions and their target change agents—parents, teachers, school principals, and local officials. The idea here is not just to help clarify what works but why reforms work (or do not)….(More)”
Documenting Hate
Shan Wang at NiemanLab: “A family’s garage vandalized with an image of a swastika and a hateful message targeted at Arabs. Jewish community centers receiving bomb threats. These are just a slice of the incidents of hate across the country after the election of Donald Trump — but getting reliable data on the prevalence of hate and bias crimes to answer questions about whether these sorts of crimes are truly on the rise is nearly impossible.
ProPublica, which led an effort of more than a thousand reporters and students across the U.S. to cover voting problems on Election Day as part of its Electionland project, is now leaning on the collaborative and data-driven Electionland model to track and cover hate crimes.
Documenting Hate, launched last week, is a hate and bias crime-tracking project headed up by ProPublica and supported by a coalition of news and digital media organizations, universities, and civil rights groups like the Southern Poverty Law Center (which has been tracking hate groups across the country). Like Electionland, the project is seeking local partners, and will share its data with and guide local reporters interested in writing relevant stories.
“Hate crimes are inadequately tracked,” Scott Klein, assistant managing editor at ProPublica, said. “Local police departments do not report up hate crimes in any consistent way, so the federal data is woefully inadequate, and there’s no good national data on hate crimes. The data is at best locked up by local police departments, and the best we can know is a local undercount.”
Documenting Hate offers a form for anyone to report a hate or bias crime (emphasizing that “we are not law enforcement and will not report this information to the police,” nor will it “share your name and contact information with anybody outside our coalition without your permission”). ProPublica is working with Meedan (whose verification platform Check it also used for Electionland) and crowdsourced crisis-mapping group Ushahidi, as well as several journalism schools, to verify reports coming in through social channels. Ken Schwencke, who helped build the infrastructure for Electionland, is now focused on things like building backend search databases for Documenting Hate, which can be shared with local reporters. The hope is that many stories, interactives, and a comprehensive national database will emerge and paint a fuller picture of the scope of hate crimes in the U.S.
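The workflow described above — public intake through a form, review by the coalition, then a searchable database for local reporters — can be sketched as a minimal data model. The field names and verification statuses below are illustrative assumptions, not ProPublica’s actual schema:

```python
# Hedged sketch of a crowdsourced incident-report record and a
# verification filter, as described in the excerpt. All field names
# and statuses are hypothetical, not Documenting Hate's real schema.
from dataclasses import dataclass
from datetime import date


@dataclass
class IncidentReport:
    reported_on: date
    location: str
    description: str
    source: str = "web_form"   # e.g. "web_form" or "social_media"
    verified: bool = False     # flipped to True after coalition review


def verified_reports(reports):
    """Return only reports that have passed verification."""
    return [r for r in reports if r.verified]


reports = [
    IncidentReport(date(2017, 1, 20), "Chicago, IL",
                   "Vandalism with hateful message", verified=True),
    IncidentReport(date(2017, 1, 22), "Austin, TX",
                   "Threatening flyer", source="social_media"),
]
print(len(verified_reports(reports)))
```

A backend like the one Schwencke is building would add search indexing and access control on top of records of roughly this shape; the point here is only that unverified and verified reports must be kept distinct before data is shared with partners.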
ProPublica is actively seeking local partners, who will have access to the data as well as advice on how to report on sensitive information (no partners to announce just yet, though there’s been plenty of inbound interest, according to Klein). Some of the organizations working with ProPublica were already seeking reader stories of their own….(More)”.
Mass Observation: The amazing 80-year experiment to record our daily lives
William Cook at BBC Arts: “Eighty years ago, on 30th January 1937, the New Statesman published a letter which launched the largest (and strangest) writers’ group in British literary history.
An anthropologist called Tom Harrisson, a journalist called Charles Madge and a filmmaker called Humphrey Jennings wrote to the magazine asking for volunteers to take part in a new project called Mass Observation. Over a thousand readers responded, offering their services. Remarkably, this ‘scientific study of human social behaviour’ is still going strong today.
Mass Observation was the product of a growing interest in the social sciences, and a growing belief that the mass media wasn’t accurately reflecting the lives of so-called ordinary people. Instead of entrusting news gathering to jobbing journalists, who were under pressure to provide the stories their editors and proprietors wanted, Mass Observation recruited a secret army of amateur reporters, to track the habits and opinions of ‘the man in the street.’
Ironically, the three founders of this egalitarian movement were all extremely well-to-do. They’d all been to public schools and Oxbridge, but this was the ‘Age of Anxiety’, when capitalism was in chaos and dangerous demagogues were on the rise (plus ça change…).
For these idealistic public schoolboys, socialism was the answer, and Mass Observation was the future. By finding out what ‘ordinary’ folk were really doing, and really thinking, they would forge a new society, more attuned to the needs of the common man.
Mass Observation selected 500 citizen journalists, and gave them regular ‘directives’ to report back on virtually every aspect of their daily lives. They were guaranteed anonymity, which gave them enormous freedom. People opened up about themselves (and their peers) to an unprecedented degree.
Even though they were all unpaid, correspondents devoted a great deal of time to this endeavour – writing at great length, in great detail, over many years. As well as its academic value, Mass Observation proved that autobiography is not the sole preserve of the professional writer. For all of us, the urge to record and reflect upon our lives is a basic human need.
The Second World War was the perfect forum for this vast collective enterprise. Mass Observation became a national diary of life on the home front. For historians, the value of such uncensored revelations is enormous. These intimate accounts of air raids and rationing are far more revealing and evocative than the jolly state-sanctioned reportage of the war years.
After the war, Mass Observation became more commercial, supplying data for market research, and during the 1960s this extraordinary experiment gradually wound down. It was rescued from extinction by the historian Asa Briggs….
The founders of Mass Observation were horrified by what they called “the revival of racial superstition.” Hitler, Franco and Mussolini were in the forefront of their minds. “We are all in danger of extinction from such outbursts of atavism,” they wrote, in 1937. “We look to science to help us, only to find that science is too busy forging new weapons of mass destruction.”
For its founders, Mass Observation was a new science which would build a better future. For its countless correspondents, however, it became something more than that – not merely a social science, but a communal work of art….(More)”.
The science of society: From credible social science to better social policies
Nancy Cartwright and Julian Reiss at LSE Blog: “Society invests a great deal of money in social science research. Surely the expectation is that some of it will be useful not only for understanding ourselves and the societies we live in but also for changing them? This is certainly the hope of the very active evidence-based policy and practice movement, which is heavily endorsed in the UK both by the last Labour Government and by the current Coalition Government. But we still do not know how to use the results of social science in order to improve society. This has to change, and soon.
Last year the UK launched an extensive – and expensive – new What Works Network that, as the Government press release describes, consists of “two existing centres of excellence – the National Institute for Health and Clinical Excellence (NICE) and the Educational Endowment Foundation – plus four new independent institutions responsible for gathering, assessing and sharing the most robust evidence to inform policy and service delivery in tackling crime, promoting active and independent ageing, effective early intervention, and fostering local economic growth”.
This is an exciting and promising initiative. But it faces a serious challenge: we remain unable to build real social policies based on the results of social science or to predict reliably what the outcomes of these policies will actually be. This contrasts with our understanding of how to establish the results in the first place. There we have a handle on the problem. We have a reasonable understanding of what kinds of methods are good for establishing what kinds of results and with what (at least rough) degrees of certainty.
There are methods – well thought through – that social scientists learn in the course of their training for constructing a questionnaire, running a randomised controlled trial, conducting an ethnographic study, looking for patterns in large data sets. There is nothing comparably explicit and well thought through about how to use social science knowledge to help predict what will happen when we implement a proposed policy in real, complex situations. Nor is there anything to help us estimate and balance the effectiveness, the evidence, the chances of success, the costs, the benefits, the winners and losers, and the social, moral, political and cultural acceptability of the policy.
To see why this is so difficult think of an analogy: not building social policies but building material technologies. We do not just read off instructions for building a laser – which may ultimately be used to operate on your eyes – from knowledge of basic science. Rather, we piece together a detailed model using heterogeneous knowledge from a mix of physics theories, from various branches of engineering, from experience of how specific materials behave, from the results of trial-and-error, etc. By analogy, building a successful social policy equally requires a mix of heterogeneous kinds of knowledge from radically different sources. Sometimes we are successful at doing this and some experts are very good at it in their own specific areas of expertise. But in both cases – both for material technology and for social technology – there is no well thought through, defensible guidance on how to do it: what are better and worse ways to proceed, what tools and information might be needed, and how to go about getting these. This is true whether we look for general advice that might be helpful across subject areas or advice geared to specific areas or specific kinds of problems. Though we indulge in social technology – indeed we can hardly avoid it – and are convinced that better social science will make for better policies, we do not know how to turn that conviction into a reality.
This presents a real challenge to the hopes for evidence-based policy….(More)”
Be the Change: Saving the World with Citizen Science
Book: “It’s so easy to be overwhelmed by everything that is wrong in the world. In 2010, there were 660,000 deaths from malaria. Dire predictions about climate change suggest that sea levels could rise enough to submerge both Los Angeles and London by 2100. Bees are dying, not by the thousands but by the millions.
But what can you do? You’re just one person, right? The good news is that you *can* do something.
It’s called citizen science, and it’s a way for ordinary people like you and me to do real, honest-to-goodness, help-answer-the-big-questions science.
This book introduces you to a world in which it is possible to go on a wildlife survey in a national park, install software on your computer to search for a cure for cancer, have your smartphone log the sound pollution in your city, transcribe ancient Greek scrolls, or sift through the dirt from a site where a mastodon died 11,000 years ago—even if you never finished high school….(More)”
DataCollaboratives.org – A New Resource on Creating Public Value by Exchanging Data
Recent years have seen exponential growth in the amount of data being generated and stored around the world. There is increasing recognition that this data can play a key role in solving some of the most difficult public problems we face.
However, much of the potentially useful data is currently privately held and not available for public insights. Data in the form of web clicks, social “likes,” geolocation and online purchases are typically tightly controlled, usually by entities in the private sector. Companies today generate an ever-growing stream of information from our proliferating sensors and devices. Increasingly, they—and various other actors—are asking if there is a way to make this data available for the public good. This has prompted an ongoing search for new models of corporate responsibility around data in the digital era, toward the creation of “data collaboratives”.
Today, the GovLab is excited to launch a new resource for Data Collaboratives (datacollaboratives.org). Data Collaboratives are an emerging form of public-private partnership in which participants from different sectors — including private companies, research institutions, and government agencies — exchange data to help solve public problems.
The resource results from different partnerships with UNICEF (focused on creating data collaboratives to improve children’s lives) and Omidyar Network (studying new ways to match (open) data demand and supply to increase impact).
Natalia Adler, a data, research and policy planning specialist and the UNICEF Data Collaboratives Project Lead notes, “At UNICEF, we’re dealing with the world’s most complex problems affecting children. Data Collaboratives offer an exciting opportunity to tap on previously inaccessible datasets and mobilize a wide range of data expertise to advance child rights around the world. It’s all about connecting the dots.”
To better understand the potential of these Collaboratives, the GovLab collected information on dozens of examples from across the world. These many and diverse initiatives clearly suggest the potential of Data Collaboratives to improve people’s lives when done responsibly. As Stefaan Verhulst, co-founder of the GovLab, puts it: “In the coming months and years, Data Collaboratives will be essential vehicles for harnessing the vast stores of privately held data toward the public good.”
In particular, our research to date suggests that Data Collaboratives offer a number of potential benefits, including enhanced:
- Situational Awareness and Response: For example, Orbital Insights and the World Bank are using satellite imagery to measure and track poverty. This technology can, in some instances, “be more accurate than U.S. census data.”
- Public Service Design and Delivery: Global mapping company Esri and Waze’s Connected Citizens program are using crowdsourced traffic information to help governments design better transportation.
- Knowledge Creation and Transfer: The National Institutes of Health (NIH), the U.S. Food and Drug Administration (FDA), 10 biopharmaceutical companies and a number of non-profit organizations are sharing data to create new, more effective diagnostics and therapies for medical patients.
- Prediction and Forecasting: Intel and the Earth Research Institute at the University of California Santa Barbara (UCSB) are using satellite imagery to predict drought conditions and develop targeted interventions for farmers and governments.
- Impact Assessment and Evaluation: Nielsen and the World Food Program (WFP) have been using data collected via mobile phone surveys to better monitor food insecurity in order to advise the WFP’s resource allocations….(More)
International Open Data Roadmap
IODC16: We have entered the next phase in the evolution of the open data movement. Just making data publicly available can no longer be the beginning and end of every conversation about open data. The focus of the movement is now shifting to building open data communities, and an increasingly sophisticated network of communities has begun to make data truly useful in addressing a myriad of problems facing citizens and their governments around the world:
- More than 40 national and local governments have already committed to implement the principles of the International Open Data Charter;
- Open data is central to many commitments made this year by world leaders, including the Sustainable Development Goals (SDGs), the Paris Climate Agreement, and the G20 Anti Corruption Data Principles; and
- Open data is also an increasingly local issue, as hundreds of cities and sub-national governments implement open data policies to drive transparency, economic growth, and service delivery in close collaboration with citizens.
To further accelerate collaboration and increase the impact of open data activities globally, the Government of Spain, the International Development Research Centre, the World Bank, and the Open Data for Development Network recently hosted the fourth International Open Data Conference (IODC) on October 6-7, 2016 in Madrid, Spain.
Under the theme of Global Goals, Local Impact, the fourth IODC reconvened an ever expanding open data community to showcase best practices, confront shared challenges, and deepen global and regional collaboration in an effort to maximize the impact of open data. Supported by a full online archive of the 80+ sessions and 20+ special events held in Madrid during the first week of October 2016, this report reflects on the discussions and debates that took place, as well as the information shared on a wide range of vibrant global initiatives, in order to map out the road ahead, strengthen cohesion among existing efforts, and explore new ways to use open data to drive social and economic inclusion around the world….(More)”
Open Data Inventory 2016
“Open Data Watch is pleased to announce the release of the 2016 Open Data Inventory (ODIN). The new ODIN results provide a comprehensive review of the coverage and openness of official statistics in 173 countries around the world, including most OECD countries. Featuring a methodology updated to reflect the latest international open data standards, ODIN 2016 results are fully available online at odin.opendatawatch.com, including interactive functions to compare year-to-year results from 122 countries.
ODIN assesses the coverage and openness of data provided on the websites maintained by national statistical offices (NSOs). The overall ODIN score is an indicator of how complete and open an NSO’s data offerings are. In addition to ratings of coverage and openness in twenty statistical categories, ODIN assessments provide the online location of key indicators in each data category, permitting quick access to hundreds of indicators.
ODIN 2016 Top Scores Reveal Gaps Between Openness and Coverage
In the 2016 round, the top scores went to high-income and OECD countries. Sweden was ranked first overall with a score of 81. Sweden was also the most open site, with an openness score of 91. Among non-OECD countries, the highest-ranked was Lithuania with an overall score of 77. Among non-high-income countries, Mexico again earned the highest ranking with a score of 67, followed by the lower-middle-income economies of Mongolia (61) and Moldova (59). Among low-income countries, Rwanda received the highest score of 55. ODIN overall scores are scaled from 0 to 100 and provide equal weighting for social, economic, and environmental statistics….
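The scoring scheme described above — coverage and openness assessed per statistical category, with the three groups of statistics weighted equally in a 0-100 overall score — can be sketched roughly as follows. The category values and the simple averaging are illustrative assumptions for exposition, not ODIN’s published methodology:

```python
# Illustrative sketch of an ODIN-style overall score: equal weighting
# across the social, economic, and environmental statistics groups.
# The numbers and the averaging scheme are assumptions, not the
# actual ODIN methodology.

def category_score(coverage: float, openness: float) -> float:
    """Combine a category's coverage and openness sub-scores (0-100)."""
    return (coverage + openness) / 2


def overall_score(groups: dict) -> float:
    """Average category scores within each group, then weight the
    three groups equally."""
    group_means = [sum(cats) / len(cats) for cats in groups.values()]
    return sum(group_means) / len(group_means)


groups = {
    "social":        [category_score(80, 90), category_score(60, 70)],
    "economic":      [category_score(70, 80)],
    "environmental": [category_score(50, 60)],
}
print(round(overall_score(groups), 1))
```

The equal group weighting means a country cannot reach a high overall score by publishing, say, only economic statistics openly — weak environmental coverage drags the total down just as much.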
The new ODIN website allows users to compare and download scores for 2015 and 2016….(More)”
Crowdsourcing, Citizen Science, and Data-sharing
Sapien Labs: “The future of human neuroscience lies in crowdsourcing, citizen science and data sharing but it is not without its minefields.
A recent Scientific American article by Daniel Goodwin, “Why Neuroscience Needs Hackers,” makes the case that neuroscience, like many fields today, is drowning in data, begging for application of advances in computer science like machine learning. Neuroscientists are able to gather reams of neural data, but often without big data mechanisms and frameworks to synthesize them.
The SA article describes the work of Sebastian Seung, a Princeton neuroscientist, who recently mapped the neural connections of the human retina from an “overwhelming mass” of electron microscopy data using state of the art A.I. and massive crowd-sourcing. Seung incorporated the A.I. into a game called “Eyewire” where 1,000s of volunteers scored points while improving the neural map. Although the article’s title emphasizes advanced A.I., Dr. Seung’s experiment points even more to crowdsourcing and open science, avenues for improving research that have suddenly become easy and powerful with today’s internet. Eyewire perhaps epitomizes successful crowdsourcing — using an application that gathers, represents, and analyzes data uniformly according to researchers’ needs.
Crowdsourcing is seductive in its potential but risky for those who aren’t sure how to control it to get what they want. For researchers who don’t want to become hackers themselves, trying to turn the diversity of data produced by a crowd into conclusive results might seem too much of a headache to make it worthwhile. This is probably why the SA article title says we need hackers. The crowd is there, but using it depends on innovative software engineering. A lot of researchers could really use software designed to flexibly support a diversity of crowdsourcing, AI to enable things like crowd validation, and big data tools.
The Potential
The SA article also points to Open BCI (brain-computer interface), mentioned here in other posts, as an example of how traditional divisions between institutional and amateur (or “citizen”) science are now crumbling; Open BCI is a community of professional and citizen scientists doing principled research with cheap, portable EEG headsets producing professional research quality data. In communities of “neuro-hackers,” like NeurotechX, professional researchers, entrepreneurs, and citizen scientists are coming together to develop all kinds of applications, such as “telepathic” machine control, prostheses, and art. Other companies, like NeuroSky, sell EEG headsets and biosensors for bio- and neuro-feedback training and health monitoring at consumer-affordable prices. (Read more in Citizen Science and EEG)
Tan Le, whose company Emotiv Lifesciences also produces portable EEG headsets, says in an article in National Geographic that neuroscience needs “as much data as possible on as many brains as possible” to advance diagnosis of conditions such as epilepsy and Alzheimer’s. Human neuroscience studies have typically consisted of 20 to 50 participants, an incredibly small sampling of a 7-billion-strong humanity. For a single lab to collect larger datasets is difficult, and given the diversity of populations across the planet, real understanding may require data not from thousands of brains but from millions. With cheap mobile EEG headsets, open-source software, and online collaboration, the potential for anyone to participate in such data collection is immense; the potential for crowdsourcing unprecedented. There are, however, significant hurdles to overcome….(More)”