Obfuscating with transparency


“These approaches…limit the impact of valuable information in developing policies…”

Under the new policy, studies that do not fully meet transparency criteria would be excluded from use in EPA policy development. This proposal follows unsuccessful attempts to enact the Honest and Open New EPA Science Treatment (HONEST) Act and its predecessor, the Secret Science Reform Act. These approaches undervalue many scientific publications and limit the impact of valuable information in developing policies in the areas that the EPA regulates…. In developing effective policies, earnest evaluations of facts and fair-minded assessments of the associated uncertainties are foundational. Policy discussions require an assessment of the likelihood that a particular observation is true, along with examinations of the short- and long-term consequences of potential actions or inactions, including a wide range of different sorts of costs. Those trained in making these judgments, with access to as much relevant information as possible, are crucial to this process. Of course, policy development requires considerations other than those related to science. Such discussions should follow a clear assessment of all of the available evidence. The scientific enterprise should stand up against efforts that distort initiatives aimed at improving scientific practice in order to pursue other agendas…(More)”.

What if a nuke goes off in Washington, D.C.? Simulations of artificial societies help planners cope with the unthinkable


Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.

That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.
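To make the bottom-up idea concrete, here is a minimal agent-based sketch in Python. It is not the NPS1 model; the two routes, the capacities, the power-failure step and every number in it are invented purely for illustration. Each of 1,000 agents independently picks whichever route it believes is faster, and congestion emerges from those individual choices rather than from any top-down equation.

```python
# Toy agent-based traffic sketch (illustrative only; not the NPS1 model).
# Each agent chooses the route it believes is faster; a "power failure"
# halves route A's capacity partway through and a jam emerges bottom-up.
import random

N_AGENTS = 1000
STEPS = 30

def run(power_failure_step=None):
    # Each agent remembers its last experienced travel time on each route.
    beliefs = [{"A": 10.0, "B": 10.0} for _ in range(N_AGENTS)]
    history = []
    for t in range(STEPS):
        # The power failure halves route A's effective capacity.
        failed = power_failure_step is not None and t >= power_failure_step
        capacity_a = 250 if failed else 500
        # Each agent independently picks the route it currently believes is faster.
        choices = [min(b, key=b.get) for b in beliefs]
        load = {"A": choices.count("A"), "B": choices.count("B")}
        # Travel time grows with load relative to capacity (simple congestion rule).
        times = {"A": 10 + 20 * load["A"] / capacity_a,
                 "B": 12 + 20 * load["B"] / 500}
        for agent, route in enumerate(choices):
            # Agents update their belief only for the route they actually took.
            beliefs[agent][route] = 0.7 * beliefs[agent][route] + 0.3 * times[route]
            # Occasional random perturbation of the untaken route keeps exploration alive.
            if random.random() < 0.05:
                other = "B" if route == "A" else "A"
                beliefs[agent][other] *= random.uniform(0.9, 1.1)
        history.append((t, load["A"], load["B"], round(times["A"], 1)))
    return history

if __name__ == "__main__":
    for t, on_a, on_b, time_a in run(power_failure_step=10):
        print(f"step {t:2d}: route A={on_a:4d} agents, route B={on_b:4d}, travel time on A={time_a}")
```

Even with such simple-minded agents, the pattern of who ends up stuck where is not written into any single equation; it falls out of the interactions, which is the property Barrett describes.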

The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”

The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.

Modernizing Crime Statistics: New Systems for Measuring Crime


(Second) Report by the National Academies of Sciences, Engineering, and Medicine: “To derive statistics about crime – to estimate its levels and trends, assess its costs to and impacts on society, and inform law enforcement approaches to prevent it – a conceptual framework for defining and thinking about crime is virtually a prerequisite. Developing and maintaining such a framework is no easy task, because the mechanics of crime are ever evolving, tied to shifts and developments in technology, society, and legislation.

Interest in understanding crime surged in the 1920s, which proved to be a pivotal decade for the collection of nationwide crime statistics. Now established as a permanent agency, the Census Bureau commissioned the drafting of a manual for preparing crime statistics—intended for use by the police, corrections departments, and courts alike. The new manual sought to solve a perennial problem by suggesting a standard taxonomy of crime. Shortly after the Census Bureau issued its manual, the International Association of Chiefs of Police in convention adopted a resolution to create a Committee on Uniform Crime Records — to begin the process of describing what a national system of data on crimes known to the police might look like.

Report 1 performed a comprehensive reassessment of what is meant by crime in U.S. crime statistics and recommended a new classification of crime to organize measurement efforts. This second report examines methodological and implementation issues and presents a conceptual blueprint for modernizing crime statistics….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to have greater personal control over their data and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI and encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

From Texts to Tweets to Satellites: The Power of Big Data to Fill Gender Data Gaps


At the UN Foundation Blog: “Twitter posts, credit card purchases, phone calls, and satellites are all part of our day-to-day digital landscape.

Detailed data, known broadly as “big data” because of the massive amounts of passively collected and high-frequency information that such interactions generate, are produced every time we use one of these technologies. These digital traces have great potential and have already developed a track record for application in global development and humanitarian response.

Data2X has focused particularly on what big data can tell us about the lives of women and girls in resource-poor settings. Our research, released today in a new report, Big Data and the Well-Being of Women and Girls, demonstrates how four big data sources can be harnessed to fill gender data gaps and inform policy aimed at mitigating global gender inequality. Big data can complement traditional surveys and other data sources, offering a glimpse into dimensions of girls’ and women’s lives that have otherwise been overlooked and providing a level of precision and timeliness that policymakers need to make actionable decisions.

Here are three findings from our report that underscore the power and potential offered by big data to fill gender data gaps:

  1. Social media data can improve understanding of the mental health of girls and women.

Mental health conditions, from anxiety to depression, are thought to be significant contributors to the global burden of disease, particularly for young women, though precise data on mental health is sparse in most countries. However, research by the Georgia Institute of Technology, commissioned by Data2X, finds that social media provides an accurate barometer of mental health status….

  2. Cell phone and credit card records can illustrate women’s economic and social patterns – and track impacts of shocks in the economy.

Our spending priorities and social habits often indicate economic status, and these activities can also expose economic disparities between women and men.

By compiling cell phone and credit card records, our research partners at MIT traced patterns of women’s expenditures, spending priorities, and physical mobility. The research found that women have less mobility diversity than men, live further away from city centers, and report less total expenditure per capita…..

  3. Satellite imagery can map rivers and roads, but it can also measure gender inequality.

Satellite imagery has the power to capture high-resolution, real-time data on everything from natural landscape features, like vegetation and river flows, to human infrastructure, like roads and schools. Research by our partners at the Flowminder Foundation finds that it is also able to measure gender inequality….(More)”.

Participatory Budgeting: Step to Building Active Citizenship or a Distraction from Democratic Backsliding?


David Sasaki: “Is there any there there? That’s what we wanted to uncover beneath the hype and skepticism surrounding participatory budgeting, an innovation in democracy that began in Brazil in 1989 and has quickly spread to nearly every corner of the world like a viral hashtag…. We ended up selecting two groups of consultants for two phases of work. The first phase was led by three academic researchers — Brian Wampler, Mike Touchton and Stephanie McNulty — to synthesize what we know broadly about PB’s impact and where there are gaps in the evidence. mySociety led the second phase, which originally intended to identify the opportunities and challenges faced by civil society organizations and public officials that implement participatory budgeting. However, a number of unforeseen circumstances, including contested elections in Kenya and a major earthquake in Mexico, shifted mySociety’s focus to take a global, field-wide perspective.

In the end, we were left with two reports that were similar in scope and differed in perspective. Together they make for compelling reading. And while they come from different perspectives, they settle on similar recommendations. I’ll focus on just three: 1) the need for better research, 2) the lack of global coordination, and 3) the emerging opportunity to link natural resource governance with participatory budgeting….

As we consider some preliminary opportunities to advance participatory budgeting, we are clear-eyed about the risks and challenges. In the face of democratic backsliding and the concern that liberal democracy may not survive the 21st century, are these efforts to deepen local democracy merely a distraction from a larger threat, or is this a way to build active citizenship? Also, implementing PB is expensive — both in terms of money and time; is it worth the investment? Is PB just the latest checkbox for governments that want a reputation for supporting citizen participation without investing in the values and process it entails? Just like the proliferation of fake “consultation meetings,” fake PB could merely exacerbate our disappointment with democracy. What should we make of the rise of participatory budgeting in quasi-authoritarian contexts like China and Russia? Is PB a tool for undemocratic central governments to keep local governments in check while giving citizens a simulacrum of democratic participation? Crucially, without intentional efforts to be inclusive like we’ve seen in Boston, PB could merely direct public resources to those neighborhoods with the most outspoken and powerful residents.

On the other hand, we don’t want to dismiss the significant opportunities that come with PB’s rapid global expansion. For example, what happens when social movements lose their momentum between election cycles? Participatory budgeting could create a civic space for social movements to pursue concrete outcomes while engaging with neighbors and public officials. (In China, it has even helped address the urban-rural divide on perspectives toward development policy.) Meanwhile, social media have exacerbated our human tendency to complain, but participatory budgeting requires us to shift our perspective from complaints to engaging with others on solutions. It could even serve as a gateway to deeper forms of democratic participation and increased trust between governments, civil society organizations, and citizens. Perhaps participatory budgeting is the first step we need to rebuild our civic infrastructure and make space for more diverse voices to steer our complex public institutions.

Until we have more research and evidence, however, these possibilities remain speculative….(More)”.

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly being woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
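As a rough illustration of the form that comparison takes (the numbers below are entirely hypothetical, not figures from the paper), the core quantity is an impact-per-dollar ratio computed for each intervention:

```python
# Hypothetical cost-effectiveness comparison between a nudge and a traditional
# incentive. All figures are invented for illustration only.

def impact_per_dollar(extra_outcome: float, program_cost: float) -> float:
    """Units of the desired outcome gained per dollar of (inflation-adjusted) program cost."""
    return extra_outcome / program_cost

# Example: additional retirement savings induced per dollar spent on each program.
nudge = impact_per_dollar(extra_outcome=100_000, program_cost=5_000)              # e.g., a redesigned enrollment letter
tax_incentive = impact_per_dollar(extra_outcome=1_000_000, program_cost=250_000)  # e.g., a matching subsidy

print(f"nudge:         {nudge:.1f} dollars of extra savings per program dollar")
print(f"tax incentive: {tax_incentive:.1f} dollars of extra savings per program dollar")
```

Whether the ratio favors the nudge in any given domain depends on the effect sizes and costs actually measured in the underlying studies; the sketch shows only the shape of the calculation.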

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

Making sense of evidence: A guide to using evidence in policy


Handbook by the Government of New Zealand: “…helps you take a structured approach to using evidence at every stage of the policy and programme development cycle. Whether you work for central or local government, or the community and voluntary sector, you’ll find advice to help you:

  • understand different types and sources of evidence
  • know what you can learn from evidence
  • appraise evidence and rate its quality
  • decide how to select and use evidence to the best effect
  • take into account different cultural values and knowledge systems
  • be transparent about how you’ve considered evidence in your policy development work…(More)”

(See also the Summary. This handbook is a companion to Making sense of evaluation: A handbook for everyone.)

Managing Public Trust


Book edited by Barbara Kożuch, Sławomir J. Magala and Joanna Paliszkiewicz: “This book brings together the theory and practice of managing public trust. It examines the current state of public trust, including a comprehensive global overview of both the research and practical applications of managing public trust, presenting research from seven countries (Brazil, Finland, Poland, Hungary, Portugal, Taiwan, Turkey) across three continents. The book is divided into five parts, covering the meaning of trust, its types and dimensions, and the role of trust in management; the organizational challenges in relation to public trust; the impact of social media on the development of public trust; the dynamics of public trust in business; and public trust in different cultural contexts….(More)”.

The Power Of The Wikimedia Movement Beyond Wikimedia


Michael Bernick at Forbes: “In January 2017, we, the constituents of Wikimedia, started an ambitious discussion about our collective future. We reflected on our past sixteen years together and imagined the impact we could have in the world in the next decades. Our aim was to identify a common strategic direction that would unite and inspire people across our movement on our way to 2030, and help us make decisions.”…

The final documents included a strategic direction and a research report, “Wikimedia 2030: Wikimedia’s Role in Shaping the Future of the Information Commons,” an expansive look at Wikimedia, knowledge, technologies, and communications in the next decade. It includes thoughtful sections on Demographics (global population trends and Wikimedia’s opportunities for growth), Emerging Platforms (how Wikimedia platforms will be accessed), Misinformation (how content creators and technologists can work toward a product that is trustworthy), Literacy (changing forms of learning that can benefit from the Wikimedia movement) and the core Wikimedia issues of Open Knowledge and knowledge as a service.

Among its goals, the document calls for greater outreach to areas outside of Europe and North America (which now account for 63% of Wikimedia’s total traffic) and for widening the knowledge and experiential bases of contributors. It urges greater access through mobile devices and other emerging hardware, and expanded partnerships with libraries, museums, galleries and archives.

The document captures not only the idealism of the enterprise but also why Wikimedia can be described as a movement, not merely an enterprise. It calls into question conventional wisdom about how our political and business structures should operate.

Consider the Wikimedia editing process, which seeks to reach common ground on contentious issues. Lisa Gruwell, the Chief Advancement Officer of the Wikimedia Foundation, notes that in the development of an article, editors with diverging claims and views will often weigh in. Rather than escalating divisions, the editing process has been found to reduce them. Gruwell explains,

Through the collaborative editing process, the editors have critical discussions about what reliable sources say about a topic. They have to engage and defend their own perspectives about how an article should be represented, and ultimately find some form of common ground with other editors.

A team of researchers at Harvard Business School led by Shane Greenstein, Yuan Gu and Feng Zhu set out to study this phenomenon. Their findings, published in 2017 in a Harvard Business School working paper, show that editors with different political viewpoints tended to engage in dialogue with each other and, over time, to reduce rather than increase partisanship….(More)”.