Order Without Intellectual Property Law: Open Science in Influenza


Amy Kapczynski at Cornell Law Review: “Today, intellectual property (IP) scholars accept that IP as an approach to information production has serious limits. But what lies beyond IP? A new literature on “intellectual production without IP” (or “IP without IP”) has emerged to explore this question, but its examples and explanations have yet to convince skeptics.

This Article reorients this new literature via a study of a hard case: a global influenza virus-sharing network that has for decades produced critically important information goods, at significant expense, and in a loose-knit group — all without recourse to IP. I analyze the Network as an example of “open science,” a mode of information production that differs strikingly from conventional IP, and yet that successfully produces important scientific goods in response to social need.

The theory and example developed here refute the most powerful criticisms of the emerging “IP without IP” literature, and provide a stronger foundation for this important new field. Even where capital costs are high, creation without IP can be reasonably effective in social terms, if it can link sources of funding to reputational and evaluative feedback loops like those that characterize open science. It can also be sustained over time, even by loose-knit groups and where the stakes are high, because organizations and other forms of law can help to stabilize cooperation. I also show that contract law is well suited to modes of information production that rely upon a “supply side” rather than “demand side” model. In its most important instances, “order without IP” is not order without governance, nor order without law. Recognizing this can help us better ground this new field, and better study and support forms of knowledge production that deserve our attention, and that sometimes sustain our very lives….(More)”.

Once Upon an Algorithm: How Stories Explain Computing


Book by Martin Erwig: “Picture a computer scientist, staring at a screen and clicking away frantically on a keyboard, hacking into a system, or perhaps developing an app. Now delete that picture. In Once Upon an Algorithm, Martin Erwig explains computation as something that takes place beyond electronic computers, and computer science as the study of systematic problem solving. Erwig points out that many daily activities involve problem solving. Getting up in the morning, for example: You get up, take a shower, get dressed, eat breakfast. This simple daily routine solves a recurring problem through a series of well-defined steps. In computer science, such a routine is called an algorithm.

Erwig illustrates a series of concepts in computing with examples from daily life and familiar stories. Hansel and Gretel, for example, execute an algorithm to get home from the forest. The movie Groundhog Day illustrates the problem of unsolvability; Sherlock Holmes manipulates data structures when solving a crime; the magic in Harry Potter’s world is understood through types and abstraction; and Indiana Jones demonstrates the complexity of searching. Along the way, Erwig also discusses representations and different ways to organize data; “intractable” problems; language, syntax, and ambiguity; control structures, loops, and the halting problem; different forms of recursion; and rules for finding errors in algorithms.

This engaging book explains computation accessibly and shows its relevance to daily life. Something to think about next time we execute the algorithm of getting up in the morning…(More)”.

Randomized Controlled Trials: How Can We Know ‘What Works’?


Nick Cowen et al. at Critical Review: “We attempt to map the limits of evidence-based policy through an interactive theoretical critique and empirical case-study. We outline the emergence of an experimental turn in EBP among British policymakers and the limited, broadly inductive, epistemic approach that underlies it. We see whether and how field professionals identify and react to these limitations through a case study of teaching professionals subject to a push to integrate research evidence into their practice. Results suggest that many of the challenges of establishing evidential warrant that EBP is supposed to streamline re-appear at the level of choice of locally effective policies and implementation…(More)”.

External validity and policy adaptation. From impact evaluation to policy design


Paper by Martin J. Williams: “With the growing number of rigorous impact evaluations worldwide, the question of how best to apply this evidence to policymaking processes has arguably become the main challenge for evidence-based policymaking. How can policymakers predict whether a policy will have the same impact in their context as it did elsewhere, and how should this influence the design and implementation of policy? This paper introduces a simple and flexible framework to address these questions of external validity and policy adaptation. I show that failures of external validity arise from an interaction between a policy’s theory of change and a dimension of the context in which it is being implemented, and develop a method of “mechanism mapping” that maps a policy’s theory of change against salient contextual assumptions to identify external validity problems and suggest appropriate policy adaptations. In deciding whether and how to adapt a policy in a new context, I show there is a fundamental informational trade-off between the strength and relevance of evidence on the policy from other contexts and the policymaker’s knowledge of the local context. This trade-off can guide policymakers’ judgments about whether policies should be copied exactly from elsewhere, adapted, or invented anew….(More)”
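The “mechanism mapping” idea described above lends itself to a toy formalization: pair each step in a policy’s theory of change with the contextual assumption it relies on, then flag steps whose assumption fails in the new context. The steps, assumptions, and function below are invented for illustration and are not from the paper.

```python
# Toy sketch of "mechanism mapping": each theory-of-change step is paired
# with the contextual assumption it depends on; a step whose assumption
# fails in the new context is a candidate for adaptation.
# All steps and assumptions here are invented examples.

theory_of_change = [
    ("information reaches households", "radio coverage is widespread"),
    ("households act on information", "recommended action is affordable"),
    ("action improves outcomes", "local supply chains function"),
]

new_context = {
    "radio coverage is widespread": True,
    "recommended action is affordable": False,
    "local supply chains function": True,
}

def external_validity_risks(mechanism, context):
    """Return the steps whose contextual assumption does not hold."""
    return [step for step, assumption in mechanism if not context[assumption]]

print(external_validity_risks(theory_of_change, new_context))
```

Here the second step fails, suggesting the policy should be adapted (or evidence gathered locally) rather than copied exactly.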

Spotting the Patterns: 2017 Trends in Design Thinking


Andy Hagerman at Stanford Social Innovation Review: “Design thinking: It started as an academic theory in the 1960s, a notion of starting to look at broader types of challenges with the intention and creativity that designers use to tackle their work. It gained widespread traction as a product design process, has been integrated into culture change initiatives of some of the world’s most important organizations and governments, and has been taught in schools from kindergarten to grad school. It’s been celebrated, criticized, merged with other methodologies, and modified for nearly every conceivable niche.

Regardless of what side of those perspectives you fall on, it’s undeniable that design thinking is continuing to grow and evolve. Looking across the social innovation landscape today, we see a few patterns that, taken together, suggest that social innovators continue to see great promise in design thinking. They are working to find ways to make it yield real performance gains for their organizations and clients.

From design thinking to design doing

Creative leaders have moved beyond increasing people’s awareness of design thinking to actively seeking concrete opportunities for using it. One of the principal drivers of this shift has been the need to demonstrate value and return on investment from design-thinking initiatives—something people have talked about for years. (Ever heard the question, “Is design thinking just the next fad?”) Social sector organizations, in particular, stand to benefit from the shift from design thinking to design doing. Timelines for getting things built in the social sector are often slow, due to legitimate constraints of responsibly doing impact work, as well as to legacy practices and politics. As long as organizations use design thinking responsibly and acknowledge the broader systems in which new ideas live, some of the emerging models can help them move projects along more quickly and gain greater stakeholder participation….

Building cultures around design thinking

As design thinking has proliferated, many organizational leaders have moved from replicating the design thinking programs of academic institutions like the Stanford d.school or foundational agencies like IDEO to adapting the methodology to their own goals, external environments, and organizational cultures.

One organization that has particularly inspired us is Beespace, a New York City-based social-impact foundation. Beespace has designed a two-year program that helps new organizations not only get off the ground, but also create the conditions for breakthrough innovation. To create this program, which combines deep thinking, impact assessment, and rapid prototyping, Beespace’s leadership asked itself what tools it would need, and came up with a mix that included not just design thinking, but also disciplines of behavioral science and systems thinking, and tools stemming from emotional intelligence and theory of change….

Empowering the few to shift the many

We have seen a lot of interest this year in “train the trainer” programs, particularly from organizations realizing the value of developing their internal capabilities to reduce reliance on outside consultants. Such development often entails focusing on the few people in the organization who are highly capable of instigating major change, as opposed to spreading awareness among the many. It takes time and resources, but the payoff is well worth it from both cultural and operational perspectives….(More)”.

Who governs or how they govern: Testing the impact of democracy, ideology and globalization on the well being of the poor


Eunyoung Ha and Nicholas L. Cain in The Social Science Journal: “This paper examines the effects of regime type, government ideology and economic globalization on poverty in low- and middle-income countries around the world. We use panel regression to estimate the effect of these explanatory variables on two different response variables: national poverty gap (104 countries from 1981 to 2005) and child mortality rate (132 countries from 1976 to 2005). We find consistent and significant results for the interactive effect of democracy and government ideology: strong leftist power under a democratic regime is associated with a reduction in both the poverty gap and the child mortality rate. Democracy, on its own, is associated with a lower child mortality rate, but has no effect on the poverty gap. Leftist power under a non-democratic regime is associated with an increase in both poverty measures. Trade reduces both measures of poverty. Foreign direct investment has a weak and positive effect on the poverty gap. From examining factors that influence the welfare of poor people in less developed countries, we conclude that who governs is as important as how they govern….

  • Our paper uses a unique dataset to study the impact of regime type, ideology and globalization on two measures of poverty.
  • We find that higher levels of democracy are associated with lower child mortality rates, but do not impact poverty gap.
  • The interaction of regime type and ideology has a strong effect: leftist power in a democracy reduces poverty and child mortality.
  • We find that trade significantly reduces both the poverty gap and the child mortality rate.
  • Overall, we find strong evidence that who governs is as important as how they govern…(More)”
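The interactive effect reported above can be illustrated with a toy linear model containing a democracy × leftist-power interaction term. The coefficients below are invented for illustration only (they are not the paper’s estimates); they are chosen merely to reproduce the qualitative pattern the authors describe.

```python
# Illustration of an interaction effect of the kind the paper reports:
# leftist power reduces predicted poverty only under democracy.
# Coefficients are invented for illustration, not the paper's estimates.

def predicted_poverty(democracy, left_power,
                      b0=10.0, b_dem=0.0, b_left=2.0, b_inter=-5.0):
    """Linear model with a democracy x left-power interaction term."""
    return (b0 + b_dem * democracy + b_left * left_power
            + b_inter * democracy * left_power)

# Leftist power under democracy lowers predicted poverty (10 + 2 - 5)...
print(predicted_poverty(democracy=1, left_power=1))  # 7.0
# ...but raises it under a non-democratic regime (10 + 2).
print(predicted_poverty(democracy=0, left_power=1))  # 12.0
```

The point of the interaction term is that neither variable’s effect can be read off its own coefficient alone: the effect of leftist power depends on regime type, which is the paper’s central finding.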

“Nudge units” – where they came from and what they can do


Zeina Afif at the World Bank: “You could say that the first one began in 2009, when the US government recruited Cass Sunstein to head the Office of Information and Regulatory Affairs (OIRA) to streamline regulations. In 2010, the UK established the first Behavioural Insights Unit (BIT) on a trial basis, under the Cabinet Office. Other countries followed suit, including the US, Australia, Canada, Netherlands, and Germany. Shortly after, countries such as India, Indonesia, Peru, Singapore, and many others started exploring the application of behavioral insights to their policies and programs. International institutions such as the World Bank, UN agencies, OECD, and EU have also established behavioral insights units to support their programs. And just this month, the Sustainable Energy Authority of Ireland launched its own Behavioral Economics Unit.

The Future
As eMBeD, the behavioral science unit at the World Bank, continues to support governments across the globe in the implementation of their units, here are some common questions we often get asked.

What are the models for a Behavioral Insights Unit in Government?
As of today, over a dozen countries have integrated behavioral insights with their operations. While there is not one model to prescribe, the setup varies from centralized or decentralized to networked….

In some countries, the units were first established at the ministerial level. One example is MineduLab in Peru, which was set up with eMBeD’s help. The unit works as an innovation lab, testing rigorous and leading research in education and behavioral science to address issues such as teacher absenteeism and motivation, parents’ engagement, and student performance….

What should be the structure of the team?
Most units start with two to four full-time staff. Profiles include policy advisors, social psychologists, experimental economists, and behavioral scientists. Experience in the public sector is essential to navigate the government and build support. It is also important to have staff familiar with designing and running experiments. Other important skills include psychology, social psychology, anthropology, design thinking, and marketing. While these skills are not always readily available in the public sector, it is important to note that all behavioral insights units partnered with academics and experts in the field.

The U.S. team, originally called the Social and Behavioral Sciences Team, is staffed mostly by seconded academic faculty, researchers, and other departmental staff. MineduLab in Peru partnered with leading experts, including the Abdul Latif Jameel Poverty Action Lab (J-PAL), Fortalecimiento de la Gestión de la Educación (FORGE), Innovations for Poverty Action (IPA), and the World Bank….(More)”

Federal Crowdsourcing and Citizen Science Catalog


About: “The catalog contains information about federal citizen science and crowdsourcing projects. In citizen science, the public participates voluntarily in the scientific process, addressing real-world problems in ways that may include formulating research questions, conducting scientific experiments, collecting and analyzing data, interpreting results, making new discoveries, developing technologies and applications, and solving complex problems. In crowdsourcing, organizations submit an open call for voluntary assistance from a group of individuals for online, distributed problem solving.

Projects in the catalog must meet the following criteria:

  • The project addresses societal needs or accelerates science, technology, and innovation consistent with a Federal agency’s mission.
  • Project outcomes include active management of data and data quality.
  • Participants serve as contributors, collaborators or co-creators in the project.
  • The project solicits engagement from individuals outside of a discipline’s or program’s traditional participants in the scientific enterprise.
  • Beyond practical limitations, the project does not seek to limit the number of participants or partners involved.
  • The project is opt-in; participants have full control over the extent that they participate.
  • The US Government enables or enhances the project via funding or providing an in-kind contribution. The US Government’s in-kind contribution to the project may be active or passive, formal or informal….(More)”.

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car were being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
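A toy sketch of this “learn from crowd choices” idea is below. It simply tallies which groups respondents spared in recorded dilemmas and uses the tallies to predict an unseen pairing; the scenarios, counts, and function are invented for illustration and are not the Moral Machine dataset or the researchers’ actual model.

```python
from collections import defaultdict

# Hypothetical recorded choices: each entry is (group_spared, group_sacrificed).
# This is invented toy data, not the Moral Machine dataset.
recorded_choices = [
    ("children", "adults"),
    ("children", "adults"),
    ("pregnant_woman", "elderly_man"),
    ("pregnant_woman", "elderly_man"),
    ("adults", "children"),
]

# "Train": score each group by how often the crowd spared vs. sacrificed it.
score = defaultdict(int)
for spared, sacrificed in recorded_choices:
    score[spared] += 1
    score[sacrificed] -= 1

def predict(group_a, group_b):
    """Predict which group the crowd would spare in a previously unseen pairing."""
    return group_a if score[group_a] >= score[group_b] else group_b

print(predict("children", "elderly_man"))
```

The real study used far richer scenario features and a formal model of preference aggregation, but the underlying move is the same: generalize from many individual choices to a prediction about an untested dilemma.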

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the doctrine of double effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, Google parent company Alphabet’s AI DeepMind now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.

Where’s the evidence? Obstacles to impact-gathering and how researchers might be better supported in future


Clare Wilkinson at the LSE Impact Blog: “…In a recent case study I explore how researchers from a broad range of research areas think about evidencing impact, what obstacles to impact-gathering might stand in their way, and how they might be further supported in future.

Unsurprisingly the research found myriad potential barriers to gathering research impact, such as uncertainty over how impact is defined, captured, judged, and weighted, or the challenges for researchers in tracing impact back to a specific time-period or individual piece of research. Many of these constraints have been recognised in previous research in this area – or were anticipated when impact was first discussed – but talking to researchers in 2015 about their impact experiences of the REF 2014 data-gathering period revealed a number of lingering concerns.

A further hazard identified by the case study is the inequality in knowledge around research impact, and the way this knowledge often exists in silos. Those researchers most likely to have obvious impact-generating activities were developing quite detailed and extensive experience of impact-capturing; while other researchers (including those at early-career stages) were less clear on the impact agenda’s relevance to them or even whether their research had featured in an impact case study. Encouragingly some researchers did seem to increase in confidence once having experience of authoring an impact case study, but sharing skills and confidence with the “next generation” of researchers likely to have impact remains a possible issue for those supporting impact evidence-gathering.

So, how can researchers, across the board, be supported to effectively evidence their impact? Most popular amongst the options given to the 70 or so researchers that participated in this case study were: 1) approaches that offered them more time or funding to gather evidence; 2) opportunities to see best-practice examples; 3) opportunities to learn more about what “impact” means; and 4) the sharing of information on the types of evidence that could be collected….(More)”.