To Regain Policy Competence: The Software of American Public Problem-Solving


Philip Zelikow at the Texas National Security Review: “Policymaking is a discipline, a craft, and a profession. Policymakers apply specialized knowledge — about other countries, politics, diplomacy, conflict, economics, public health, and more — to the practical solution of public problems. Effective policymaking is difficult. The “hardware” of policymaking — the tools and structures of government that frame the possibilities for useful work — is obviously important. Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.

Like policymaking, engineering is a discipline, a craft, and a profession. Engineers learn how to apply specialized knowledge — about chemistry, physics, biology, hydraulics, electricity, and more — to the solution of practical problems. Effective engineering is similarly difficult. People work hard to learn how to practice it with professional skill. But, unlike the methods taught for engineering, the software of policy work is rarely recognized or studied. It is not adequately taught. There is no canon or norms of professional practice. American policymaking is less about deliberate engineering and more about improvised guesswork and bureaucratized habits.

My experience is as a historian who studies the details of policy episodes and the related staff work, but also as a former official who has analyzed a variety of domestic and foreign policy issues at all three levels of American government, including federal work from different bureaucratic perspectives in five presidential administrations from Ronald Reagan to Barack Obama. From this historical and contemporary vantage point, I am struck (and a bit depressed) that the quality of U.S. policy engineering is actually much, much worse in recent decades than it was throughout much of the 20th century. This is not a partisan observation — the decline spans both Republican and Democratic administrations.

I am not alone in my observations. Francis Fukuyama recently concluded that, “[T]he overall quality of the American government has been deteriorating steadily for more than a generation,” notably since the 1970s. In the United States, “the apparently irreversible increase in the scope of government has masked a large decay in its quality.”1 This worried assessment is echoed by other nonpartisan and longtime scholars who have studied the workings of American government.2 The 2003 National Commission on Public Service observed,

The notion of public service, once a noble calling proudly pursued by the most talented Americans of every generation, draws an indifferent response from today’s young people and repels many of the country’s leading private citizens. … The system has evolved not by plan or considered analysis but by accretion over time, politically inspired tinkering, and neglect. … The need to improve performance is urgent and compelling.3

And they wrote that as the American occupation of Iraq was just beginning.

In this article, I offer hypotheses to help explain why American policymaking has declined, and why it was so much more effective in the mid-20th century than it is today. I offer a brief sketch of how American education about policy work evolved over the past hundred years, and I argue that the key software qualities that made for effective policy engineering neither came out of the academy nor migrated back into it.

I then outline a template for doing and teaching policy engineering. I break the engineering methods down into three interacting sets of analytical judgments: about assessment, design, and implementation. In teaching, I lean away from new, cumbersome standalone degree programs and toward more flexible forms of education that can pair more easily with many subject-matter specializations. I emphasize the value of practicing methods in detailed and more lifelike case studies. I stress the significance of an organizational culture that prizes written staff work of the quality that used to be routine but has now degraded into bureaucratic or opinionated dross….(More)”.

This Is Not an Atlas.


Book by kollektiv orangotango: “This Is Not an Atlas gathers more than 40 counter-cartographies from all over the world. This collection shows how maps are created and transformed as a part of political struggle, for critical research or in art and education: from indigenous territories in the Amazon to the anti-eviction movement in San Francisco; from defending commons in Mexico to mapping refugee camps with balloons in Lebanon; from slums in Nairobi to squats in Berlin; from supporting communities in the Philippines to reporting sexual harassment in Cairo. This Is Not an Atlas seeks to inspire, to document the underrepresented, and to be a useful companion when becoming a counter-cartographer yourself….(More)”.

Stop Surveillance Humanitarianism


Mark Latonero at The New York Times: “A standoff between the United Nations World Food Program and Houthi rebels in control of the capital region is threatening the lives of hundreds of thousands of civilians in Yemen.

Alarmed by reports that food is being diverted to support the rebels, the aid program is demanding that Houthi officials allow it to deploy biometric technologies like iris scans and digital fingerprints to monitor suspected fraud during food distribution.

The Houthis have reportedly blocked food delivery, painting the biometric effort as an intelligence operation, and have demanded access to the personal data on beneficiaries of the aid. The impasse led the aid organization to decide last month to suspend food aid to parts of the starving population — a step once thought of as a last resort — unless the Houthis allow biometrics.

With program officials saying their staff is prevented from doing its essential jobs, turning to a technological solution is tempting. But biometrics deployed in crises can lead to a form of surveillance humanitarianism that can exacerbate risks to privacy and security.

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need….(More)”.

New App Uses Crowdsourcing to Find You an EpiPen in an Emergency


Article by Shaunacy Ferro: “Many people at risk for severe allergic reactions to things like peanuts and bee stings carry EpiPens. These tools inject the medication epinephrine into one’s bloodstream to control immune responses immediately. But exposure can turn into life-threatening situations in a flash: Without EpiPens, people could suffer anaphylactic shock in less than 15 minutes as they wait for an ambulance. Being without an EpiPen or other auto-injector can have deadly consequences.

EPIMADA, a new app created by researchers at Israel’s Bar-Ilan University, is designed to save the lives of people who go into anaphylactic shock when they don’t have EpiPens handy. The app uses the same type of algorithms that ride-hailing services use to match drivers and riders by location—in this case, EPIMADA matches people in distress with nearby strangers carrying EpiPens. David Schwartz, director of the university’s Social Intelligence Lab and one of the app’s co-creators, told The Jerusalem Post that the app currently has hundreds of users….
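The location-matching step described above can be sketched as a simple nearest-neighbour search over carrier coordinates. This is only an illustrative reconstruction — the function names, the 2 km radius, and the sample coordinates are assumptions, not details of EPIMADA itself:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_carrier(patient, carriers, max_km=2.0):
    """Return the id of the closest EpiPen carrier within max_km, or None.

    patient: (lat, lon); carriers: list of (id, lat, lon) tuples.
    """
    best, best_d = None, max_km
    for cid, lat, lon in carriers:
        d = haversine_km(patient[0], patient[1], lat, lon)
        if d <= best_d:
            best, best_d = cid, d
    return best

# Hypothetical carriers near a patient in distress.
carriers = [("ana", 32.0702, 34.7940), ("ben", 32.0750, 34.7800)]
print(nearest_carrier((32.0700, 34.7935), carriers))  # prints: ana
```

A production matcher would add spatial indexing, carrier availability, and push notifications, but the core step the article compares to ride-hailing reduces to distance ranking like this.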

EPIMADA serves as a way to crowdsource medication from fellow patients who might be close by and able to help. While it may seem unlikely that people would rush to give up their own expensive life-saving tool for a stranger, EPIMADA co-creator Michal Gaziel Yablowitz, a doctoral student in the Social Intelligence Lab, explained in a press release that “preliminary research results show that allergy patients are highly motivated to give their personal EpiPen to patient-peers in immediate need.”…(More)”.

How an AI Utopia Would Work


Sami Mahroum at Project Syndicate: “…It is more than 500 years since Sir Thomas More found inspiration for the “Kingdom of Utopia” while strolling the streets of Antwerp. So, when I traveled there from Dubai in May to speak about artificial intelligence (AI), I couldn’t help but draw parallels to Raphael Hythloday, the character in Utopia who regales sixteenth-century Englanders with tales of a better world.

As home to the world’s first Minister of AI, as well as museums, academies, and foundations dedicated to studying the future, Dubai is on its own Hythloday-esque voyage. Whereas Europe, in general, has grown increasingly anxious about technological threats to employment, the United Arab Emirates has enthusiastically embraced the labor-saving potential of AI and automation.

There are practical reasons for this. The ratio of indigenous-to-foreign labor in the Gulf states is highly imbalanced, ranging from a high of 67% in Saudi Arabia to a low of 11% in the UAE. And because the region’s desert environment cannot support further population growth, the prospect of replacing people with machines has become increasingly attractive.

But there is also a deeper cultural difference between the two regions. Unlike Western Europe, the birthplace of both the Industrial Revolution and the “Protestant work ethic,” Arab societies generally do not “live to work,” but rather “work to live,” placing a greater value on leisure time. Such attitudes are not particularly compatible with economic systems that require squeezing ever more productivity out of labor, but they are well suited for an age of AI and automation….

Fortunately, AI and data-driven innovation could offer a way forward. In what could be perceived as a kind of AI utopia, the paradox of a bigger state with a smaller budget could be reconciled, because the government would have the tools to expand public goods and services at a very small cost.

The biggest hurdle would be cultural: As early as 1948, the German philosopher Joseph Pieper warned against the “proletarianization” of people and called for leisure to be the basis for culture. Westerners would have to abandon their obsession with the work ethic, as well as their deep-seated resentment toward “free riders.” They would have to start differentiating between work that is necessary for a dignified existence, and work that is geared toward amassing wealth and achieving status. The former could potentially be all but eliminated.

With the right mindset, all societies could start to forge a new AI-driven social contract, wherein the state would capture a larger share of the return on assets, and distribute the surplus generated by AI and automation to residents. Publicly-owned machines would produce a wide range of goods and services, from generic drugs, food, clothes, and housing, to basic research, security, and transportation….(More)”.

Open Verification


Article by Eyal Weizman: “More than a decade ago, I would have found the idea of a forensic institute to be rather abhorrent. Coming from the field of left activism and critical spatial practice, I felt instinctively oriented against the authority of established truths. Forensics relies on technical expertise in normative and legal frameworks, and smacks of institutional authority. It is, after all, one of the fundamental arts of the state, the privilege of its agencies: the police, the secret services, or the military. Today, counter-intuitively perhaps, I find myself running Forensic Architecture, a group of architects, filmmakers, coders, and journalists which operates as a forensic agency and makes evidence public in different forums such as the media, courts, truth commissions, and cultural venues.

This reorientation of my thought practice was a response to changes in the texture of our present and to the nature of contemporary conflict. An evolving information and media environment enables authoritarian states to manipulate and distort facts about their crimes, but it also offers new techniques with which civil society groups can invert the forensic gaze and monitor them. This is what we call counter-forensics.

We do not yet have a satisfactory name for the new reactionary forces—a combination of digital racism, ultra-nationalism, self-victimhood, and conspiracism—that have taken hold across the world and become manifest in countries such as Russia, Poland, Hungary, Britain, Italy, Brazil, the US, and Israel, where I most closely experienced them. These forces have made the obscuring, blurring, manipulation, and distortion of facts their trademark. Whatever form of reality-denial “post truth” is, it is not simply about lying. Lying in politics is sometimes necessary. Deception, after all, has always been part of the toolbox of statecraft, and there might not be more of it now than in previous times. The defining characteristic of our era might thus not be an extraordinary dissemination of untruths, but rather ongoing attacks against the institutional authorities that buttress facts: government experts, universities, science laboratories, mainstream media, and the judiciary.

Because questioning the authority of state institutions is also what counter-forensics is about—we seek to expose police and military cover-ups, government lies, and instances in which the legal system has been aligned against state victims—we must distinguish it from the tactics of those political forces mentioned above.

Dark Epistemology

While “post truth” is a seemingly new phenomenon, for those working to expose state crimes at the frontiers of contemporary conflicts, it has long been the constant condition of our work. As a set of operations, this form of denial compounds the traditional roles of propaganda and censorship. It is propaganda because it is concerned with statements released by states to affect the thought and conduct of publics. It is not the traditional form of propaganda though, framed in the context of a confrontation between blocs and ideologies. It does not aim to persuade or tell you anything, nor does it seek to promote the assumed merits of one system over the other—equality vs. freedom or east vs. west—but rather to blur perception so that nobody knows what is real anymore. The aim is that when people no longer know what to think, how to establish facts, or when to trust them, those in power can fill this void with whatever they want.

“Post truth” also functions as a new form of censorship because it blocks one’s ability to evaluate and debate facts. In the face of governments’ increasing difficulties in cutting data out of circulation and in suppressing political discourse, it adds rather than subtracts, augmenting the level of noise in a deliberate maneuver to divert attention….(More)”.

Public Entrepreneurship: How to train 21st century leaders


Beth Noveck at apolitical: “So how do we develop these better ways of working in government? How do we create a more effective public service?

Governments, universities and philanthropies are beginning to invest in training those inside and outside of government in new kinds of public entrepreneurial skills. They are also innovating in how they teach.

Canada has created a new Digital Academy to teach digital literacy to all 250,000 public servants. Among other approaches, it has created a 15-minute podcast series called “Busrides” to enable public servants to learn on their commute.

The better programs, like Canada’s, combine online and face-to-face methods. This is what Israel does in its Digital Leaders program, a nine-month course that alternates between online and in-person meetings and connects learners to a global, online network of digital innovators.

Many countries have started to teach human-centred design to public servants, instructing officials in how to design services with, not simply for, the public, as WeGov does in Brazil. In Chile, UAI University has just begun teaching quantitative skills, offering three-day intensives in data science for public servants.

The GovLab also offers a nifty, free online program called Solving Public Problems with Data.

Public sector learning

To ensure that learning translates into practice, Australia’s BizLab Academy turns students into teachers by using alumni of its human-centred design training as mentors for new students.

The cities of Orlando and Sao Paulo go beyond training public servants. Orlando includes members of the public in its training program for city officials: because officials are learning to redesign services with citizens, the public participates in the training.

The Sao Paulo Abierta program uses citizens as trainers for the city’s public servants. Over 23,000 of them have studied with these lay trainers, who possess the innovation skills that are in short supply in government. In fact, public officials are prohibited from teaching in the program altogether.


Recognising that it is not enough to train only a lone innovator or data scientist in a unit, governments are scaling their programs across the public sector.

Argentina’s LabGob has already trained 30,000 people since 2016 in its Design Academy for Public Policy with plans to expand. For every class taken, a public servant earns points, which are a prerequisite for promotions and pay raises in the Argentinian civil service.

Rather than going broad, some training programs are going deep by teaching sector-specific innovation skills. The NHS Digital Academy, run in collaboration with Imperial College London, is a series of six online sessions and four live ones designed to produce leaders in health innovation.

Innovating in a bureaucracy

In my own work at the GovLab at New York University, we are helping public entrepreneurs take their public interest projects from idea to implementation using coaching, rather than training.

Training classes may be wonderful but leave people feeling abandoned when they return to their desks to face the challenge of innovating within a bureaucracy.

With hands-on mentoring from global leaders and peer-to-peer support, the GovLab Academy coaching programs try to ensure that public servants are getting the help they need to advance innovative projects.

Knowing what innovation skills to teach and how to teach them, however, should depend on asking people what they want. That’s why the Australia and New Zealand School of Government is surveying public servants there on exactly these questions….(More)”.

Virtuous and vicious circles in the data life-cycle


Paper by Elizabeth Yakel, Ixchel M. Faniel, and Zachary J. Maiorana: “In June 2014, ‘Data sharing reveals complexity in the westward spread of domestic animals across Neolithic Turkey’ was published in PLoS One (Arbuckle et al. 2014). In this article, twenty-three authors, all zooarchaeologists, representing seventeen different archaeological sites in Turkey, investigated the domestication of animals across Neolithic southwest Asia, a pivotal era of change in the region’s economy. The PLoS One article originated in a unique data sharing, curation, and reuse project in which a majority of the authors agreed to share their data and perform analyses across the aggregated datasets. The extent of data sharing and the breadth of data reuse and collaboration were previously unprecedented in archaeology. In the present article, we conduct a case study of the collaboration leading to the development of the PLoS One article. In particular, we focus on the data sharing, data curation, and data reuse practices exercised during the project in order to investigate how different phases in the data life-cycle affected each other.

Studies of data practices have generally engaged issues from the singular perspective of data producers, sharers, curators, or reusers. Furthermore, past studies have tended to focus on one aspect of the life-cycle (production, sharing, curation, reuse, etc.). A notable exception is Carlson and Anderson’s (2007) comparative case study of four research projects which discusses the life-cycle of data from production through sharing with an eye towards reuse. However, that study primarily addresses the process of data sharing. While we see from their research that data producers’ and curators’ decisions and actions regarding data are tightly coupled and have future consequences, those consequences are not fully explicated since the authors do not discuss reuse in depth.

Taking a perspective that captures the trajectory of data, our case study discusses actions and their consequences throughout the data life-cycle. Our research theme explores how different stakeholders and their work practices positively and/or negatively affected other phases of the life-cycle. More specifically, we focus on data production practices and data selection decisions made during data sharing as these have frequent and diverse consequences for other life-cycle phases in our case study. We address the following research questions:

  1. How do different aspects of data production positively and negatively impact other phases in the life-cycle?
  2. How do data selection decisions during sharing positively and negatively impact other phases in the life-cycle?
  3. How can the work of data curators intervene to reinforce positive actions or mitigate negative actions?…(More)”

Bringing Truth to the Internet


Article by Karen Kornbluh and Ellen P. Goodman: “The first volume of Special Counsel Robert Mueller’s report notes that “sweeping” and “systemic” social media disinformation was a key element of Russian interference in the 2016 election. No sooner were Mueller’s findings public than Twitter suspended a host of bots who had been promoting a “Russiagate hoax.”

Since at least 2016, conspiracy theories like Pizzagate and QAnon have flourished online and bled into mainstream debate. Earlier this year, a British member of Parliament called social media companies “accessories to radicalization” for their role in hosting and amplifying radical hate groups after the New Zealand mosque shooter cited such groups and sought to fuel more of them. In Myanmar, anti-Rohingya forces used Facebook to spread rumors that spurred ethnic cleansing, according to a UN special rapporteur. These platforms are vulnerable to those who aim to prey on intolerance, peer pressure, and social disaffection. Our democracies are being compromised. They work only if the information ecosystem has integrity—if it privileges truth and channels difference into nonviolent discourse. But the ecosystem is increasingly polluted.

Around the world, a growing sense of urgency about the need to address online radicalization is leading countries to embrace ever more draconian solutions: After the Easter bombings in Sri Lanka, the government shut down access to Facebook, WhatsApp, and other social media platforms. And a number of countries are considering adopting laws requiring social media companies to remove unlawful hate speech or face hefty penalties. According to Freedom House, “In the past year, at least 17 countries approved or proposed laws that would restrict online media in the name of fighting ‘fake news’ and online manipulation.”

The flaw with these censorious remedies is this: They focus on the content that the user sees—hate speech, violent videos, conspiracy theories—and not on the structural characteristics of social media design that create vulnerabilities. Content moderation requirements that cannot scale are not only doomed to be ineffective exercises in whack-a-mole, but they also create free expression concerns, by turning either governments or platforms into arbiters of acceptable speech. In some countries, such as Saudi Arabia, content moderation has become justification for shutting down dissident speech.

When countries pressure platforms to root out vaguely defined harmful content and disregard the design vulnerabilities that promote that content’s amplification, they are treating a symptom and ignoring the disease. The question isn’t “How do we moderate?” Instead, it is “How do we promote design change that optimizes for citizen control, transparency, and privacy online?”—exactly the values that the early Internet promised to embody….(More)”.

Africa must reap the benefits of its own data


Tshilidzi Marwala at Business Insider: “Twenty-two years ago when I was a doctoral student in artificial intelligence (AI) at the University of Cambridge, I had to create all the AI algorithms I needed to understand the complex phenomena related to this field.

For starters, AI is computer software that performs intelligent tasks that normally require human beings, while an algorithm is a set of rules that instructs a computer to execute specific tasks. In that era, the ability to create AI algorithms was more important than the ability to acquire and use data.

Google has created an open-source library called TensorFlow, which contains implementations of many widely used AI algorithms. Google wants people to develop applications (apps) using this software, with the payoff being that Google will collect data on anyone using the apps developed with TensorFlow.

Today, an AI algorithm is not a competitive advantage but data is. The World Economic Forum calls data the new “oxygen”, while Chinese AI specialist Kai-Fu Lee calls it the new “oil”.

Africa’s population is increasing faster than in any region in the world. The continent has a population of 1.3-billion people and a total nominal GDP of $2.3-trillion. This increase in the population is in effect an increase in data, and if data is the new oil, it is akin to an increase in oil reserves.

Even oil-rich countries such as Saudi Arabia do not experience an increase in their oil reserves. How do we as Africans take advantage of this huge amount of data?

There are two categories of data in Africa: heritage and personal. Heritage data resides in society, whereas personal data resides in individuals. Heritage data includes data gathered from our languages, emotions and accents. Personal data includes health, facial and fingerprint data.

Facebook, Amazon, Apple, Netflix and Google are data companies. They sell data to advertisers, banks and political parties, among others. For example, the controversial company Cambridge Analytica harvested Facebook data in an effort to influence the 2016 US presidential election, potentially contributing to Donald Trump’s victory.

Google collects language data to build an application called Google Translate, which translates from one language to another. The app claims to cover African languages such as Zulu, Yoruba and Swahili, but it is less effective in handling African languages than European and Asian ones.

Now, how do we capitalise on our language heritage to create economic value? We need to build our own language database and create our own versions of Google Translate.
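As a toy illustration of the idea — and only that — a community-built language database could start as a parallel phrase table that a lookup translator consults. The two Zulu greetings below are common examples; the table format and function name are illustrative assumptions, nothing like a real statistical or neural translator:

```python
# A tiny, illustrative Zulu-to-English phrase table. A real language
# database would hold a large aligned corpus gathered from speakers.
corpus = {
    "sawubona": "hello",
    "ngiyabonga": "thank you",
}

def translate(sentence, table):
    """Word-by-word lookup translation; unknown words pass through unchanged."""
    return " ".join(table.get(word, word) for word in sentence.lower().split())

print(translate("Sawubona ngiyabonga", corpus))  # prints: hello thank you
```

A real system would need a large aligned corpus and a trained model, but even this sketch shows where the scarce asset lies: in the database of language data, not in the lookup code.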

An important area is the creation of an African emotion database. Different cultures exhibit emotions differently, and reading emotions correctly is very important in areas such as car and aeroplane safety. If we can build a system that can read pilots’ emotions, we could establish whether a pilot is in a good state of mind to operate an aircraft, which would increase safety.

To capitalise on the African emotion database, we should create a data bank that captures the emotions of African people in various parts of the continent, and then use this database to create AI apps that read people’s emotions. Mercedes-Benz has already implemented “Attention Assist”, which alerts drivers to fatigue.

Another important area is the creation of an African health database. AI algorithms are able to diagnose diseases better than human doctors. However, these algorithms depend on the availability of data. To capitalise on this, we need to collect such data and use it to build algorithms that will be able to augment medical care….(More)”.