Lessons Learned for New Office of Innovation


Blog by Catherine Tkachyk: “I have worked in a government innovation office for the last eight years, in four different roles and two different communities. In that time, I’ve had numerous conversations about what works and doesn’t work for innovation in local government. Here’s what I’ve learned: starting an innovation office in government is hard. That is not a complaint; I love the work I do, but it comes with its own challenges. Consider many of the services government provides: police, fire, health and human services, information technology, human resources, finance. Very few people question whether government should provide those services. They may question how they are provided, who is providing them, or how much they cost, but they don’t question the service itself. That’s not true for innovation offices. One of the first questions I get from people when they hear what I do is, “Why does government need an Office of Innovation?” My first answer is, “Do you like how government works? If not, then maybe there should be a group of people focused on fixing it.”

Over my career, I have come across a few lessons on how to start up an innovation office to give you the best chance for success. Some of these lessons come from listening to others, but many (probably too many) come from my own mistakes….(More)”.

Three Eras of Digital Governance


Paper by Jonathan L. Zittrain: “To understand where digital governance is going, we must take stock of where it’s been, because the timbre of mainstream thinking around digital governance today is dramatically different than it was when study of “Internet governance” coalesced in the late 1990s.

Perhaps the most obvious change has been a shift from emphasizing networked technologies’ positive effects and promise – couched in concepts like connectivity, innovation, and, by this author, “generativity” – to pointing out their harms and threats. It’s not that threats weren’t previously recognized, but rather that they were more often seen in external clamps on technological development and on the corresponding new freedoms for users – whether in government intervention to block VoIP services like Skype to protect incumbent telco revenues, or in the shaping of technology to effect undue surveillance for government or corporate purposes.

The shift in emphasis from positive to negative corresponds to a change in the overarching frameworks for talking about regulating information technology. We have moved from a discourse of rights – particularly those of end users, and the ways in which abstention by intermediaries is important to facilitate citizen flourishing – to one of public health, which naturally asks for a weighing of the systemic benefits or harms of a technology, and for thinking about what systemic interventions might curtail its apparent excesses.

Each framework captures important values around the use of technology that can both empower and limit individual freedom of action, including to engage in harmful conduct. Our goal today should be to identify where competing values frameworks themselves preclude understanding of others’ positions about regulation, and to see if we can map a path forward that, if not reconciling the frameworks, allows for satisfying, if ever-evolving, resolutions to immediate questions of public and private governance…(More)”.

Five Ethical Principles for Humanitarian Innovation


Peter Batali, Ajoma Christopher & Katie Drew in the Stanford Social Innovation Review: “…Based on this experience, UNHCR and CTEN developed a pragmatic, refugee-led, “good enough” approach to experimentation in humanitarian contexts. We believe a wide range of organizations, including grassroots community organizations and big-tech multinationals, can apply this approach to ensure that the people they aim to help hold the reins of the experimentation process.

1. Collaborate Authentically and Build Intentional Partnerships

Resource and information asymmetry are inherent in the humanitarian system. Refugees have long been constructed as “victims” in humanitarian response, waiting for “salvation” from heroic humanitarians. Researcher Matthew Zagor describes this construct as follows: “The genuine refugee … is the passive, coerced, patient refugee, the one waiting in the queue—the victim, anticipating our redemptive touch, defined by the very passivity which in our gaze both dehumanizes them, in that they lack all autonomy in our eyes, and romanticizes them as worthy in their potentiality.”

Such power dynamics make authentic collaboration challenging….

2. Avoid Technocratic Language

Communication can divide us or bring us together. Using exclusive or “expert” terminology (terms like “ideation,” “accelerator,” and “design thinking”) or language that reinforces power dynamics or assigns an outsider role (such as “experimenting on”) can alienate community participants. Organizations should aim to use inclusive language that everyone understands, as well as set a positive and realistic tone. Communication should focus on the need to co-develop solutions with the community, and the role that testing or trying something new can play….

3. Don’t Assume Caution Is Best

Research tells us that we feel more regret over actions that lead to negative outcomes than we do over inactions that lead to the same or worse outcomes. As a result, we tend to perceive and weigh action and inaction unequally. So while humanitarian organizations frequently consider the implications of our actions and the possible negative outcome for communities, we don’t always consider the implications of doing nothing. Is it ethical to continue an activity that we know isn’t as effective as it could be, when testing small and learning fast could reap real benefits? In some cases, taking a risk might, in fact, be the least risky path of action. We need to always ask ourselves, “Is it really ethical to do nothing?”…

4. Choose Experiment Participants Based on Values

Many humanitarian efforts identify participants based on their societal role, vulnerability, or other selection criteria. However, these methods often lead to challenges related to incentivization—the need to provide things like tea, transportation, or cash payments to keep participants engaged. Organizations should instead consider identifying participants who demonstrate the values they hope to promote—such as collaboration, transparency, inclusivity, or curiosity. These community members are well-poised to promote inclusivity, model positive behaviors, and engage participants across the diversity of your community….

5. Monitor Community Feedback and Adapt

While most humanitarian agencies know they need to listen and adapt after establishing communication channels, the process remains notoriously challenging. One reason is that community members don’t always share their feedback on experimentation formally; feedback sometimes comes from informal channels or even rumors. Yet consistent, real-time feedback is essential to experimentation. Listening is the pressure valve in humanitarian experimentation; it allows organizations to adjust or stop an experiment if the community flags a negative outcome….(More)”.

The Narrow Corridor


Book by Daron Acemoglu and James A. Robinson: “…In their new book, they build a new theory about liberty and how to achieve it, drawing on a wealth of evidence from both current affairs and disparate threads of world history.

Liberty is hardly the “natural” order of things. In most places and at most times, the strong have dominated the weak and human freedom has been quashed by force or by customs and norms. Either states have been too weak to protect individuals from these threats, or states have been too strong for people to protect themselves from despotism. Liberty emerges only when a delicate and precarious balance is struck between state and society.

There is a Western myth that political liberty is a durable construct, arrived at by a process of “enlightenment.” This static view is a fantasy, the authors argue. In reality, the corridor to liberty is narrow and stays open only via a fundamental and incessant struggle between state and society. The authors look to the American Civil Rights Movement, Europe’s early and recent history, the Zapotec civilization circa 500 BCE, and Lagos’s efforts to uproot corruption and institute government accountability to illustrate what it takes to get and stay in the corridor. But they also examine Chinese imperial history, colonialism in the Pacific, India’s caste system, Saudi Arabia’s suffocating cage of norms, and the “Paper Leviathan” of many Latin American and African nations to show how countries can drift away from it, and explain the feedback loops that make liberty harder to achieve.

Today we are in the midst of a time of wrenching destabilization. We need liberty more than ever, and yet the corridor to liberty is becoming narrower and more treacherous. The danger on the horizon is not “just” the loss of our political freedom, however grim that is in itself; it is also the disintegration of the prosperity and safety that critically depend on liberty. The opposite of the corridor of liberty is the road to ruin….(More)”.

We Need a PBS for Social Media


Mark Coatney at the New York Times: “Social media is an opportunity wrapped in a problem. YouTube spreads propaganda and is toxic to children. Twitter spreads propaganda and is toxic to racial relations. Facebook spreads propaganda and is toxic to democracy itself.

Such problems aren’t surprising when you consider that all these companies operate on the same basic model: Create a product that maximizes the attention you can command from a person, collect as much data as you can about that person, and sell it.

Proposed solutions like breaking up companies and imposing regulation have been met with resistance: The platforms, understandably, worry that their profits might be reduced from staggering to merely amazing. And this may not be the best course of action anyway.

What if the problem is something that can’t be solved by existing for-profit media platforms? Maybe the answer to fixing social media isn’t trying to change companies with business models built around products that hijack our attention, but working to create a less toxic alternative.

Nonprofit public media is part of the answer. More than 50 years ago, President Lyndon Johnson signed the Public Broadcasting Act, committing federal funds to create public television and radio that would “be responsive to the interests of people.”

It isn’t a big leap to expand “public media” to include not just television and radio but also social media. In 2019, the definition of “media” is considerably larger than it was in 1967. Commentary on Twitter, memes on Instagram and performances on TikTok are all as much a part of the media landscape today as newspapers and television news.

Public media came out of a recognition that the broadcasting spectrum is a finite resource. TV broadcasters given licenses to use the spectrum were expected to provide programming like news and educational shows in return. But that was not enough. To make sure that some of that finite resource would always be used in the public interest, Congress established public media.

Today, the limited resource isn’t the spectrum — it’s our attention….(More)”.

Digital Media and Wireless Communication in Developing Nations: Agriculture, Education, and the Economic Sector


Book by Megh R. Goyal and Emmanuel Eilu: “… explores how digital media and wireless communication, especially mobile phones and social media platforms, offer concrete opportunities for developing countries to transform different sectors of their economies. The volume focuses on the agricultural, economic, and education sectors. The chapter authors, mostly from Africa and India, provide a wealth of information on recent innovations, the opportunities they provide, challenges faced, and the direction of future research in digital media and wireless communication to leverage transformation in developing countries….(More)”.

The Church of Techno-Optimism


Margaret O’Mara at the New York Times: “…But Silicon Valley does have a politics. It is neither liberal nor conservative. Nor is it libertarian, despite the dog-eared copies of Ayn Rand’s novels that you might find strewn about the cubicles of a start-up in Palo Alto.

It is techno-optimism: the belief that technology and technologists are building the future and that the rest of the world, including government, needs to catch up. And this creed burns brightly, undimmed by the anti-tech backlash. “It’s now up to all of us together to harness this tremendous energy to benefit all humanity,” the venture capitalist Frank Chen said in a November 2018 speech about artificial intelligence. “We are going to build a road to space,” Jeff Bezos declared as he unveiled plans for a lunar lander last spring. And as Elon Musk recently asked his Tesla shareholders, “Would I be doing this if I weren’t optimistic?”

But this is about more than just Silicon Valley. Techno-optimism has deep roots in American political culture, and its belief in American ingenuity and technological progress. Reckoning with that history is crucial to the discussion about how to rein in Big Tech’s seemingly limitless power.

The language of techno-optimism first appears in the rhetoric of American politics after World War II. “Science, the Endless Frontier” was the title of the soaringly techno-optimistic 1945 report by Vannevar Bush, the chief science adviser to Franklin Roosevelt and Harry Truman, which set in motion the American government’s unprecedented postwar spending on research and development. That wave of money transformed the Santa Clara Valley and turned Stanford University into an engineering powerhouse. Dwight Eisenhower filled the White House with advisers whom he called “my scientists.” John Kennedy, announcing America’s moon shot in 1962, declared that “man, in his quest for knowledge and progress, is determined and cannot be deterred.”

In a 1963 speech, a founder of Hewlett-Packard, David Packard, looked back on his life during the Depression and marveled at the world that he lived in, giving much of the credit to technological innovation unhindered by bureaucratic interference: “Radio, television, Teletype, the vast array of publications of all types bring to a majority of the people everywhere in the world information in considerable detail, about what is going on everywhere else. Horizons are opened up, new aspirations are generated.”…(More)”

Social Systems Evidence


Social Systems Evidence is the world’s most comprehensive, continuously updated repository of syntheses of research evidence about the programs, services and products available in a broad range of government sectors and program areas (e.g., climate action, community and social services, economic development and growth, education, environmental conservation, housing and transportation). It also covers the governance, financial and delivery arrangements within which these programs, services and products are provided, and the implementation strategies that can help to ensure that these programs, services and products get to those who need them. The content contained in Social Systems Evidence covers the Sustainable Development Goals, with the exception of the health part of goal 3, which is already well covered by databases such as ACCESSSS for clinical evidence, Health Evidence for public health evidence, and Health Systems Evidence for the governance, financial and delivery arrangements, and the implementation strategies that determine whether the right programs, services and products get to those who need them.

The types of syntheses in Social Systems Evidence include evidence briefs for policy, overviews of systematic reviews, systematic reviews, systematic reviews in progress (i.e. protocols for systematic reviews), and systematic reviews being planned (i.e. registered titles for systematic reviews). Social Systems Evidence also contains a continuously updated repository of economic evaluations in these same domains.

Documents included in Social Systems Evidence are identified through weekly electronic searches of online bibliographic databases (EBSCOhost, ProQuest and Web of Science) and through manual searches of the websites of high-volume producers of research syntheses relevant to social-system program and service areas (see acknowledgements below).

For all types of documents, Social Systems Evidence provides links to user-friendly summaries, scientific abstracts, and full-text reports (if applicable and when freely available). For each systematic review, Social Systems Evidence also provides an assessment of its methodological quality, and links to the studies contained in the review.

While SSE is free to use and does not require that users have an account, creating an account will allow you to view more than 20 search results, to save documents and searches, and to subscribe to email alerts, among other advanced features. You can create an account by clicking ‘Create account’ on the top banner (for desktop and laptop computers) or in the menu on the far right of the banner (for mobile devices).

Social Systems Evidence can save social-system policymakers and stakeholders a great deal of time by helping them to rapidly identify: a synthesis of the best available research evidence on a given topic that has been prepared in a systematic and transparent way, how recently the search for studies was conducted, the quality of the synthesis, the countries in which the studies included in the synthesis were conducted, and the key findings from the synthesis. Social Systems Evidence can also help them to rapidly identify economic evaluations in these same domains…(More)”.

Rational Democracy: a critical analysis of digital democracy in the light of rational choice institutionalism


Paper by Ricardo Zapata Lopera: “Since their beginnings, digital technologies have fueled enthusiasm for the realisation of political utopias about a society capable of achieving self-organisation and decentralised governance. The vision was first brought to concrete technological developments in the mid-twentieth century with the rise of cybernetics and the attempt to automatise public processes for a more efficient State, taking its most practical form in the Cybersyn Project of 1971–73. Contemporary developments of governance technologies have learned from and leveraged the internet, the free software movement and increasing micro-processing capacity to come up with more efficient solutions for collective decision-making, preserving, in most cases, the same ethos of “algorithmic regulation”. This essay examines how rational choice institutionalism has framed the scope of digital democracy, and how recent supporting technologies like blockchain have made more evident the objective of creating new institutional arrangements to overcome market failures and increasing inequality, without questioning the utility-maximisation logic. This rational logic of governance could explain the paradoxical movements towards centralisation and power concentration experienced by some of these technologies.

Digital democracy will be understood as a heterogeneous field that explores how digital tools and technologies are used in the practice of democracy (Simon, Bass & Mulgan, 2017). Understanding it, however, must go hand in hand with the use of supporting technologies and practices that amplify the role of the people in the public decision-making process, either by decentralisation (of public goods) or aggregation (of opinions), including blockchain, data processing (open data and big data), open government, and recent developments in civic tech (Knight Foundation, 2013). It must be noted that the use of digital democracy as a category to describe the use of these technologies to support democratic processes remains contested and requires further debate.

Dahlberg (2011) makes a useful characterisation of four common positions in digital democracy, where the ‘liberal-consumer’ and the ‘deliberative’ positions dominate mainstream thinking and practice, while other alternative positions (‘counter publics’ and ‘autonomous Marxist’) exist, but mostly in experimental or specific contexts. The liberal-consumer position conceives a self-sufficient, rational-strategic individual who acts in a competitive-aggregative democracy by “aggregating, calculating, choosing, competing, expressing, fundraising, informing, petitioning, registering, transacting, transmitting and voting” (p. 865). The deliberative subject is an inter-subjectively rational individual acting in a deliberative consensual democracy “agreeing, arguing, deliberating, disagreeing, informing, meeting, opinion forming, publicising, and reflecting” (p. 865).

Practice has been more homogeneous, adopting the ‘liberal-consumer’ and ‘deliberative’ positions. Examples of the former include local and national government e-democracy initiatives; media politics sites, especially the ones providing ‘public opinion’ polling and ‘have your say’ comment systems; ‘independent’ e-democracy projects like mysociety.org; and civil society practices like Amnesty International’s digital campaigns, and online petitioning through sites like Change.org or Avaaz.org (Dahlberg, 2011, p. 858). On the other side, examples of the deliberative position include online government consultation projects (e.g. the Your Priorities app and the DemocracyOS.eu platform), writing and commentary of online citizen journalism in media sites; “online discussion forums of political interest groups; and the vast array of informal online debate on e-mail lists, web discussion boards, chat channels, blogs, social networking sites, and wikis” (p. 859). Recent developments not only include a mixture of both positions, but a more dynamic online-offline experience….

To shed light on this situation, it might be important to consider how rational choice institutionalism (RCI) explains the inherent logic of digital democracy. Rational choice institutionalism is a theoretical approach grounded in ‘bounded rationality’: it supposes rational utility-maximising actors playing in contexts constrained by institutions. According to Hall and Taylor (1996), this approach assumes rational actors to be incapable of reaching socially optimal situations due to insufficient institutional configurations. The actors play strategic interactions in a configured scenario that affects “the range and sequence of alternatives on the choice-agenda or [provides] information and enforcement mechanisms that reduce uncertainty about the corresponding behaviour of others and allows ‘gains from exchange’, thereby leading actors toward particular calculations and potentially better social outcomes” (p. 945). RCI focuses on the reduction of transaction costs and the solution of the ‘principal-agent problem’, where “principals can monitor and enforce compliance on their agents” (p. 943)….(More)”.

The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation


Report by Philip Howard and Samantha Bradshaw: “…The report explores the tools, capacities, strategies and resources employed by global ‘cyber troops’, typically government agencies and political parties, to influence public opinion in 70 countries.

Key findings include:

  • Organized social media manipulation has more than doubled since 2017, with 70 countries using computational propaganda to manipulate public opinion.
  • In 45 democracies, politicians and political parties have used computational propaganda tools by amassing fake followers or spreading manipulated media to garner voter support.
  • In 26 authoritarian states, government entities have used computational propaganda as a tool of information control to suppress public opinion and press freedom, discredit criticism and oppositional voices, and drown out political dissent.
  • Foreign influence operations, primarily over Facebook and Twitter, have been attributed to cyber troop activities in seven countries: China, India, Iran, Pakistan, Russia, Saudi Arabia and Venezuela.
  • China has now emerged as a major player in the global disinformation order, using social media platforms to target international audiences with disinformation.
  • 25 countries are working with private companies or strategic communications firms offering computational propaganda as a service.
  • Facebook remains the platform of choice for social media manipulation, with evidence of formally organised campaigns taking place in 56 countries….

The report explores the tools and techniques of computational propaganda, including the use of fake accounts – bots, humans, cyborgs and hacked accounts – to spread disinformation. The report finds:

  • 87% of countries used human accounts
  • 80% of countries used bot accounts
  • 11% of countries used cyborg accounts
  • 7% of countries used hacked or stolen accounts…(More)”.