Examining Civil Society Legitimacy


Saskia Brechenmacher and Thomas Carothers at Carnegie Endowment for International Peace: “Civil society is under stress globally as dozens of governments across multiple regions are reducing space for independent civil society organizations, restricting or prohibiting international support for civic groups, and propagating government-controlled nongovernmental organizations. Although civic activists in most places are no strangers to repression, this wave of anti–civil society actions and attitudes is the widest and deepest in decades. It is an integral part of two broader global shifts that raise concerns about the overall health of the international liberal order: the stagnation of democracy worldwide and the rekindling of nationalistic sovereignty, often with authoritarian features.

Attacks on civil society take myriad forms, from legal and regulatory measures to physical harassment, and usually include efforts to delegitimize civil society. Governments engaged in closing civil society spaces not only target specific civic groups but also spread doubt about the legitimacy of the very idea of an autonomous civic sphere that can activate and channel citizens’ interests and demands. These legitimacy attacks typically revolve around four arguments or accusations:

  • That civil society organizations are self-appointed rather than elected, and thus do not represent the popular will. For example, the Hungarian government justified new restrictions on foreign-funded civil society organizations by arguing that “society is represented by the elected governments and elected politicians, and no one voted for a single civil organization.”
  • That civil society organizations receiving foreign funding are accountable to external rather than domestic constituencies, and advance foreign rather than local agendas. In India, for example, the Modi government has denounced foreign-funded environmental NGOs as “anti-national,” echoing similar accusations in Egypt, Macedonia, Romania, Turkey, and elsewhere.
  • That civil society groups are partisan political actors disguised as nonpartisan civic actors: political wolves in citizen sheep’s clothing. Governments denounce both the goals and methods of civic groups as being illegitimately political, and hold up any contacts between civic groups and opposition parties as proof of the accusation.
  • That civil society groups are elite actors who are not representative of the people they claim to represent. Critics point to the foreign education backgrounds, high salaries, and frequent foreign travel of civic activists to portray them as out of touch with the concerns of ordinary citizens and only working to perpetuate their own privileged lifestyle.

Attacks on civil society legitimacy are particularly appealing for populist leaders who draw on their nationalist, majoritarian, and anti-elite positioning to deride civil society groups as foreign, unrepresentative, and elitist. Other leaders borrow from the populist toolbox to boost their negative campaigns against civil society support. The overall aim is clear: to close civil society space, governments seek to exploit and widen existing cleavages between civil society and potential supporters in the population. Rather than engaging with the substantive issues and critiques raised by civil society groups, they draw public attention to the real and alleged shortcomings of civil society actors as channels for citizen grievances and demands.

The widening attacks on the legitimacy of civil society oblige civil society organizations and their supporters to revisit various fundamental questions: What are the sources of legitimacy of civil society? How can civil society organizations strengthen their legitimacy to help them weather government attacks and build strong coalitions to advance their causes? And how can international actors ensure that their support reinforces rather than undermines the legitimacy of local civic activism?

To help us find answers to these questions, we asked civil society activists working in ten countries around the world—from Guatemala to Tunisia and from Kenya to Thailand—to write about their experiences with and responses to legitimacy challenges. Their essays follow here. We conclude with a final section in which we extract and discuss the key themes that emerge from their contributions as well as our own research…

  1. Saskia Brechenmacher and Thomas Carothers, The Legitimacy Landscape
  2. César Rodríguez-Garavito, Objectivity Without Neutrality: Reflections From Colombia
  3. Walter Flores, Legitimacy From Below: Supporting Indigenous Rights in Guatemala
  4. Arthur Larok, Pushing Back: Lessons From Civic Activism in Uganda
  5. Kimani Njogu, Confronting Partisanship and Divisions in Kenya
  6. Youssef Cherif, Delegitimizing Civil Society in Tunisia
  7. Janjira Sombatpoonsiri, The Legitimacy Deficit of Thailand’s Civil Society
  8. Özge Zihnioğlu, Navigating Politics and Polarization in Turkey
  9. Stefánia Kapronczay, Beyond Apathy and Mistrust: Defending Civic Activism in Hungary
  10. Zohra Moosa, On Our Own Behalf: The Legitimacy of Feminist Movements
  11. Nilda Bullain and Douglas Rutzen, All for One, One for All: Protecting Sectoral Legitimacy
  10. Saskia Brechenmacher and Thomas Carothers, The Legitimacy Menu….(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI is already altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: The homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
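To make that workflow concrete, here is a minimal sketch of a rule-based intake bot of the kind described above. The `IntakeBot` class, the questions, and the field names are hypothetical illustrations, not Horvath’s actual system or any agency’s real script; the point is simply that the bot collects structured answers up front while a human case manager stays in the loop to verify the record and deliver the service.

```python
# A minimal, hypothetical sketch of a rule-based intake bot: it walks a person
# through the "paperwork" questions up front, so a case manager only has to
# verify the completed record (e.g., against an ID) when the person arrives.
from dataclasses import dataclass, field


@dataclass
class IntakeBot:
    # Fixed script of intake questions; a real deployment would be designed
    # together with case managers and the people who will actually use it.
    questions: list = field(default_factory=lambda: [
        ("name", "What name would you like us to use?"),
        ("need", "What do you need help with first (shelter, food, ID, health)?"),
        ("location", "What part of the city are you in right now?"),
    ])
    record: dict = field(default_factory=dict)

    def run(self, answer_fn):
        """Ask each question in turn; answer_fn supplies the reply (e.g., from SMS)."""
        for key, prompt in self.questions:
            self.record[key] = answer_fn(prompt)
        return self.record


if __name__ == "__main__":
    # Simulate a short conversation with canned replies.
    replies = iter(["Sam", "shelter", "downtown"])
    bot = IntakeBot()
    completed = bot.run(lambda prompt: next(replies))
    print(completed)  # the pre-filled "paperwork" a case manager later confirms
```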

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also to encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

Behavior Change for Good Initiative


“At the Behavior Change for Good Initiative, we know that solving the mystery of enduring behavior change offers an enormous opportunity to improve lives. We unite an interdisciplinary team of scientists with leading practitioners in education, healthcare, and consumer financial services, all of whom seek to address the question: How can we make behavior change stick?…

We are developing an interactive digital platform to improve daily decisions about health, education, and savings. For the first time, a world-class team of scientific experts will be able to continually test and improve a behavior change program by seamlessly incorporating the latest insights from their research into massive random-assignment experiments. Their interactive digital platform seeks to improve daily health, education, and savings decisions of millions…(More)”.

Friends with Academic Benefits


The new or interesting story isn’t just that Valerie, Betsy, and Steve’s friends had different social and academic impacts, but that they had various types of friendship networks. My research points to the importance of network structure—that is, the relationships among their friends—for college students’ success. Different network structures result from students’ experiences—such as race- and class-based marginalization on this predominantly White campus—and shape students’ experiences by helping or hindering them academically and socially.

I used social network techniques to analyze the friendship networks of 67 MU students and found they clumped into three distinctive types—tight-knitters, compartmentalizers, and samplers. Tight-knitters have one densely woven friendship group in which nearly all their friends are friends with one another. Compartmentalizers’ friends form two to four clusters, where friends know each other within clusters but rarely across them. And samplers make a friend or two from a variety of places, but the friends remain unconnected to each other. As shown in the figures, tight-knitters’ networks resemble a ball of yarn, compartmentalizers’ a bow-tie, and samplers’ a daisy. In these network maps, the person I interviewed is at the center and every other dot represents a friend, with lines representing connections among friends (that is, whether the person I interviewed believed that the two people knew each other). During the interviews, participants defined what friendship meant to them and listed as many friends as they liked (ranging from three to 45).
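As a rough illustration of how such ego networks can be typed, the sketch below uses the networkx library to measure how densely a respondent’s friends are tied to one another and whether they split into separate clusters. The thresholds and the `classify_friend_network` function are illustrative assumptions, not the classification rules used in this study.

```python
# Illustrative sketch (not the study's actual method): classify an ego network
# as tight-knitter, compartmentalizer, or sampler from the ties among friends.
import networkx as nx


def classify_friend_network(friend_graph: nx.Graph) -> str:
    """friend_graph contains only the friends and the ties among them
    (the interviewee at the center is omitted)."""
    density = nx.density(friend_graph)           # share of possible friend-friend ties present
    components = nx.number_connected_components(friend_graph)
    if density >= 0.6:                           # most friends know each other: a "ball of yarn"
        return "tight-knitter"
    if components >= 2 and density > 0.1:        # separate, internally connected clusters: a "bow-tie"
        return "compartmentalizer"
    return "sampler"                             # friends largely unconnected: a "daisy"


if __name__ == "__main__":
    g = nx.Graph()
    g.add_edges_from([("A", "B"), ("B", "C"), ("A", "C")])  # one cluster of friends
    g.add_edges_from([("D", "E"), ("E", "F"), ("D", "F")])  # a second, separate cluster
    print(classify_friend_network(g))  # -> "compartmentalizer"
```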

The students’ friendship network types influenced how friends mattered for their academic and social successes and failures. Like Valerie, most Black and Latina/o students were tight-knitters. Their dense friendship networks provided a sense of home as a minority on a predominantly White campus. Tight-knit networks could provide academic support and motivation (as they did for Valerie) or pull students down academically if their friends lacked academic skills and motivation. Most White students were compartmentalizers like Betsy, and they succeeded with moderate levels of social support from friends and with social support and academic support from different clusters. Samplers came from a range of class and race backgrounds. Like Steve, samplers typically succeeded academically without relying on their friends. Friends were fun people who neither helped nor hurt them academically. Socially, however, samplers reported feeling lonely and lacking social support….(More)”.

How open contracting helped fix Colombia’s biggest school meal program


Open Contracting Stories: “In the early hours of the morning, in an industrial area of Colombia’s capital, Bogotá, a warehouse hums with workers, their faces barely visible under white masks and hair nets. The walls are stacked with colored plastic crates. Filled with various fruit, cereals, drinks, and desserts, they will be packed into refrigerated trucks and delivered to public schools all over Bogotá before most children have settled in for their first classes. A similar operation is underway in five other warehouses across the city, as part of a $170 million program to ensure fresh, nutritious food reaches more than 800,000 hungry students between the ages of four and 18 every day.

Food delivery and its quality have not always been so streamlined. High poverty rates in the city mean that many children consume their main meal for the day at school. And getting those refreshments to the schools at over 700 locations each day is a huge logistical challenge. With a population of nearly nine million inhabitants, Bogotá is one of the largest cities in Latin America and one of the most traffic-congested cities in the world.

Then there’s the notorious inefficiency and corruption in the provision of school meals across Colombia. Suppliers throughout the country have regularly been accused of failing to deliver food or inflating prices in scandals that made national headlines. In the city of Cartagena, chicken breasts sent to schools cost four times as much as those at markets, and the children reportedly never received 30 million meals. In the Amazonas region, an investigation by the Comptroller General found the price of a food contract was inflated by more than 297 million pesos (US$100,000), including pasta purchased at more than three times the market rate….

The solution, based on the pilot and these conversations, was to divide the process in two to cut out the middlemen and reduce transaction costs. The first part was sourcing the food. The second was to organize the assembly and distribution of the snacks to every school.

Suppliers are now commissioned by participating in a tender for a framework agreement that sets the general conditions and price caps, while quantities and final prices are established when a purchase is needed.

“In a normal contract, we say, for example, ‘you will give me five apples and they will cost 100.’ In a framework agreement, we say ‘you will provide me apples for one year at a maximum price of X’, and each time we put up a purchase order, we have several suppliers and capped prices. So they bid on purchase orders when needed,” explains Penagos.

Each food item has several suppliers under this new framework agreement. So if one supplier can’t fulfill the purchase order or has a logistical issue, another supplier can take over. This prevents a situation where suppliers have so much bargaining power that they can set their own prices and conditions knowing that the administration can’t refuse because it would mean the children don’t receive the food.
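As a simplified model of the mechanism described above, the sketch below awards a purchase order to the lowest compliant bid under a capped price, with a fallback when a supplier cannot deliver. The item names, caps, and bid amounts are hypothetical placeholders, not Bogotá’s actual figures or its real procurement software.

```python
# Hypothetical sketch of awarding a purchase order under a framework agreement:
# pre-approved suppliers bid below a capped price, and if the lowest bidder
# cannot fulfill the order, it falls back to the next one.
from typing import Optional

# Framework agreement: per-item price caps agreed in the tender (illustrative only).
PRICE_CAPS = {"apples_per_kg": 4000}  # in pesos


def award_purchase_order(item: str, bids: dict[str, float],
                         unavailable: set[str]) -> Optional[str]:
    """Return the supplier awarded the order, or None if no valid bid exists."""
    cap = PRICE_CAPS[item]
    valid = {s: p for s, p in bids.items()
             if p <= cap and s not in unavailable}   # enforce the cap and availability
    if not valid:
        return None
    return min(valid, key=valid.get)                 # lowest compliant bid wins


if __name__ == "__main__":
    bids = {"Supplier A": 3500, "Supplier B": 3800, "Supplier C": 4200}
    print(award_purchase_order("apples_per_kg", bids, unavailable=set()))
    # If Supplier A has a logistical issue, the order falls back to Supplier B.
    print(award_purchase_order("apples_per_kg", bids, unavailable={"Supplier A"}))
```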

The purchase orders are filled each month on the government’s online marketplace, with the details of the order published for the public to see which supplier won…

Sharing information with the public, parents and potential suppliers was an important part of the plan, too. Details about how the meals were procured became available on a public online platform for all to see, in a way that was easy to understand.

Through a public awareness campaign, Angulo, the education secretary, told the public about the faults in the market that the secretariat had detected. They had changed the process of public contracting to be more transparent….(More).

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly being woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioral Insights office. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
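To make the comparison concrete, the snippet below computes the kind of ratio such a cost-benefit comparison rests on: outcome gained per inflation-adjusted dollar spent, for a nudge versus a traditional incentive. All figures are placeholders for illustration, not results from the study.

```python
# Illustrative only: compute cost-effectiveness as impact per inflation-adjusted
# dollar, the kind of ratio used to compare nudges with traditional incentives.
# All numbers below are hypothetical, not figures from the published research.

def adjust_for_inflation(cost: float, cpi_then: float, cpi_now: float) -> float:
    """Express a historical cost in today's dollars using CPI levels."""
    return cost * (cpi_now / cpi_then)


def cost_effectiveness(impact: float, cost: float) -> float:
    """Impact (e.g., additional people vaccinated) per dollar spent."""
    return impact / cost


if __name__ == "__main__":
    # Hypothetical: a mailed reminder (nudge) vs. a financial incentive program.
    nudge_cost = adjust_for_inflation(cost=10_000, cpi_then=218.0, cpi_now=245.0)
    incentive_cost = adjust_for_inflation(cost=120_000, cpi_then=218.0, cpi_now=245.0)
    print(f"Nudge: {cost_effectiveness(300, nudge_cost):.4f} extra vaccinations per $")
    print(f"Incentive: {cost_effectiveness(900, incentive_cost):.4f} extra vaccinations per $")
```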

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

Replicating the Justice Data Lab in the USA: Key Considerations


Blog by Tracey Gyateng and Tris Lumley: “Since 2011, NPC has researched, supported and advocated for the development of impact-focussed Data Labs in the UK. The goal has been to unlock government administrative data so that organisations (primarily nonprofits) who provide a social service can understand the impact of their services on the people who use them.

So far, one of these Data Labs, the Justice Data Lab, has been developed to measure re-offending outcomes, and others are currently being piloted for employment and education. Given our seven years of work in this area, we at NPC have decided to reflect on the key factors needed to create a Data Lab with our report: How to Create an Impact Data Lab. This blog outlines these factors, examines whether they are present in the USA, and asks what the next steps should be, drawing on the research undertaken with the Governance Lab….Below we examine the key factors and to what extent they appear to be present within the USA.

Environment: A broad culture that supports impact measurement. Similar to the UK, nonprofits in the USA are increasingly measuring the impact they have had on the participants of their service and sharing the difficulties of undertaking robust, high quality evaluations.

Data: Individual person-level administrative data. A key difference between the two countries is that, in the USA, personal data on social services tends to be held at a local, rather than central, level. In the UK, social services data such as reoffending, education and employment records are collated into a central database. In the USA, the federal government holds limited centrally collated personal data; instead, this data can be found at the state/city level….

A leading advocate: A Data Lab project team, and strong networks. Data Labs do not manifest by themselves. They require a lead agency to campaign with, and on behalf of, nonprofits to set out a persuasive case for their development. In the USA, we have developed a partnership with the Governance Lab to seek out opportunities where Data Labs can be established, but given the size of the country, there is scope for further collaborations and/or advocates to be identified and supported.

Customers: Identifiable organisations that would use the Data Lab. Initial discussions with several US nonprofits and academia indicate support for a Data Lab in their context. Broad consultation based on an agreed region and outcome(s) will be needed to fully assess the potential customer base.

Data owners: Engaged civil servants. Generating buy-in and persuading various stakeholders, including data owners, analysts and politicians, is a critical part of setting up a data lab. While the exact profiles of the right people to approach can only be assessed once a region and outcome(s) of interest have been chosen, there are encouraging signs, such as the passing of the Foundations for Evidence-Based Policy Making Act of 2017 in the House of Representatives which, among other things, mandates the appointment of “Chief Evaluation Officers” in government departments, suggesting that there is bipartisan support for increased data-driven policy evaluation.

Legal and ethical governance: A legal framework for sharing data. In the UK, all personal data is subject to data protection legislation, which provides standardised governance for how personal data can be processed across the country and within the European Union. A universal data protection framework does not exist within the USA; therefore, data sharing agreements between customers and government data-owners will need to be designed for the purposes of Data Labs, unless there are existing agreements that enable data sharing for research purposes. This will need to be investigated at the state/city level of a desired Data Lab.

Funding: Resource and support for driving the set-up of the Data Lab. Most of our policy lab case studies were funded by a mixture of philanthropy and government grants. It is expected that a similar mixed funding model will need to be created to establish Data Labs. One alternative is the model adopted by the Washington State Institute for Public Policy (WSIPP), which was created by the Washington State Legislature and is funded on a project basis, primarily by the state. Additionally, funding will be needed to enable advocates of a Data Lab to campaign for the service….(More)”.

How Democracy Can Survive Big Data


Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.

The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.

The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.

Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the first SAT) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.

For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.