Towards a new generation of public services: Designers Italia’s design kits


Matteo DeSanti: “Our lives are becoming more and more digital and we expect the public services we use every day to be digital as well: booking a medical examination, receiving a pension, paying the waste tax, obtaining an authorization or a document. Moreover, we would like for all digital public services to have standards of quality comparable to the best private services we use to inform ourselves, make purchases or reservations. When using a digital public service, we would like to have concrete advantages, in particular: higher quality and ease of use, better accessibility, more flexibility and speed.

As the Three-Year Plan for Digital Transformation explains, this is a unique opportunity to design a new generation of public services, making citizens and businesses the starting point rather than simply complying with rules and ordinances. We need the right professionals, the right skills and the right tools: this is why we created Designers Italia and it is also why today we are launching the new design system.

The Public Service Design Kits introduce a method of work based on user research, the rapid exploration of solutions and the development of effective and sustainable products. The kits also push strongly towards higher standards, providing interface components and code so that the country’s thousands of administrations don’t have to waste time “reinventing the wheel every time.”

The fourteen kits we provide cover all aspects of a service design process, from research to user interface, from prototyping to development, and each kit offers different advantages….(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread unfamiliarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI is already altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Artificial Unintelligence


Book by Meredith Broussard: “A guide to understanding the inner workings and outer limits of technology and why we should never assume that computers always get it right.

In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right.

Making a case against technochauvinism—the belief that technology is always the solution—Broussard argues that it’s just not true that social problems would inevitably retreat before a digitally enabled Utopia. To prove her point, she undertakes a series of adventures in computer programming. She goes for an alarming ride in a driverless car, concluding “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone…(More)”.
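
To make the Titanic exercise concrete, here is a minimal sketch of the kind of model such an experiment might use. This is not Broussard’s code; the seaborn copy of the dataset, the feature choices, and the train/test split are illustrative assumptions.

```python
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The classic Titanic passenger manifest ships with seaborn as a sample dataset.
titanic = sns.load_dataset("titanic").dropna(subset=["age", "fare", "pclass", "sex", "survived"])

# A few simple features: ticket class, age, fare, and sex.
X = titanic[["pclass", "age", "fare"]].copy()
X["is_female"] = (titanic["sex"] == "female").astype(int)
y = titanic["survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```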

Using Data to Inform the Science of Broadening Participation


Donna K. Ginther at the American Behavioral Scientist: “In this article, I describe how data and econometric methods can be used to study the science of broadening participation. I start by showing that theory can be used to structure the approach to using data to investigate gender and race/ethnicity differences in career outcomes. I also illustrate this process by examining whether women of color who apply for National Institutes of Health research funding are confronted with a double bind where race and gender compound their disadvantage relative to Whites. Although high-quality data are needed for understanding the barriers to broadening participation in science careers, they cannot fully explain why women and underrepresented minorities are less likely to be scientists or have less productive science careers. As researchers, it is important to use all forms of data—quantitative, experimental, and qualitative—to deepen our understanding of the barriers to broadening participation….(More)”.
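
A sketch of the kind of specification this implies — not Ginther’s actual model; the file name, variable names, and controls below are hypothetical — is a logit in which the race-by-gender interaction term captures any compounded disadvantage beyond the separate race and gender effects:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical applicant-level data: one row per NIH grant application, with
# an award indicator, race and gender indicators, and a few controls.
applications = pd.read_csv("nih_applications.csv")

# The female:black interaction is the "double bind" term: a negative and
# significant coefficient would indicate a penalty beyond the sum of the
# separate race and gender effects.
model = smf.logit(
    "awarded ~ female * black + C(degree_year) + C(institution_type)",
    data=applications,
).fit()
print(model.summary())
```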

A survey of incentive engineering for crowdsourcing


Conor Muldoon, Michael J. O’Grady and Gregory M. P. O’Hare in the Knowledge Engineering Review: “With the growth of the Internet, crowdsourcing has become a popular way to perform intelligence tasks that hitherto would either be performed internally within an organization or not undertaken due to prohibitive costs and the lack of an appropriate communications infrastructure.

In crowdsourcing systems, where multiple agents are not under the direct control of a system designer, it cannot be assumed that agents will act in a manner that is consistent with the objectives of the system designer or principal agent. In systems that offer financial or other rewards, agents whose goal is to maximize their return will adopt strategies to game the system if appropriate mitigating measures are not put in place.

The motivational and incentivization research space is quite large; it incorporates diverse techniques from a variety of different disciplines including behavioural economics, incentive theory, and game theory. This paper specifically focusses on game theoretic approaches to the problem in the crowdsourcing domain and places it in the context of the wider research landscape. It provides a survey of incentive engineering techniques that enable the creation of apt incentive structures in a range of different scenarios….(More)”.
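
As a toy illustration of this point (not taken from the survey; all parameter values below are assumed), the sketch simulates a worker who can either exert costly effort or answer at random: with a flat per-task payment, guessing pays more, while adding random gold-standard audits with a penalty makes effort the better strategy.

```python
import random

random.seed(0)

PAY = 1.0                 # payment per completed task
EFFORT_COST = 0.3         # cost to the worker of doing the task properly
P_CORRECT_EFFORT = 0.95   # accuracy when exerting effort
P_CORRECT_GUESS = 0.5     # accuracy when answering at random
AUDIT_RATE = 0.2          # fraction of tasks checked against gold-standard answers
PENALTY = 5.0             # penalty when an audited answer turns out to be wrong

def average_payoff(effort: bool, audited: bool, trials: int = 100_000) -> float:
    """Simulate the average per-task payoff for a given strategy and audit policy."""
    p_correct = P_CORRECT_EFFORT if effort else P_CORRECT_GUESS
    total = 0.0
    for _ in range(trials):
        correct = random.random() < p_correct
        pay = PAY
        if audited and random.random() < AUDIT_RATE and not correct:
            pay -= PENALTY
        total += pay - (EFFORT_COST if effort else 0.0)
    return total / trials

for audited in (False, True):
    print(f"audits={audited}: effort={average_payoff(True, audited):.2f}, "
          f"guessing={average_payoff(False, audited):.2f}")
```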

Digitalization and Public Sector Transformations


Book by Jannick Schou and Morten Hjelholt: “This book provides a study of governmental digitalization, an increasingly important area of policymaking within advanced capitalist states. It dives into a case study of digitalization efforts in Denmark, fusing a national policy study with local institutional analysis. Denmark is often framed as an international forerunner in terms of digitalizing its public sector and thus provides a particularly instructive setting for understanding this new political instrument.

Advancing a cultural political economy approach, Schou and Hjelholt argue that digitalization is far from a quick technological fix. Instead, this area must be located against wider transformations within the political economy of capitalist states. In doing so, the book excavates the political roots of digitalization and reveals its institutional consequences. It shows how new relations are being formed between the state and its citizens.

Digitalization and Public Sector Transformations pushes for a renewed approach to governmental digitalization and will be of interest to scholars working in the intersections of critical political economy, state theory and policy studies…(More)”.

Lessons from DataRescue: The Limits of Grassroots Climate Change Data Preservation and the Need for Federal Records Law Reform


Essay by Sarah Lamdan at the University of Pennsylvania Law Review: “Shortly after Donald Trump’s victory in the 2016 Presidential election, but before his inauguration, a group of concerned scholars organized in cities and college campuses across the United States, starting with the University of Pennsylvania, to prevent climate change data from disappearing from government websites. The move was led by Michelle Murphy, a scholar who had previously observed the destruction of climate change data and muzzling of government employees in Canadian Prime Minister Stephen Harper’s administration. The “guerrilla archiving” project soon swept the nation, drawing media attention as its volunteers scraped and preserved terabytes of climate change and other environmental data and materials from .gov websites. The archiving project felt urgent and necessary, as the federal government is the largest collector and archive of U.S. environmental data and information.

As it progressed, the guerrilla archiving movement became more defined: two organizations developed, the DataRefuge at the University of Pennsylvania, and the Environmental Data & Governance Initiative (EDGI), which was a national collection of academics and non-profits. These groups co-hosted data gathering sessions called DataRescue events. I joined EDGI to help members work through administrative law concepts and file Freedom of Information Act (FOIA) requests. The day-long archiving events were immensely popular and widely covered by media outlets. Each weekend, hundreds of volunteers would gather to participate in DataRescue events in U.S. cities. I helped organize the New York DataRescue event, which was held less than a month after the initial event in Pennsylvania. We had to turn people away as hundreds of local volunteers lined up to help and dozens more arrived in buses and cars, exceeding the space constraints of NYU’s cavernous MakerSpace engineering facility. Despite the popularity of the project, however, DataRescue’s goals seemed far-fetched: how could thousands of private citizens learn the contours of multitudes of federal environmental information warehouses, gather the data from all of them, and then re-post the materials in a publicly accessible format?…(More)”.
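
In practice, the core archiving step was mundane: fetch a page or file, save an exact copy, and record provenance. A minimal sketch (the URL and file names are placeholders, not an actual DataRescue target list) might look like this:

```python
import hashlib
import json
import pathlib
from datetime import datetime, timezone

import requests

# Placeholder URL; volunteers worked from curated lists of agency pages and datasets.
url = "https://www.example.gov/climate/dataset-index.html"

response = requests.get(url, timeout=30)
response.raise_for_status()

archive_dir = pathlib.Path("archive")
archive_dir.mkdir(exist_ok=True)

# Save the raw bytes plus minimal provenance metadata: where and when the copy
# was made, and a checksum so later copies can be verified against this one.
page_path = archive_dir / "dataset-index.html"
page_path.write_bytes(response.content)

metadata = {
    "url": url,
    "retrieved_at": datetime.now(timezone.utc).isoformat(),
    "sha256": hashlib.sha256(response.content).hexdigest(),
}
(archive_dir / "dataset-index.metadata.json").write_text(json.dumps(metadata, indent=2))
```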

A Race to the Top? The Aid Transparency Index and the Social Power of Global Performance Indicators


Paper by Dan Honig and Catherine Weaver: “Recent studies on global performance indicators (GPIs) reveal the distinct power that non-state actors can accrue and exercise in world politics. How and when does this happen? Using a mixed-methods approach, we examine the impact of the Aid Transparency Index (ATI), an annual rating and rankings index produced by the small UK-based NGO Publish What You Fund.

The ATI seeks to shape development aid donors’ behavior with respect to their transparency – the quality and kind of information they publicly disclose. To investigate the ATI’s effect, we construct an original panel dataset of donor transparency performance before and after ATI inclusion (2006-2013) to test whether, and which, donors alter their behavior in response to inclusion in the ATI. To further probe the causal mechanisms that explain variations in donor behavior, we use qualitative research, including over 150 key informant interviews conducted between 2010 and 2017.
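
One standard way to implement that before-and-after comparison on a donor-year panel is a two-way fixed-effects regression; the sketch below is only illustrative (the file and variable names are hypothetical, not the paper’s specification):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical donor-year panel, 2006-2013: one row per donor per year, with a
# transparency score and an indicator for whether the donor was included in the
# Aid Transparency Index that year.
panel = pd.read_csv("donor_transparency_panel.csv")

# Donor and year fixed effects absorb time-invariant donor traits and common
# shocks; the in_ati coefficient captures the within-donor change in transparency
# associated with ATI inclusion. Standard errors are clustered by donor.
model = smf.ols(
    "transparency_score ~ in_ati + C(donor) + C(year)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["donor"]})
print(model.summary())
```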

Our analysis uncovers the conditions under which the ATI influences powerful aid donors. Moreover, our mixed methods evidence reveals how this happens. Consistent with Kelley & Simmons’ central argument that GPIs exercise influence via social pressure, we find that the ATI shapes donor behavior primarily via direct effects on elites: the diffusion of professional norms, organizational learning, and peer pressure….(More)”.

Use our personal data for the common good


Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost….But how else might we ensure the use of data for the public good rather than for purely private gain?

Here are two proposals towards this goal.

First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.

My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.

Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensures the data are used for the common good.

Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation system: The homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
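
A very rough sketch of that kind of information-and-referral bot — not Horvath’s actual system; the intents and responses are invented for illustration — can be as simple as keyword matching with a human fallback:

```python
# Minimal keyword-matching intake bot: it answers basic service questions and
# hands anything it does not recognize to a person.
RESPONSES = {
    "shelter": "The nearest intake center is open 24/7; bring any ID you have.",
    "food": "Free meals are served daily from 11am to 1pm at the community kitchen.",
    "id": "You can start an ID replacement request here; a caseworker will finish it with you.",
}

def reply(message: str) -> str:
    """Return a canned answer for a recognized keyword, or a fallback."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "I can help with shelter, food, or ID questions, or connect you with a caseworker."

if __name__ == "__main__":
    print(reply("Where can I find a shelter tonight?"))
    print(reply("I lost my ID card"))
```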

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.