Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


Report by Pew Research Center: “Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent Van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030….(More)”

Crisis Innovation Policy from World War II to COVID-19


Paper by Daniel P. Gross & Bhaven N. Sampat: “Innovation policy can be a crucial component of governments’ responses to crises. Because speed is a paramount objective, crisis innovation may also require different policy tools than innovation policy in non-crisis times, raising distinct questions and tradeoffs. In this paper, we survey the U.S. policy response to two crises where innovation was crucial to a resolution: World War II and the COVID-19 pandemic. After providing an overview of the main elements of each of these efforts, we discuss how they compare, and to what degree their differences reflect the nature of the central innovation policy problems and the maturity of the U.S. innovation system. We then explore four key tradeoffs for crisis innovation policy—top-down vs. bottom-up priority setting, concentrated vs. distributed funding, patent policy, and managing disruptions to the innovation system—and provide a logic for policy choices. Finally, we describe the longer-run impacts of the World War II effort and use these lessons to speculate on the potential long-run effects of the COVID-19 crisis on innovation policy and the innovation system….(More)”.

Bridging the global digital divide: A platform to advance digital development in low- and middle-income countries


Paper by George Ingram: “The world is in the midst of a fast-moving, Fourth Industrial Revolution (also known as 4IR or Industry 4.0), driven by digital innovation in the use of data, information, and technology. This revolution is affecting everything from how we communicate, to where and how we work, to education and health, to politics and governance. COVID-19 has accelerated this transformation as individuals, companies, communities, and governments move to virtual engagement. We are still discovering the advantages and disadvantages of a digital world.

This paper outlines an initiative that would allow the United States, along with a range of public and private partners, to seize the opportunity to reduce the digital divide between nations and people in a way that benefits inclusive economic advancement in low- and middle-income countries, while also advancing the economic and strategic interests of the United States and its partner countries.

As life increasingly revolves around digital technologies and innovation, countries are in a race to digitalize at a speed that threatens to leave behind the less advantaged—countries and underserved groups. Data in this paper documents the scope of the digital divide. With the Sustainable Development Goals (SDGs), the world committed to reducing poverty and advancing all aspects of the livelihood of nations and people. Countries that fail to progress along the path to 5G broadband cellular networks will be unable to unlock the benefits of the digital revolution and will be left behind. Donors are recognizing this and offering solutions, but in a one-off, disconnected fashion. Absent a comprehensive partnership approach that takes advantage of the comparative advantages of each, these well-intended efforts will not aggregate to the scale and speed required by the challenge….(More)”.

What Data About You Can the Government Get From Big Tech?


Jack Nicas at the New York Times: “The Justice Department, starting in the early days of the Trump administration, secretly sought data from some of the biggest tech companies about journalists, Democratic lawmakers and White House officials as part of wide-ranging investigations into leaks and other matters, The New York Times reported last week.

The revelations, which put the companies in the middle of a clash over the Trump administration’s efforts to find the sources of news coverage, raised questions about what sorts of data tech companies collect on their users, and how much of it is accessible to law enforcement authorities.

Here’s a rundown:

What kinds of data do the companies have? All sorts. Beyond basic data like users’ names, addresses and contact information, tech companies like Google, Apple, Microsoft and Facebook also often have access to the contents of their users’ emails, text messages, call logs, photos, videos, documents, contact lists and calendars.

Is that data accessible to law enforcement? Most of it is. But which data law enforcement can get depends on the sort of request officials make.

Perhaps the most common and basic request is a subpoena. U.S. government agencies and prosecutors can often issue subpoenas without approval from a judge, and lawyers can issue them as part of open court cases. Subpoenas are often used to cast a wide net for basic information that can help build a case and provide evidence needed to issue more powerful requests….(More)”.

Be Skeptical of Thought Leaders


Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial” and “accountable to people,” and that “avoid creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper exploring social and environmental risks, and after she expressed concerns over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and her vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratize the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)“.

Platform as a Rule Maker: Evidence from Airbnb’s Cancellation Policies


Paper by Jian Jia, Ginger Zhe Jin & Liad Wagman: “Digital platforms are not only match-making intermediaries but also establish internal rules that govern all users in their ecosystems. To better understand the governing role of platforms, we study two Airbnb pro-guest rules that pertain to guest and host cancellations, using data on Airbnb and VRBO listings in 10 US cities. We demonstrate that such pro-guest rules can drive demand and supply to and from the platform, as a function of the local platform competition between Airbnb and VRBO. Our results suggest that platform competition sometimes dampens a platform-wide pro-guest rule and sometimes reinforces it, often with heterogeneous effects on different hosts. This implies that platform competition does not necessarily mitigate a platform’s incentive to treat the two sides asymmetrically, and any public policy on platform competition must consider its implications for all sides….(More)”.

Citizen science allows people to ‘really know’ their communities


UGAResearch: “Local populations understand their communities best. They’re familiar both with points of pride and with areas that could be improved. But determining the nature of those improvements from best practices, as well as achieving community consensus on implementation, can present a different set of challenges.

Jerry Shannon, associate professor of geography in the Franklin College of Arts & Sciences, worked with a team of researchers to introduce a citizen science approach in 11 communities across Georgia, from Rockmart to Monroe to Millen. This work combines local knowledge with emerging digital technologies to bolster community-driven housing efforts in rural Georgia. His research was detailed in a paper, “‘Really Knowing’ the Community: Citizen Science, VGI and Community Housing Assessments,” published in December in the Journal of Planning Education and Research.

Shannon worked with the Georgia Initiative for Community Housing, managed out of the College of Family and Consumer Sciences (FACS), to create tools for communities to evaluate and launch plans to address their housing needs and revitalization. This citizen science effort resulted in a more diverse and inclusive body of data that incorporated local perspectives.

“Through this project, we hope to further support and extend these community-driven efforts to assure affordable, quality housing,” said Shannon. “Rural communities don’t have the resources internally to do this work themselves. We provide training and tools to these communities.”

As part of their participation in the GICH program, each Georgia community assembled a housing team consisting of elected officials, members of community organizations and housing professionals such as real estate agents. The team recruited volunteers from student groups and religious organizations to conduct so-called “windshield surveys,” where participants work from their vehicle or walk the neighborhoods….(More)”

Living in Data: A Citizen’s Guide to a Better Information Future


Book by Jer Thorp: “To live in data in the twenty-first century is to be incessantly extracted from, classified and categorized, statisti-fied, sold, and surveilled. Data—our data—is mined and processed for profit, power, and political gain. In Living in Data, Thorp asks a crucial question of our time: How do we stop passively inhabiting data, and instead become active citizens of it?

Threading a data story through hippo attacks, glaciers, and school gymnasiums, around colossal rice piles, and over active minefields, Living in Data reminds us that the future of data is still wide open, that there are ways to transcend facts and figures and to find more visceral ways to engage with data, that there are always new stories to be told about how data can be used.

Punctuated with Thorp’s original and informative illustrations, Living in Data not only redefines what data is, but reimagines who gets to speak its language and how to use its power to create a more just and democratic future. Timely and inspiring, Living in Data gives us a much-needed path forward….(More)”.

AI and Shared Prosperity


Paper by Katya Klinova and Anton Korinek: “Future advances in AI that automate away human labor may have stark implications for labor markets and inequality. This paper proposes a framework to analyze the effects of specific types of AI systems on the labor market, based on how much labor demand they will create versus displace, while taking into account that productivity gains also make society wealthier and thereby contribute to additional labor demand. This analysis enables ethically minded companies creating or deploying AI systems, as well as researchers and policymakers, to take into account the effects of their actions on labor markets and inequality, and therefore to steer progress in AI in a direction that advances shared prosperity and an inclusive economic future for all of humanity…(More)”.
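The paper’s central accounting idea — weighing the labor demand an AI system displaces against the demand it creates directly and the demand induced by productivity gains — can be illustrated with a toy calculation. This sketch is not the authors’ actual model; the function, parameters, and all numbers below are invented for illustration only.

```python
# Toy illustration (not the paper's model) of the framework's three channels:
# labor displaced by automation, labor demand created directly, and labor
# demand induced because productivity gains make society wealthier.

def net_labor_demand_effect(displaced, created, productivity_gain,
                            demand_elasticity):
    """Return the net change in labor demand (in jobs) from deploying
    a hypothetical AI system.

    displaced          -- jobs automated away by the system
    created            -- jobs the system creates directly (e.g., new roles)
    productivity_gain  -- output gain from the system, in $ millions
    demand_elasticity  -- jobs induced per $1M of additional output
    """
    induced = productivity_gain * demand_elasticity
    return created + induced - displaced

# A system that displaces 100 jobs and creates 30 directly, with $10M in
# productivity gains inducing 5 jobs per $1M of output:
effect = net_labor_demand_effect(displaced=100, created=30,
                                 productivity_gain=10, demand_elasticity=5)
print(effect)  # 30 + 50 - 100 = -20: labor-displacing on net
```

Under these invented numbers the system is labor-displacing on net; with a larger induced-demand channel the same system could be labor-augmenting, which is the kind of comparison the framework is meant to enable.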

Confronting Bias: BSA’s Framework to Build Trust in AI


BSA Software Alliance: “The Framework is a playbook organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. It can be leveraged by organizations that develop AI systems and companies that acquire and deploy such systems as the basis for:

– Internal Process Guidance. The Framework can be used as a tool for organizing and establishing roles, responsibilities, and expectations for internal risk management processes.

– Training, Awareness, and Education. The Framework can be used to build internal training and education programs for employees involved in developing and using AI systems, and for educating executives about the organization’s approach to managing AI bias risks.

– Supply Chain Assurance and Accountability. AI developers and organizations that deploy AI systems can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.

– Trust and Confidence. The Framework can help organizations communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.

– Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework can serve as an audit trail that can help organizations quickly diagnose and remediate potential problems…(More)”