Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


Report by Pew Research Center: “Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can configure off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030….(More)”

Crisis Innovation Policy from World War II to COVID-19


Paper by Daniel P. Gross & Bhaven N. Sampat: “Innovation policy can be a crucial component of governments’ responses to crises. Because speed is a paramount objective, crisis innovation may also require different policy tools than innovation policy in non-crisis times, raising distinct questions and tradeoffs. In this paper, we survey the U.S. policy response to two crises where innovation was crucial to a resolution: World War II and the COVID-19 pandemic. After providing an overview of the main elements of each of these efforts, we discuss how they compare, and to what degree their differences reflect the nature of the central innovation policy problems and the maturity of the U.S. innovation system. We then explore four key tradeoffs for crisis innovation policy—top-down vs. bottom-up priority setting, concentrated vs. distributed funding, patent policy, and managing disruptions to the innovation system—and provide a logic for policy choices. Finally, we describe the longer-run impacts of the World War II effort and use these lessons to speculate on the potential long-run effects of the COVID-19 crisis on innovation policy and the innovation system….(More)”.

Bridging the global digital divide: A platform to advance digital development in low- and middle-income countries


Paper by George Ingram: “The world is in the midst of a fast-moving, Fourth Industrial Revolution (also known as 4IR or Industry 4.0), driven by digital innovation in the use of data, information, and technology. This revolution is affecting everything from how we communicate, to where and how we work, to education and health, to politics and governance. COVID-19 has accelerated this transformation as individuals, companies, communities, and governments move to virtual engagement. We are still discovering the advantages and disadvantages of a digital world.

This paper outlines an initiative that would allow the United States, along with a range of public and private partners, to seize the opportunity to reduce the digital divide between nations and people in a way that benefits inclusive economic advancement in low- and middle-income countries, while also advancing the economic and strategic interests of the United States and its partner countries.

As life increasingly revolves around digital technologies and innovation, countries are in a race to digitalize at a speed that threatens to leave behind the less advantaged—countries and underserved groups. Data in this paper documents the scope of the digital divide. With the Sustainable Development Goals (SDGs), the world committed to reduce poverty and advance all aspects of the livelihood of nations and people. Countries that fail to progress along the path to 5G broadband cellular networks will be unable to unlock the benefits of the digital revolution and will be left behind. Donors are recognizing this and offering solutions, but in a one-off, disconnected fashion. Absent a comprehensive partnership approach that takes advantage of the comparative advantage of each, these well-intended efforts will not aggregate to the scale and speed required by the challenge….(More)”.

Introducing the AI Localism Repository


The GovLab: “Artificial intelligence is here to stay. As this technology advances—both in its complexity and ubiquity across our societies—decision-makers must address the growing nuances of AI regulation and oversight. Early last year, The GovLab’s Stefaan Verhulst and Mona Sloane coined the term “AI localism” to describe how local governments have stepped up to regulate AI policies, design governance frameworks, and monitor AI use in the public sector. 

While top-level regulation remains scant, many municipalities have taken to addressing AI use in their communities. Today, The GovLab is proud to announce the soft launch of the AI Localism Repository. This living platform is a curated collection of AI localism initiatives across the globe, categorized by geographic region, type of technological and governmental innovation in AI regulation, mechanism of governance, and sector focus.

We invite visitors to explore this repository and learn more about the inventive measures cities are taking to control how, when, and why AI is being used by public authorities. We also welcome additional case study submissions, which can be sent to us via Google Form….(More)”

Privacy Tech’s Third Generation


“A Review of the Emerging Privacy Tech Sector” by Privacy Tech Alliance and Future of Privacy Forum: “As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding….(More)”.

Privacy and Data Protection in Academia


Report by IAPP: “Today, demand for qualified privacy professionals is surging. Soon, societal, business and government needs for practitioners with expertise in the legal, technical and business underpinnings of data protection could far outstrip supply. To fill this gap, universities around the world are adding privacy curricula in their law, business and computer science schools. The IAPP’s Westin Research Center has catalogued these programs with the aim of promoting, catalyzing and supporting academia’s growing efforts to build an on-ramp to the privacy profession.

The information presented in our inaugural issue of “Privacy and Data Protection in Academia, A Global Guide to Curricula” represents the results of our publicly available survey. The programs included completed the survey voluntarily. The IAPP then organized the information provided, and the designated contact at each institution verified the accuracy of the information presented.

This is not a comprehensive list of colleges and universities offering privacy and data protection related curricula. We encourage higher education institutions interested in being included to complete the survey, as the IAPP will periodically publish updates….(More)”.

Confronting Bias: BSA’s Framework to Build Trust in AI


BSA Software Alliance: “The Framework is a playbook organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. It can be leveraged by organizations that develop AI systems and companies that acquire and deploy such systems as the basis for:
– Internal Process Guidance. The Framework can be used as a tool for organizing and establishing roles, responsibilities, and expectations for internal risk management processes.
– Training, Awareness, and Education. The Framework can be used to build internal training and education programs for employees involved in developing and using AI systems, and for educating executives about the organization’s approach to managing AI bias risks.
– Supply Chain Assurance and Accountability. AI developers and organizations that deploy AI systems can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.
– Trust and Confidence. The Framework can help organizations communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.
– Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework can serve as an audit trail that can help organizations quickly diagnose and remediate potential problems…(More)”

Help us identify how data can make food healthier for us and the environment


The GovLab: “To make food production, distribution, and consumption healthier for people, animals, and the environment, we need to redesign today’s food systems. Data and data science can help us develop sustainable solutions — but only if we manage to define those questions that matter.

Globally, we are witnessing the damage that unsustainable farming practices have caused to the environment. At the same time, climate change is making our food systems more fragile, while the global population continues to increase rapidly. To feed everyone, we need to become more sustainable in our approach to producing, consuming, and disposing of food.

Policymakers and stakeholders need to work together to reimagine food systems and collectively make them more resilient, healthy, and inclusive.

Data will be integral to understanding where failures and vulnerabilities exist and what methods are needed to rectify them. Yet, the insights generated from data are only as good as the questions they seek to answer. To become smarter about current and future food systems using data, we need to ask the right questions first.

That’s where The 100 Questions Initiative comes in. It starts from the premise that to leverage data in a responsible and effective manner, data initiatives should be driven by demand, not supply. Working with a global cohort of experts, The 100 Questions seeks to map the most pressing and potentially impactful questions that data and data science can answer.

Today, the Barilla Foundation, the Center for European Policy Studies, and The Governance Lab at NYU Tandon School of Engineering are announcing the launch of the Food Systems Sustainability domain of The 100 Questions. We seek to identify the 10 most important questions that need to be answered to make food systems more sustainable…(More)”.

A fair data economy is built upon collaboration


Report by Heli Parikka, Tiina Härkönen and Jaana Sinipuro: “For a human-driven and fair data economy to work, it must be based on three important and interconnected aspects: regulation based on ethical values; technology; and new kinds of business models. With a human-driven approach, individual and social interests determine the business conditions and data is used to benefit individuals and society.

When developing a fair data economy, the aim has been to use existing technologies, operating models and concepts across the boundaries between different sectors. The goal is to enable not only new data-based business but also an easier digital everyday life based on the more efficient and personal management of data. The human-driven approach is closely linked to the MyData concept.

At the beginning of the IHAN project, there were very few easy-to-use, individually tailored digital services. For example, the most significant data-based consumer services were designed on the basis of the needs of large corporations. To create demand, prevailing mindsets had to change: decision-makers needed to be encouraged to change direction, companies had to find new business based on new business models, and individuals had to be persuaded to demand change.

The terms and frameworks of the platform and data economies needed further clarification for the development of a fair data economy. We sought out examples from other sectors and found that, in addition to “human-driven”, another defining concept that emerged was “fair”, with fairness defined as a key goal in the IHAN project. A fair model also takes financial aspects into account and recognises the significance of companies and new services as a source of well-being.

Why did Sitra want to tackle this challenge to begin with? What had thus far been available to people was an unfair data economy model, which needed to be changed. The data economy direction had been defined by a handful of global companies, whose business models are based on collecting and managing data on their own platforms and on their own terms. There was a need to develop an alternative, a European data economy model.

One of the tasks of the future fund is to foresee future trends, the fair and human-driven use of data being one of them. The objective was to approach the theme in a pluralistic manner from the perspectives of different participants in society. Sitra’s unique position as an independent future fund made it possible to launch the project.

A fair data economy has become one of Sitra’s strategic spearheads and a new theme is being prepared at the time of the writing of this publication. The lessons learned and tools created so far will be moved under that theme and developed further, making them available to everyone who needs them….(More)“.

The Coronavirus Pandemic Creative Responses Archive


National Academies of Sciences: “Creativity often flourishes in stressful times because innovation evolves out of need. During the coronavirus pandemic, we are witnessing a range of creative responses from individuals, communities, organizations, and industries. Some are intensely personal, others expansively global—mirroring the many ways the pandemic has affected us. What do these responses to the pandemic tell us about our society, our level of resilience, and how we might imagine the future? Explore the Coronavirus Pandemic Creative Responses Archive…