Report by the European Parliament Think Tank: “Regarding the availability and comparability of health data, the Covid-19 pandemic revealed that the EU has no clear health data architecture. The lack of harmonisation in these practices and the absence of an EU-level centre for data analysis and use to support a better response to public health crises are the focus of this study. Through extensive desk review, interviews with key actors, and enquiry into experiences from outside the EU/EEA area, this study highlights that the EU must have the capacity to use data effectively in order to make data-supported public health policy proposals and inform political decisions. The possible functions and characteristics of an EU health data centre are outlined. The centre can only fulfil its mandate if it has the power and competency to influence Member State public-health-relevant data ecosystems and to link institutionally with their national-level actors. The institutional structure, its possible activities and, in particular, its use of advanced technologies such as AI are examined in detail…(More)”.
Government data management for the digital age
Essay by Axel Domeyer, Solveigh Hieronimus, Julia Klier, and Thomas Weber: “Digital society’s lifeblood is data—and governments have lots of data, representing a significant latent source of value for both the public and private sectors. If used effectively, and with due regard to ever-increasing data protection and privacy requirements, data can simplify delivery of public services, reduce fraud and human error, and catalyze massive operational efficiencies.
Despite these potential benefits, governments around the world remain largely unable to capture the opportunity. The key reason is that data are typically dispersed across a fragmented landscape of registers (datasets used by government entities for a specific purpose), which are often managed in organizational silos. Data are routinely stored in formats that are hard to process or in places where digital access is impossible. The consequence is that data are not available where needed, progress on digital government is inhibited, and citizens have little transparency on what data the government stores about them or how it is used.
Only a handful of countries have taken significant steps toward addressing these challenges. As other governments consider their options, the experiences of these countries may provide them with valuable guidance and also reveal five actions that can help governments unlock the value that is on their doorsteps.
As societies take steps to enhance data management, questions on topics such as data ownership, privacy concerns, and appropriate measures against security breaches will need to be answered by each government. The purpose of this article is to outline the positive benefits of modern data management and provide a perspective on how to get there…(More)”.
Old Cracks, New Tech
Paper for the Oxford Commission on AI & Good Governance: “Artificial intelligence (AI) systems are increasingly touted as solutions to many complex social and political issues around the world, particularly in developing countries like Kenya. Yet AI has also exacerbated cleavages and divisions in society, in part because those who build the technology often do not have a strong understanding of the politics of the societies in which the technology is deployed.
In her new report, ‘Old Cracks, New Tech: Artificial Intelligence, Human Rights, and Good Governance in Highly Fragmented and Socially Stratified Societies: The Case of Kenya’, writer and activist Nanjala Nyabola explores the Kenyan government’s policy on AI and blockchain technology and evaluates its success. Commissioned by the Oxford Commission on AI and Good Governance (OxCAIGG), the report highlights lessons learnt from the Kenyan experience and sets out four key recommendations to help government officials and policymakers ensure good governance of AI in public and private contexts in Kenya.
The report recommends:
- Conducting a deeper and more wide-ranging analysis of the political implications of existing and proposed applications of AI in Kenya, including comparisons with other countries where similar technology has been deployed.
- Carrying out a comprehensive review of ongoing implementations of AI in both private and public contexts in Kenya in order to identify existing legal and policy gaps.
- Conducting deeper legal research into developing meaningful legislation to govern the development and deployment of AI technology in Kenya. In particular, a framework for the implementation of the Data Protection Act (2019) vis-à-vis AI and blockchain technology is urgently required.
- Arranging training for local political actors and researchers on the risks and opportunities for AI to empower them to independently evaluate proposed interventions with due attention to the local context…(More)”.
Urban Future with a Purpose
Deloitte Report: “The world is going through a transformative journey, and so are cities. As centers of innovation and shared prosperity, cities are where the future happens first; envisioning the future of cities therefore means anticipating the future of human living.
This report examines 12 main trends that will affect urban living in the near future, explaining each trend, its impact and key case studies where it is being implemented. Under this research project, Deloitte listened to experts from all over the globe, including mayors of reference cities, leaders of international organizations, urban policy institutions, and renowned urban planners, practitioners and researchers. Their views and insights offer further depth to our analysis.
Covering domains such as Mobility, Living & Health, Government & Education, Energy & Environment, Safety & Security and Economy, our 360-degree analysis aims to be a constructive tool for everyone to use in what moves us day by day: foretelling, designing and building better cities…(More)”.
New York City to Require Food Delivery Services to Share Customer Data with Restaurants
Hunton Privacy Blog: “On August 29, 2021, a New York City Council bill amending the New York City Administrative Code to address customer data collected by food delivery services from online orders became law after the 30-day period for the mayor to sign or veto lapsed. Effective December 27, 2021, the law will permit restaurants to request customer data from third-party food delivery services and require delivery services to provide, on at least a monthly basis, such customer data until the restaurant “requests to no longer receive such customer data.” Customer data includes name, phone number, email address, delivery address and contents of the order.
Although customers are permitted to request that their customer data not be shared, the presumption under the law is that “customers have consented to the sharing of such customer data applicable to all online orders, unless the customer has made such a request in relation to a specific online order.” The food delivery services are required to provide on their websites a way for customers to request that their data not be shared “in relation to such online order.” To “assist its customers with deciding whether their data should be shared,” a delivery service must disclose to the customer (1) the data that may be shared with the restaurant and (2) the restaurant fulfilling the order as the recipient of the data.
The law will permit restaurants to use the customer data for marketing and other purposes, and prohibit delivery apps from restricting such activities by restaurants. Restaurants that receive the customer data, however, must allow customers to request and delete their customer data. In addition, restaurants are not permitted to sell, rent or disclose customer data to any other party in exchange for financial benefit, except with the express consent of the customer….(More)”.
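The sharing mechanics described above (consent presumed for every online order unless the customer opts out of that specific order, with a defined set of customer data fields) can be pictured as a simple filter. The record shape and field names below are illustrative assumptions, not the statutory text:

```python
from dataclasses import dataclass

@dataclass
class OnlineOrder:
    customer_name: str
    phone: str
    email: str
    delivery_address: str
    contents: list
    opted_out: bool  # customer asked not to share data for this specific order

def monthly_share(orders):
    """Return the customer records a delivery service would pass to the
    restaurant: consent is presumed unless the customer opted out of
    sharing for that particular online order."""
    return [
        {
            "name": o.customer_name,
            "phone": o.phone,
            "email": o.email,
            "delivery_address": o.delivery_address,
            "order_contents": o.contents,
        }
        for o in orders
        if not o.opted_out
    ]
```

Under this reading, an opted-out order simply drops out of the monthly feed while all other orders flow through with the five statutory data elements.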
Exploring a new governance agenda: What are the questions that matter?
Article by Nicola Nixon, Stefaan Verhulst, Imran Matin & Philips J. Vermonte: “…Late last year, we – the Governance Lab at NYU, the CSIS Indonesia, the BRAC Institute of Governance and Development, Bangladesh and The Asia Foundation – joined forces across New York, Jakarta, Dhaka, Hanoi, and San Francisco to launch the 100 Governance Questions Initiative. This is the latest iteration of the GovLab’s broader initiative to map questions across several domains.
We live in an era marked by an unprecedented amount of data. Anyone who uses a mobile phone or accesses the internet is generating vast streams of information. Covid-19 has only intensified this phenomenon.
Although this data holds tremendous potential for positive social transformation, much of that potential goes unfulfilled. In the development context, one chief problem is that data initiatives are often driven by supply (i.e., what data or data solutions are available?) rather than demand (what problems actually need solutions?). Too many projects begin with the database, the app or the dashboard, beholden to the seduction of technology, and now many parts of the developing world are graveyards of tech pilots. As is well established in development theory, though not yet fully in practice, solution-driven governance interventions are destined to fail.
The 100 Questions Initiative, pioneered by the GovLab, seeks to bridge the chasm between supply and demand. It begins not by searching for what data is available, but by asking important questions about the biggest challenges societies and countries face, and then seeking more targeted and relevant data solutions. In doing this, it narrows the gap between policymakers and constituents, providing opportunities for improved evidence-based policy and community engagement in developing countries. As part of this initiative, we seek to define the ten most important questions across several domains, including Migration, Gender, Employment, the Future of Work and, now, Governance.
On this occasion, we invited over 100 experts and practitioners in governance and data science, whom we call “bilinguals”, from various organizations, companies and government agencies to identify what they see as the most pressing governance questions in their respective domains. These bilinguals were encouraged to prioritize potential impact, novelty and feasibility in their questioning, moving toward a roadmap for data-driven action and collaboration that is both actionable and ambitious.
By June, the bilinguals had articulated 170 governance-related questions. Over the next couple of months, these were sorted, discussed and refined during two rounds of collaboration with the bilinguals: first to narrow down to the top 40, and then to the top 10. Bilinguals were asked: what, to them, are the most significant governance questions we must answer with data today? The result is the following 10 questions:…(More)” (Public Voting Platform).
Climate change versus children: How a UNICEF data collaborative gave birth to a risk index
Jess Middleton at DataIQ: “Almost a billion children face climate-related disasters in their lifetime, according to UNICEF’s new Children’s Climate Risk Index (CCRI).
The CCRI is the first analysis of climate risk specifically from a child’s perspective. It reveals that children in the Central African Republic, Chad and Nigeria are at the highest risk from climate and environmental shocks, based on their access to essential services…
Young climate activists including Greta Thunberg contributed a foreword to the report that introduced the index; and the project has added another layer of pressure on governments failing to act on climate change in the run-up to the 2021 United Nations Climate Change Conference – set to be held in Glasgow in November.
While these statistics make for grim reading, the collective effort undertaken to create the Index is evidence of the power of data as a tool for advocacy and the role that data collaboratives can play in shaping positive change.
The CCRI is underpinned by data that was sourced, collated and analysed by the Data for Children Collaborative with UNICEF, a partnership between UNICEF, the Scottish Government and University of Edinburgh hosted by The Data Lab.
The collaboration brings together practitioners from diverse backgrounds to provide data-driven solutions to issues faced by children around the world.
For work on the CCRI, the collaborative sought data, skills and expertise from academia (Universities of Southampton, Edinburgh, Stirling, Highlands and Islands) as well as the public and private sectors (ONS-FCDO Data Science Hub, Scottish Alliance for Geoscience, Environment & Society).
This variety of expertise provided the knowledge required to build the two main pillars of input for the CCRI: socioeconomic and climate science data.
Socioeconomic experts sourced data and provided analytical expertise in the context of child vulnerability, social statistics, biophysical processes and statistics, child welfare and child poverty.
Climate experts focused on factors such as water scarcity, flood exposure, coastal flood risk, pollution and exposure to vector borne disease.
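One way to picture how two pillars of this kind fold into a single index is the sketch below. It assumes indicator scores already scaled to 0–10 and a simple average of pillar scores; the published CCRI methodology may weight and aggregate its indicators differently, so this is an illustration of the structure, not UNICEF's exact formula:

```python
def pillar_score(indicator_scores):
    """Average a set of 0-10 indicator scores into one pillar score."""
    return sum(indicator_scores) / len(indicator_scores)

def child_climate_risk(exposure_scores, vulnerability_scores):
    """Combine a climate/environmental exposure pillar (e.g. water
    scarcity, flood exposure, pollution) with a child vulnerability
    pillar (e.g. access to essential services, child poverty) into a
    single 0-10 risk score for a country."""
    return (pillar_score(exposure_scores) + pillar_score(vulnerability_scores)) / 2
```

The key design point the structure captures is that high exposure alone does not produce the highest scores: a country ranks worst when severe climate shocks coincide with weak access to the services that help children cope.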
The success of the project hinged on the effective collaboration between distinct areas of expertise to deliver on UNICEF’s problem statement.
The director of the Data for Children Collaborative with UNICEF, Alex Hutchison, spoke with DataIQ about the success of the project, the challenges the team faced, and the benefits of working as part of a diverse collective…(More)” (Report).
UN urges moratorium on use of AI that imperils human rights
Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.
Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.
Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.
AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.
Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.
“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”
Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).
Introducing collective crisis intelligence
Blogpost by Annemarie Poorterman et al: “…It has been estimated that over 600,000 Syrians have been killed since the start of the civil war, including tens of thousands of civilians killed in airstrike attacks. Predicting where and when strikes will occur and issuing time-critical warnings enabling civilians to seek safety is an ongoing challenge. It was this problem that motivated the development of Sentry Syria, an early warning system that alerts citizens to a possible airstrike. Sentry uses acoustic sensor data, reports from on-the-ground volunteers, and open media ‘scraping’ to detect warplanes in flight. It uses historical data and AI to validate the information from these different data sources and then issues warnings to civilians 5-10 minutes in advance of a strike via social media, TV, radio and sirens. These extra minutes can be the difference between life and death.
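A minimal sketch of the kind of multi-source validation described above might look as follows. The source names, confidence weights, threshold and time window are all hypothetical, not Sentry's actual parameters; the point is the design: a warning fires only when enough independent sources agree within a short window, so a single noisy source cannot trigger a false alarm.

```python
from datetime import datetime, timedelta

# Hypothetical confidence weights for each detection channel.
SOURCE_WEIGHT = {"acoustic_sensor": 0.6, "ground_observer": 0.5, "media_report": 0.3}
ALERT_THRESHOLD = 0.8   # combined confidence needed before warning civilians
WINDOW = timedelta(minutes=5)

def should_alert(detections, now):
    """Fuse recent detections of a warplane heading toward one area.

    `detections` is a list of (timestamp, source) tuples. Duplicate
    reports from the same source are collapsed, so the threshold can
    only be crossed by corroboration across distinct channels reporting
    within the time window."""
    recent_sources = {src for ts, src in detections if now - ts <= WINDOW}
    confidence = sum(SOURCE_WEIGHT.get(src, 0.0) for src in recent_sources)
    return confidence >= ALERT_THRESHOLD
```

In a real system this gate would sit in front of the dissemination step (social media, TV, radio, sirens), trading a small delay for far fewer false alarms.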
Sentry Syria is just one example of an emerging approach in humanitarian response that we call collective crisis intelligence (CCI). CCI methods combine the collective intelligence (CI) of local community actors (e.g. volunteer plane spotters in the case of Sentry) with a wide range of additional data sources, artificial intelligence (AI) and predictive analytics to support crisis management and reduce the devastating impacts of humanitarian emergencies…(More)”
Enrollment algorithms are contributing to the crises of higher education
Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges tailor the cost of attendance to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.
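The two-stage process can be illustrated with a toy model: stage one predicts an enrollment probability that rises with the scholarship offered, and stage two picks the offer that maximizes expected net tuition revenue. The functional form and every number below are hypothetical, not drawn from any vendor's actual model:

```python
import math

def enroll_probability(base_logit, aid_sensitivity, scholarship):
    """Stage 1 (illustrative): a logistic prediction of whether a
    prospective student enrolls, shifted upward by the scholarship
    offered (in dollars)."""
    return 1 / (1 + math.exp(-(base_logit + aid_sensitivity * scholarship / 1000)))

def best_offer(base_logit, aid_sensitivity, tuition, offers):
    """Stage 2 (illustrative): choose the scholarship amount that
    maximizes expected net tuition revenue,
    P(enroll | offer) * (tuition - offer)."""
    return max(
        offers,
        key=lambda s: enroll_probability(base_logit, aid_sensitivity, s) * (tuition - s),
    )
```

Even this toy version makes the paper's concern visible: the optimization target is the college's revenue, not the student's outcome, and students whose predicted willingness to pay is high can rationally be offered less aid.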
Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.
This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use predicted likelihood to enroll in either the admissions process or in awarding need-based aid—these determinations should only be made based on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. Colleges should also examine how scholarship changes affect students’ likelihood to graduate, or whether they may deepen inequities between student populations. Colleges should also ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should consider enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.