UN urges moratorium on use of AI that imperils human rights


Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior, and certain AI-based tools that categorize people into clusters based on ethnicity or gender.

AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).

Introducing collective crisis intelligence


Blogpost by Annemarie Poorterman et al: “…It has been estimated that over 600,000 Syrians have been killed since the start of the civil war, including tens of thousands of civilians killed in airstrike attacks. Predicting where and when strikes will occur and issuing time-critical warnings enabling civilians to seek safety is an ongoing challenge. It was this problem that motivated the development of Sentry Syria, an early warning system that alerts citizens to a possible airstrike. Sentry uses acoustic sensor data, reports from on-the-ground volunteers, and open media ‘scraping’ to detect warplanes in flight. It uses historical data and AI to validate the information from these different data sources and then issues warnings to civilians 5-10 minutes in advance of a strike via social media, TV, radio and sirens. These extra minutes can be the difference between life and death.
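The validation step described above — cross-checking acoustic sensors, volunteer reports, and scraped media before sounding an alarm — can be sketched as a simple fusion of independent detection sources. This is an illustrative sketch only, not Sentry's actual code; the source names, confidence values, and threshold are all hypothetical.

```python
# Illustrative sketch (not Sentry's actual system): fuse independent
# detection sources and decide whether to issue a warning.
# All source names and confidence values are hypothetical.

def fuse_detections(confidences):
    """Combine per-source detection probabilities.

    Assuming the sources err independently, the chance that *all* of
    them are false alarms is the product of their complements.
    """
    p_all_false = 1.0
    for p in confidences.values():
        p_all_false *= (1.0 - p)
    return 1.0 - p_all_false

def should_warn(confidences, threshold=0.9):
    """Warn only when fused confidence clears a threshold,
    trading false alarms against missed strikes."""
    return fuse_detections(confidences) >= threshold

# Example: an acoustic sensor, a volunteer spotter report, and a
# scraped media mention, each with its own confidence estimate.
readings = {"acoustic": 0.7, "spotter": 0.8, "media": 0.4}
```

Here no single source is conclusive, but the fused confidence (about 0.96) clears the bar — the same reason combining weak, independent signals lets a system warn earlier than any one sensor could alone.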

Sentry Syria is just one example of an emerging approach in humanitarian response that we call collective crisis intelligence (CCI). CCI methods combine the collective intelligence (CI) of local community actors (e.g. volunteer plane spotters in the case of Sentry) with a wide range of additional data sources, artificial intelligence (AI) and predictive analytics to support crisis management and reduce the devastating impacts of humanitarian emergencies….(More)”

Enrollment algorithms are contributing to the crises of higher education


Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges tailor the cost of attendance to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.
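The two-stage process the paper describes can be sketched in miniature: stage one scores each admitted student's probability of enrolling; stage two spends a fixed scholarship budget where a discount is predicted to change behavior most. This is a deliberately simplified sketch, not any vendor's actual model — the students, probabilities, predicted lifts, and greedy allocation rule are all hypothetical.

```python
# Simplified sketch of the two-stage enrollment management logic.
# Stage 1 (assumed done upstream) produces, for each admitted student,
# a baseline enrollment probability and the predicted lift from an
# award. Stage 2 below greedily allocates a fixed budget by lift.
# All data and parameters are hypothetical.

def allocate_scholarships(students, budget, award=5000):
    """students: list of dicts with 'id', 'p_enroll' (baseline
    probability of enrolling) and 'lift' (predicted gain in that
    probability from one award). Returns the ids offered an award."""
    # Spend the budget on students whose behavior the award changes most.
    ranked = sorted(students, key=lambda s: s["lift"], reverse=True)
    offers = set()
    for s in ranked:
        if budget < award:
            break
        offers.add(s["id"])
        budget -= award
    return offers

cohort = [
    {"id": "A", "p_enroll": 0.9, "lift": 0.02},  # likely to come anyway
    {"id": "B", "p_enroll": 0.4, "lift": 0.30},  # price-sensitive
    {"id": "C", "p_enroll": 0.5, "lift": 0.25},
]
offers = allocate_scholarships(cohort, budget=10000)
```

Even this toy version makes the paper's concern concrete: aid flows to students whose enrollment decision is price-sensitive, not to those with the greatest need — student A gets nothing precisely because the model predicts they will come anyway.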

Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.

This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use predicted likelihood to enroll in either the admissions process or in awarding need-based aid—these determinations should only be made based on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. Colleges should also examine how scholarship changes affect students’ likelihood to graduate, or whether they may deepen inequities between student populations. They should also ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should consider enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.

The Future of Citizen Engagement: Rebuilding the Democratic Dialogue


Report by the Congressional Management Foundation: “The Future of Citizen Engagement: Rebuilding the Democratic Dialogue” explores the current challenges to engagement and trust between Senators and Representatives and their constituents; proposes principles for rebuilding that fundamental democratic relationship; and describes innovative practices in federal, state, local, and international venues that Congress could look to for modernizing the democratic dialogue.


The report answers the following questions:

  • What factors have contributed to the deteriorating state of communications between citizens and Congress?
  • What principles should guide Congress as it tries to transform its communications systems and practices from administrative transactions to substantive interactions with the People it represents?
  • What models at the state and international level can Congress follow as it modernizes and rebuilds the democratic dialogue?

The findings and recommendations in this report are based on CMF’s long history of researching the relationship between Members of Congress and their constituents…(More)”.

New report confirms positive momentum for EU open science


Press release: “The Commission released the results and datasets of a study monitoring the open access mandate in Horizon 2020. With a steady increase over the years and an average open access rate of 83% for scientific publications, the European Commission is at the forefront of research and innovation funders, concluded the consortium formed by the analysis company PPMI (Lithuania), research and innovation centre Athena (Greece) and Maastricht University (the Netherlands).

The Commission sought advice on a process and reliable metrics through which to monitor all aspects of the open access requirements in Horizon 2020, and to inform how best to do so for Horizon Europe – which has a more stringent and comprehensive set of rights and obligations for Open Science.

The key findings of the study indicate that the European Commission’s early leadership in Open Science policy has paid off. The Excellent Science pillar in Horizon 2020 has led the success story, with an open access rate of 86%. Among the leaders within this pillar are the European Research Council (ERC) and the Future and Emerging Technologies (FET) programme, with open access rates of over 88%.

Other interesting facts:

  • In terms of article processing charges (APCs), the study estimated the average cost of publishing an open access article in Horizon 2020 to be around EUR 2,200. APCs for articles published in ‘hybrid’ journals (a cost that will no longer be eligible under Horizon Europe) have a higher average cost of EUR 2,600.
  • Compliance in terms of depositing open access publications in a repository (even when publishing open access through a journal) is relatively high (81.9%), indicating that the current policy of depositing is well understood and implemented by researchers.
  • Regarding licences, 49% of Horizon 2020 publications were published using Creative Commons (CC) licences, which permit reuse (with various levels of restrictions) while 33% use publisher-specific licences that place restrictions on text and data mining (TDM).
  • Institutional repositories have responded in a satisfactory manner to the challenge of providing FAIR access to their publications, amending internal processes and metadata to incorporate necessary changes: 95% of deposited publications include in their metadata some type of persistent identifier (PID).
  • Datasets in repositories present a low compliance level as only approximately 39% of Horizon 2020 deposited datasets are findable (i.e., the metadata includes a PID and URL to the data file), and only around 32% of deposited datasets are accessible (i.e., the data file can be fetched using a URL link in the metadata). Horizon Europe will hopefully achieve better results.
  • The study also identified gaps in the existing Horizon 2020 open access monitoring data, which pose further difficulties in assessing compliance. Self-reporting by beneficiaries also highlighted a number of issues…(More)”
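The findability and accessibility definitions used in the bullet points above lend themselves to a simple compliance check. The sketch below is illustrative only — the metadata field names are hypothetical, and a real monitor would fetch each URL rather than trust a pre-computed flag.

```python
# Minimal sketch of the findability/accessibility checks the study
# describes: a deposited dataset counts as "findable" when its metadata
# carries a persistent identifier (PID) and a URL to the data file, and
# "accessible" when that URL actually resolves to the file.
# Field names and example records are hypothetical.

def is_findable(metadata):
    return bool(metadata.get("pid")) and bool(metadata.get("data_url"))

def is_accessible(metadata):
    # Stand-in for an HTTP check on the data file link.
    return is_findable(metadata) and metadata.get("url_resolves", False)

deposits = [
    {"pid": "hypothetical-pid-1", "data_url": "https://example.org/a.csv",
     "url_resolves": True},
    {"pid": "hypothetical-pid-2", "data_url": None},            # no file link
    {"pid": None, "data_url": "https://example.org/c.csv",      # no PID
     "url_resolves": True},
]
findable_rate = sum(is_findable(d) for d in deposits) / len(deposits)
```

Run over a whole corpus of deposited metadata records, rates computed this way are exactly the kind of figure the study reports (39% findable, 32% accessible).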

Commission publishes study on the impact of Open Source on the European economy


Press Release (European Commission): “It is estimated that companies located in the EU invested around €1 billion in Open Source Software in 2018, which brought about a positive impact on the European economy of between €65 and €95 billion.

The study predicts that an increase of 10% in contributions to Open Source Software code would generate an additional 0.4% to 0.6% of GDP annually, as well as more than 600 additional ICT start-ups in the EU. Case studies reveal that by procuring Open Source Software instead of proprietary software, the public sector could reduce the total cost of ownership, avoid vendor lock-in and thus increase its digital autonomy.

The study gives a number of specific public policy recommendations aimed at achieving a digitally autonomous public sector, open research and innovation enabling European growth, and a digitised and internally competitive industry. In the long-term, the findings of the study may be used to reinforce the open source dimension in the development of future software and hardware policies for the EU industry.

Moreover, since October 2020 the Commission has its own new Open Source Software Strategy 2020-2023, which further encourages and leverages the transformative, innovative and collaborative potential of open source,  in view of achieving the goals of the overarching Digital Strategy of the Commission and contributing to the Digital Europe programme. The Commission’s Strategy puts a special emphasis on the sharing and reuse of software solutions, knowledge and expertise as well as on increasing the use of open source in information technologies and other strategic areas….(More)”.

Afyanet


About: “AfyaNet is a voluntary, non-profit network of National Health Institutes and Research Centers seeking to leverage crowdsourced health data for disease surveillance and forecasting. Participation in AfyaNet is free for countries.

We aim to use technology and digital solutions to radically enhance how traditional disease surveillance systems function and the ways we can model epidemics.

Our vision is to create a common framework to collect standardized real-time data from the general population, allowing countries to leapfrog existing hurdles in disease surveillance and information sharing.

Our solution is an Early Warning System for Health based on participatory data gathering. A common, real-time framework for disease collection will help countries identify and forecast outbreaks faster and more effectively.

Crowdsourced data is gathered directly from citizens, then aggregated, anonymized, and processed in a cloud-based data lake. Our high-performance computing architecture analyzes the data and creates valuable disease spread models, which in turn provide alerts and notifications to participating countries and help public health authorities make evidence-based decisions….(More)”
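The "aggregated, anonymized" step described above can be sketched as follows. AfyaNet's actual pipeline is not public, so this is a hedged illustration: the report fields, identifiers, and regions are all hypothetical, and it shows only the core idea of stripping direct identifiers before rolling reports up into per-region counts.

```python
# Illustrative sketch (not AfyaNet's actual pipeline) of aggregating
# and anonymizing crowdsourced symptom reports. All field names and
# example records are hypothetical.
from collections import Counter

def anonymize(report):
    # Keep only the coarse fields the disease models need;
    # direct identifiers (name, phone) never leave this step.
    return {"region": report["region"], "symptom": report["symptom"]}

def aggregate(reports):
    """Count reports per (region, symptom) pair."""
    return Counter((r["region"], r["symptom"]) for r in map(anonymize, reports))

reports = [
    {"name": "J. D.", "phone": "x", "region": "north", "symptom": "fever"},
    {"name": "A. N.", "phone": "y", "region": "north", "symptom": "fever"},
    {"name": "B. K.", "phone": "z", "region": "south", "symptom": "cough"},
]
counts = aggregate(reports)
```

Aggregated counts of this shape — rather than individual records — are what a downstream early warning model would consume, which is what makes the participatory approach compatible with protecting the citizens who contribute the data.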

The geography of AI


Report by Mark Muro and Sifan Liu: “Much of the U.S. artificial intelligence (AI) discussion revolves around futuristic dreams of both utopia and dystopia. From extreme to extreme, the promises range from solutions to global climate change to a “robot apocalypse.”

However, it bears remembering that AI is also becoming a real-world economic fact with major implications for national and regional economic development as the U.S. crawls out of the COVID-19 pandemic.

Based on advanced uses of statistics, algorithms, and fast computer processing, AI has become a focal point of U.S. innovation debates. Even more, AI is increasingly viewed as the next great “general purpose technology”—one that has the power to boost the productivity of sector after sector of the economy.

All of which is why state and city leaders are increasingly assessing AI for its potential to spur economic growth. Such leaders are analyzing where their regions stand and what they need to do to ensure their locations are not left behind.

In response to such questions, this analysis examines the extent, location, and concentration of AI technology creation and business activity in U.S. metropolitan areas.

Employing seven basic measures of AI capacity, the report benchmarks regions on the basis of their core AI assets and capabilities as they relate to two basic dimensions: AI research and AI commercialization. In doing so, the assessment categorizes metro areas into five tiers of regional AI involvement and extracts four main findings reflecting that involvement…(More)”.
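The benchmarking exercise described above — score each metro on a set of capacity measures, combine the scores, and bucket metros into tiers — can be sketched in a few lines. This is a generic illustration, not the report's actual methodology: the measures, values, equal-sized buckets, and three-tier cutoff are all hypothetical (the report itself uses seven measures and five tiers).

```python
# Generic sketch of scoring regions on several measures and bucketing
# them into tiers. Measures, sample values, and the equal-sized-bucket
# rule are hypothetical, not the report's actual method.

def normalize(values):
    """Min-max normalize one measure across all metros to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def tier_metros(metros, n_tiers=5):
    """metros: dict of name -> list of raw measure values (same order
    for every metro). Returns dict of name -> tier (1 = strongest)."""
    names = list(metros)
    cols = list(zip(*metros.values()))                  # one tuple per measure
    normed = zip(*(normalize(c) for c in cols))         # back to per-metro rows
    scores = {n: sum(row) / len(row) for n, row in zip(names, normed)}
    ranked = sorted(names, key=lambda n: scores[n], reverse=True)
    size = max(1, len(ranked) // n_tiers)               # equal-sized buckets
    return {n: min(i // size + 1, n_tiers) for i, n in enumerate(ranked)}

sample = {            # hypothetical metros with three capacity measures
    "Metro A": [90, 80, 70],
    "Metro B": [50, 55, 60],
    "Metro C": [10, 20, 5],
}
tiers = tier_metros(sample, n_tiers=3)
```

Normalizing each measure before averaging keeps a single large-magnitude measure (say, total R&D spending) from swamping the composite — one reason benchmarking exercises of this kind rarely compare raw counts directly.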

Empowered Data Societies: A Human-Centric Approach to Data Relationships


World Economic Forum: “Despite ever-increasing supply and demand, data often remains siloed and unavailable to those who seek to use it to benefit people, communities and society.

In this whitepaper, the World Economic Forum and the City of Helsinki propose a new, human-centric approach to making data better available. By prioritizing the values, needs and expectations of people, policy-makers can drive meaningful actions with positive outcomes for society while maintaining the utmost respect for the people who are part of it.

This paper provides frameworks, insights, and best practices for public sector employees and elected officials – from mayors and ministers to data scientists and service developers – to adapt and build systems that use data in responsible and innovative ways….(More)”.

Roadmap to social impact: Your step-by-step guide to planning, measuring and communicating social impact


Roadmap developed by Ioana Ramia, Abigail Powell, Katrina Stratton, Claire Stokes, Ariella Meltzer, and Kristy Muir: “…is a step-by-step guide to support you and your organisation through the process of outcomes measurement and evaluation.

While it’s not a silver bullet for outcomes measurement or impact assessment, the Roadmap provides you with eight steps to understand the context in which you operate, who you engage with and the social issue you are addressing, how you address this social issue, what the intended changes are, how and when to measure those changes, and how to communicate and use your findings to further improve your work and social impact.

It introduces some established techniques for data collection and analysis, but it is not a guide to research methods. A list of resources is also provided at the end of the guide, including tools for stakeholder engagement, survey development, interview questionnaires and data analysis.

The Roadmap is for everyone working towards the creation of positive social impact in Australia who wants to measure the change they are making for individuals, organisations and communities….(More)”.