WHO, Germany open Hub for Pandemic and Epidemic Intelligence in Berlin


Press Release: “To better prepare and protect the world from global disease threats, H.E. German Federal Chancellor Dr Angela Merkel and Dr Tedros Adhanom Ghebreyesus, World Health Organization Director-General, will today inaugurate the new WHO Hub for Pandemic and Epidemic Intelligence, based in Berlin. 

“The world needs to be able to detect new events with pandemic potential and to monitor disease control measures on a real-time basis to create effective pandemic and epidemic risk management,” said Dr Tedros. “This Hub will be key to that effort, leveraging innovations in data science for public health surveillance and response, and creating systems whereby we can share and expand expertise in this area globally.” 

The WHO Hub, which is receiving an initial investment of US$ 100 million from the Federal Republic of Germany, will harness broad and diverse partnerships across many professional disciplines, and the latest technology, to link the data, tools and communities of practice so that actionable data and intelligence are shared for the common good.

The WHO Hub is part of WHO’s Health Emergencies Programme and will be a new collaboration of countries and partners worldwide, driving innovations to increase the availability of key data; develop state-of-the-art analytic tools and predictive models for risk analysis; and link communities of practice around the world. Critically, the WHO Hub will support the work of public health experts and policy-makers in all countries with the tools needed to forecast, detect and assess epidemic and pandemic risks so they can take rapid decisions to prevent and respond to future public health emergencies.

“Despite decades of investment, COVID-19 has revealed the great gaps that exist in the world’s ability to forecast, detect, assess and respond to outbreaks that threaten people worldwide,” said Dr Michael Ryan, Executive Director of WHO’s Health Emergencies Programme. “The WHO Hub for Pandemic and Epidemic Intelligence is designed to develop the data access, analytic tools and communities of practice to fill these very gaps, promote collaboration and sharing, and protect the world from such crises in the future.” 

The Hub will work to:

  • Enhance methods for access to multiple data sources vital to generating signals and insights on disease emergence, evolution and impact;
  • Develop state-of-the-art tools to process, analyze and model data for detection, assessment and response;
  • Provide WHO, our Member States, and partners with these tools to underpin better, faster decisions on how to address outbreak signals and events; and
  • Connect and catalyze institutions and networks developing disease outbreak solutions for the present and future.

Dr Chikwe Ihekweazu, currently Director-General of the Nigeria Centre for Disease Control, has been appointed to lead the WHO Hub….(More)” 

Alliance formed to create new professional standards for data science


Press Release: “A new alliance has been formed to create industry-wide professional standards for data science. ‘The Alliance for Data Science Professionals’ is defining the standards needed to ensure an ethical and well-governed approach so the public, organisations and governments can have confidence in how their data is being used. 

While the skills of data scientists are increasingly in demand, there is currently no professional framework for those working in the field. These new industry-wide standards, which will be finalised by the autumn, look to address current issues, such as data breaches, the misuse of data in modelling and bias in artificial intelligence. They can give people confidence that their data is being used ethically, stored safely and analysed robustly. 

The Alliance members, who initially convened in July 2020, are the Royal Statistical Society, BCS, The Chartered Institute for IT, the Operational Research Society, the Institute of Mathematics and its Applications, the Alan Turing Institute and the National Physical Laboratory (NPL). They are supported by the Royal Academy of Engineering and the Royal Society.  

Since convening, the Alliance has worked with volunteers and stakeholders to develop draft standards for individuals, standards for universities seeking accreditation of their courses and a certification process that will enable both individuals and education providers to gain recognition based on skills and knowledge within data science.  

Governed by a memorandum of understanding, the Alliance is committed to:  

  • Defining the standards of professional competence and behaviour expected of people who work with data which impacts life and livelihoods. These include data scientists, data engineers, data analysts and data stewards.  
  • Using an open-source process to maintain and update the standards. 
  • Delivering these standards as data science certifications offered by the Alliance members to their professional members, with processes to hold certified members accountable for their professional status in this area. 
  • Using these standards as criteria for Alliance members to accredit data science degrees, and data science modules of associated degrees, as contributing to certification. 
  • Creating a single searchable public register of certified data science professionals….(More)”.

Leveraging Digitalisation for a Resilient, Strong, Sustainable and Inclusive Recovery


G20 Declaration: “…We recognise the importance of data-driven innovation and the growing demand for data across society. Coherent and responsible data governance frameworks that guide the reuse and sharing of data should ensure confidence and security, privacy, personal data protection and the protection and enforcement of intellectual property rights, taking into account differences in national legal systems. This could be accompanied by policies that foster investments in data infrastructure and architecture that have positive spillovers across industries and society. Increased, open and accessible government data could help encourage innovation, in particular among MSMEs….

We call for close coordination to promote statistical guidance and move from outcome measures of the digital gender divide to the analysis of enabling and disabling factors. To this end, we acknowledge the importance of developing sound statistical infrastructures, including through dedicated statistical surveys, appropriate domestic, national and international legal and technical frameworks for data access and use, while protecting personal data and privacy, strengthening of NSOs’ capabilities in using linked data, increased availability of open data, and enhanced collaboration with the private sector and relevant stakeholders, including in exploring alternative sources of data and data collection practices…

Moreover, rapid technological development in emerging technologies can offer the potential to transform the way in which G20 governments design and deliver public policies and services. We reaffirm our commitment to foster the conditions and competencies necessary to unlock the potential of digital technologies and data in order to ensure the resilience, security, human centricity, and sustainability of our governments, while managing risks related to security, data protection, including personal data, and privacy, and bias in algorithms. Particular attention should be paid to bridging all kinds of digital divides….(More) (PressRelease)”.

New Study Uses Crowdsourcing to Strengthen American Democracy


Press Release: “Americans have always disagreed about politics, but now levels of anti-democratic attitudes, support for partisan violence, and partisan animosity have reached concerning levels. While there are many ideas for tackling these problems, they have never been gathered, tested, and evaluated in a unified effort. To address this gap, the Stanford Polarization and Social Change Lab is launching a major new initiative. The Strengthening Democracy Challenge will collect and rigorously test up to 25 interventions to reduce anti-democratic attitudes, support for partisan violence, and partisan animosity in one massive online experiment with up to 30,000 participants. Interventions can be contributed by academics, practitioners, or others with interest in strengthening democratic principles in the US. The researchers who organize the challenge — a multidisciplinary team with members at Stanford, MIT, Northwestern, and Columbia Universities — believe that crowdsourcing ideas, combined with the rigor of large-scale experimentation, can help address issues as substantial and complex as these….

Researchers with diverse backgrounds and perspectives are invited to submit interventions. The proposed interventions must be short, doable in an online form, and follow the ethical guidelines of the challenge. Academic and practitioner experts will rate the submissions and an editorial board will narrow down the 25 best submissions to be tested, taking novelty and expected success of the ideas into account. Co-organizers of the challenge include James Druckman, Payson S. Wild Professor of Political Science at Northwestern University; David Rand, the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at MIT; James Chu, Assistant Professor of Sociology at Columbia University; and Nick Stagnaro, Post-Doctoral Fellow at MIT. The organizing team is supported by the Polarization and Social Change Lab’s Chrystal Redekopp, Joe Mernyk, and Sophia Pink.

The study participants will be a large sample of up to 30,000 self-identified Republicans and Democrats, nationally representative on several major demographic benchmarks….(More)”.

Commission proposes measures to boost data sharing and support European data spaces


Press Release: “To better exploit the potential of ever-growing data in a trustworthy European framework, the Commission today proposes new rules on data governance. The Regulation will facilitate data sharing across the EU and between sectors to create wealth for society, increase control and trust of both citizens and companies regarding their data, and offer an alternative European model to data handling practice of major tech platforms.

The amount of data generated by public bodies, businesses and citizens is constantly growing. It is expected to multiply by five between 2018 and 2025. These new rules will allow this data to be harnessed and will pave the way for sectoral European data spaces to benefit society, citizens and companies. In the Commission’s data strategy of February this year, nine such data spaces were proposed, ranging from industry to energy, and from health to the European Green Deal. They will, for example, contribute to the green transition by improving the management of energy consumption, make delivery of personalised medicine a reality, and facilitate access to public services.

The Regulation includes:

  • A number of measures to increase trust in data sharing, as the lack of trust is currently a major obstacle and results in high costs.
  • New EU rules on neutrality to allow novel data intermediaries to function as trustworthy organisers of data sharing.
  • Measures to facilitate the reuse of certain data held by the public sector. For example, the reuse of health data could advance research to find cures for rare or chronic diseases.
  • Means to give Europeans control over the use of the data they generate, by making it easier and safer for companies and individuals to voluntarily make their data available for the wider common good under clear conditions….(More)”.

European Health Data Space


European Commission Press Release: “The set-up of the European Health Data Space will be an integral part of building a European Health Union, a process launched by the Commission today with a first set of proposals to reinforce preparedness and response during health crises. This is also a direct follow-up to the Data strategy adopted by the Commission in February this year, in which the Commission had already stressed the importance of creating European data spaces, including on health….

In this perspective, as part of the implementation of the Data strategy, a data governance act is set to be presented later this year, which will support the reuse of sensitive public data such as health data. A dedicated legislative proposal on a European health data space is planned for next year, as set out in the 2021 Commission work programme.

As first steps, the following activities starting in 2021 will pave the way for better data-driven health care in Europe:

  • The Commission proposes a European Health Data Space in 2021;
  • A Joint Action with 22 Member States to propose options on governance, infrastructure, data quality and data solidarity, and on empowering citizens with regard to secondary health data use in the EU;
  • Investments to support the European Health Data Space under the EU4Health programme, as well as common data spaces and digital health related innovation under Horizon Europe and the Digital Europe programmes;
  • Engagement with relevant actors to develop targeted Codes of Conduct for secondary health data use;
  • A pilot project to demonstrate the feasibility of cross-border analysis for healthcare improvement, regulation and innovation;
  • Other EU funding opportunities for the digital transformation of health and care will be available to Member States as of 2021 under the Recovery and Resilience Facility, the European Regional Development Fund, the European Social Fund+ and InvestEU.

The set of proposals adopted by the Commission today to strengthen the EU’s crisis preparedness and response, taking the first steps towards a European Health Union, also pave the way for the participation of the European Medicines Agency (EMA) and the European Centre for Disease Prevention and Control (ECDC) in the future European Health Data Space infrastructure, along with research institutes, public health bodies, and data permit authorities in the Member States….(More)”.

NIH Releases New Policy for Data Management and Sharing


NIH Blogpost by Carrie Wolinetz: “Today, nearly twenty years after the publication of the Final NIH Statement on Sharing Research Data in 2003, we have released a Final NIH Policy for Data Management and Sharing. This represents the agency’s continued commitment to share and make broadly available the results of publicly funded biomedical research. We hope it will be a critical step in moving towards a culture change, in which data management and sharing is seen as integral to the conduct of research. Responsible data management and sharing is good for science; it maximizes availability of data to the best and brightest minds, underlies reproducibility, honors the contribution of research participants by ensuring their data are both protected and fully utilized, and provides an element of transparency to ensure public trust and accountability.

This policy has been years in the making and has benefited enormously from feedback and input from stakeholders throughout the process. We are grateful to all those who took the time to comment on the Request for Information or the Draft policy, or to participate in workshops or Tribal consultations. That thoughtful feedback has helped shape the Final policy, which we believe strikes a balance between reasonable expectations for data sharing and flexibility to allow for a diversity of data types and circumstances. How we incorporated public comments, and the decision points that led to the Final policy, are detailed in the Preamble to the DMS policy.

The Final policy applies to all research funded or conducted by NIH that results in the generation of scientific data. The Final Policy has two main requirements: (1) the submission of a Data Management and Sharing Plan (Plan); and (2) compliance with the approved Plan. We are asking for Plans at the time of submission of the application, because we believe planning and budgeting for data management and sharing needs to occur hand in hand with planning the research itself. NIH recognizes that science evolves throughout the research process, which is why we have built in the ability to update DMS Plans, but at the end of the day, we are expecting investigators and institutions to be accountable to the Plans they have laid out for themselves….

Anticipating that variation in readiness, and in recognition of the cultural change we are trying to seed, there is a two-year implementation period. This time will be spent developing the information, support, and tools that the biomedical enterprise will need to comply with this new policy. NIH has already provided additional supplementary information – on (1) elements of a data management and sharing plan; (2) allowable costs; and (3) selecting a data repository – in concert with the policy release….(More)”

New mathematical idea reins in AI bias towards making unethical and costly commercial choices


The University of Warwick: “Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging commercial choices—an ethical eye on AI.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.

The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a potentially significant economic penalty: if stakeholders find that such a strategy has been used, regulators may levy fines of billions of dollars, pounds or euros, customers may boycott the company, or both.

So in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.

Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and a simple formula to estimate its impact. They have laid out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1st July 2020….(More)”.

Techlash? America’s Growing Concern with Major Technology Companies


Press Release: “Just a few years ago, Americans were overwhelmingly optimistic about the power of new technologies to foster an informed and engaged society. More recently, however, that confidence has been challenged by emerging concerns over the role that internet and technology companies — especially social media — now play in our democracy.

A new Knight Foundation and Gallup study explores how much the landscape has shifted. This wide-ranging study confirms that, for Americans, the techlash is real, widespread, and bipartisan. From concerns about the spread of misinformation to election interference and data privacy, we’ve documented the deep pessimism of folks across the political spectrum who believe tech companies have too much power — and that they do more harm than good. 

Despite their shared misgivings, Americans are deeply divided on how best to address these challenges. This report explores the contours of the techlash in the context of the issues currently animating policy debates in Washington and Silicon Valley. Below are the main findings from the executive summary….

  • 77% of Americans say major internet and technology companies like Facebook, Google, Amazon and Apple have too much power.
  • Americans are equally divided among those who favor (50%) and oppose (49%) government intervention that would require internet and technology companies to break into smaller companies. 
  • Americans do not trust social media companies much (44%) or at all (40%) to make the right decisions about what content should or should not be allowed on online platforms.
  • However, they would still prefer the companies (55%) to make those decisions rather than the government (44%). …(More)

New privacy-protected Facebook data for independent research on social media’s impact on democracy


Chaya Nayak at Facebook: “In 2018, Facebook began an initiative to support independent academic research on social media’s role in elections and democracy. This first-of-its-kind project seeks to provide researchers access to privacy-preserving data sets in order to support research on these important topics.

Today, we are announcing that we have substantially increased the amount of data we’re providing to 60 academic researchers across 17 labs and 30 universities around the world. This release delivers on the commitment we made in July 2018 to share a data set that enables researchers to study information and misinformation on Facebook, while also ensuring that we protect the privacy of our users.

This new data release supplants data we released in the fall of 2019. That 2019 data set consisted of links that had been shared publicly on Facebook by at least 100 unique Facebook users. It included information about share counts, ratings by Facebook’s third-party fact-checkers, and user reporting on spam, hate speech, and false news associated with those links. We have expanded the data set to now include more than 38 million unique links with new aggregated information to help academic researchers analyze how many people saw these links on Facebook and how they interacted with that content – including views, clicks, shares, likes, and other reactions. We’ve also aggregated these shares by age, gender, country, and month. And, we have expanded the time frame covered by the data from January 2017 – February 2019 to January 2017 – August 2019.

With this data, researchers will be able to understand important aspects of how social media shapes our world. They’ll be able to make progress on the research questions they proposed, such as “how to characterize mainstream and non-mainstream online news sources in social media” and “studying polarization, misinformation, and manipulation across multiple platforms and the larger information ecosystem.”

In addition to the data set of URLs, researchers will continue to have access to CrowdTangle and Facebook’s Ad Library API to augment their analyses. Per the original plan for this project, outside of a limited review to ensure that no confidential or user data is inadvertently released, these researchers will be able to publish their findings without approval from Facebook.

We are sharing this data with researchers while continuing to prioritize the privacy of people who use our services. This new data set, like the data we released before it, is protected by a method known as differential privacy. Researchers have access to data tables from which they can learn about aggregated groups, but where they cannot identify any individual user. As Harvard University’s Privacy Tools project puts it:

“The guarantee of a differentially private algorithm is that its behavior hardly changes when a single individual joins or leaves the dataset — anything the algorithm might output on a database containing some individual’s information is almost as likely to have come from a database without that individual’s information. … This gives a formal guarantee that individual-level information about participants in the database is not leaked.” …(More)”
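Facebook has not published the exact algorithm behind these tables, but the guarantee quoted above is typically achieved by adding calibrated random noise to each released statistic. A minimal sketch of the classic Laplace mechanism for a count query follows; the function names and data are hypothetical, and this illustrates the general technique rather than Facebook's actual implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(rows: list, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one user's row changes the true count by at
    most 1 (sensitivity = 1), so Laplace noise with scale 1/epsilon
    is enough to mask any single individual's contribution.
    """
    true_count = float(sum(1 for r in rows if r))
    return true_count + laplace_noise(1.0 / epsilon)
```

The noise scale is the query's sensitivity (1 for a count) divided by the privacy parameter epsilon: a smaller epsilon means more noise and a stronger guarantee that the released number is almost as likely to have come from a dataset with any single user removed.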