Survey: Majority of Americans Willing to Share Their Most Sensitive Personal Data


Center for Data Innovation: “Most Americans (58 percent) are willing to allow third parties to collect at least some sensitive personal data, according to a new survey from the Center for Data Innovation.

While many surveys measure public opinions on privacy, few ask consumers about their willingness to make tradeoffs, such as sharing certain personal information in exchange for services or benefits they want. In this survey, the Center asked respondents whether they would allow a mobile app to collect their biometrics or location data for purposes such as making it easier to sign into an account or getting free navigational help, and it asked whether they would allow medical researchers to collect sensitive data about their health if it would lead to medical cures for their families or others. Only one-third of respondents (33 percent) were unwilling to let mobile apps collect either their biometrics or location data under any of the described scenarios. And overall, nearly 6 in 10 respondents (58 percent) were willing to let a third party collect at least one piece of sensitive personal data, such as biometric, location, or medical data, in exchange for a service or benefit….(More)”.

How Data Sharing Can Improve Frontline Worker Development


Digital Promise: “Frontline workers, or the workers who interact directly with customers and provide services in industries like retail, healthcare, food service, and hospitality, help make up the backbone of today’s workforce.

However, frontline workforce talent development presents numerous challenges. Frontline workers may not be receiving the education and training they need to advance in their careers and sustain gainful employment. They also likely do not have access to data regarding their own skills and learning, and do not know what skills employers seek in quality workers.

Today, Digital Promise, a nonprofit authorized by Congress to support comprehensive research and development of programs to advance innovation in education, launched “Tapping Data for Frontline Talent Development,” a new, interactive report that shares how the seamless and secure sharing of data is key to creating more effective learning and career pathways for frontline service workers.

The research revealed that the current learning ecosystem that serves frontline workers—which includes stakeholders like education and training providers, funders, and employers—is complex and siloed, and removes agency from the worker.

Although many data types are collected, in today’s system much of the data is duplicative and rarely used to inform impact and long-term outcomes. The processes and systems in the ecosystem do not support the flow of data between stakeholders or frontline workers.

And yet, data sharing systems and collaborations are beginning to emerge as providers, funders, and employers recognize the power in data-driven decision-making and the benefits to data sharing. Not only can data sharing help to improve programs and services, it can create more personalized interventions for education providers supporting frontline workers, and it can also improve talent pipelines for employers.

In addition to providing three case studies with valuable examples of employers, a community, and a state focused on driving change based on data, this new report identifies key recommendations that have the potential to move the current system toward a more data-driven, collaborative, worker-centered learning ecosystem, including:

  1. Creating awareness and demand among stakeholders
  2. Ensuring equity and inclusion for workers/learners through access and awareness
  3. Creating data sharing resources
  4. Advocating for data standards
  5. Advocating for policies and incentives
  6. Spurring the creation of technology systems that enable data sharing/interoperability

We invite you to read our new report today for more information, and sign up for updates on this important work….(More)”

Whatever happened to evidence-based policy making?


Speech by Professor Gary Banks: “One of the challenges in talking about EBPM (evidence-based policy making), which I had not fully appreciated last time, was that it means different things to different people, especially academics. As a result, disagreements, misunderstandings and controversies (or faux controversies) have abounded. And these may have contributed to the demise of the expression, if not the concept.

For example, some have interpreted the term EBPM so literally as to insist that the word “based” be replaced by “influenced”, arguing that policy decisions are rarely based on evidence alone. That of course is true, but few using the term (myself included) would have thought otherwise. And I am sure no-one in an audience such as this, especially in our nation’s capital, believes policy decisions could derive solely from evidence — or even rational analysis!

If you’ll pardon a quotation from my earlier address: “Values, interests, personalities, timing, circumstance and happenstance – in short, democracy – determine what actually happens” (EBPM: What is it? How do we get it?). Indeed it is precisely because of such multiple influences, that “evidence” has a potentially significant role to play.

So, adopting the position from Alice in Wonderland, I am inclined to stick with the term EBPM, which I choose to mean an approach to policy-making that makes systematic provision for evidence and analysis. Far from the deterministic straw man depicted in certain academic articles, it is an approach that seeks to achieve policy decisions that are better informed in a substantive sense, accepting that they will nevertheless ultimately be – and in a democracy need to be — political in nature.

A second and more significant area of debate concerns the meaning and value of “evidence” itself. There are a number of strands involved.

Evidentiary elitism?

One relates to methodology, and can be likened to the differences between the thresholds for a finding of guilt under civil and criminal law (“balance of probabilities” versus “beyond reasonable doubt”).

Some analysts have argued that, to be useful for policy, evidence must involve rigorous unbiased research techniques, the “gold standard” for which are “randomized control trials”. The “randomistas”, to use the term which headlines Andrew Leigh’s new book (Leigh, 2018), claim that only such a methodology is able to truly tell us “what works”.

However, adopting this exacting standard from the medical research world would leave policy makers with an excellent tool of limited application. Its forte is testing a specific policy or program relative to business as usual, akin to drug tests involving a placebo for a control group. And there are some inspiring examples of insights gained. But for many areas of public policy the technique is not practicable. Even where it is, it requires that a case has to some extent already been made. And while it can identify the extent to which a particular program “works”, it is less useful for understanding why, or whether something else might work even better.
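As a purely illustrative aside (not part of the speech), the sketch below shows the kind of estimate a randomised trial yields: the difference in mean outcomes between a randomly assigned treatment group and a business-as-usual control group. The programme, outcome, and effect size are all simulated.

```python
# Illustrative only: a minimal randomised-trial estimate of "what works" --
# the difference in mean outcomes between treatment and control groups.
# The programme, outcome, and effect size below are hypothetical and simulated.
import numpy as np

rng = np.random.default_rng(0)

n = 1_000
treated = rng.integers(0, 2, size=n).astype(bool)   # random assignment to the programme
baseline = rng.normal(50, 10, size=n)                # outcome under business as usual
true_effect = 2.5                                    # the (unknown) effect the trial estimates
outcome = baseline + true_effect * treated

ate = outcome[treated].mean() - outcome[~treated].mean()   # estimated average treatment effect
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())

print(f"Estimated effect: {ate:.2f} "
      f"(95% CI roughly {ate - 1.96 * se:.2f} to {ate + 1.96 * se:.2f})")
```

Such an estimate says how much the hypothetical programme shifted the outcome relative to the control group, but, as the speech notes, not why it worked or whether an alternative would have worked better.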

That is not to say that any evidence will do. Setting the quality bar too low is the bigger problem in practice and the notion of a hierarchy of methodologies is helpful. However, no such analytical tools are self-sufficient for policy-making purposes and in my view are best thought of as components of a “cost benefit framework” – one that enables comparisons of different options, employing those estimation techniques that are most fit for purpose. Though challenging to populate fully with monetized data, CBA provides a coherent conceptual basis for assessing the net social impacts of different policy choices – which is what EBPM must aspire to as its contribution to (political) policy decisions….(More)”.

Blockchain’s Occam problem


Report by Matt Higginson, Marie-Claude Nadeau, and Kausik Rajgopal: “Blockchain has yet to become the game-changer some expected. A key to finding the value is to apply the technology only when it is the simplest solution available.

Blockchain over recent years has been extolled as a revolution in business technology. In the nine years since its launch, companies, regulators, and financial technologists have spent countless hours exploring its potential. The resulting innovations have started to reshape business processes, particularly in accounting and transactions.

Amid intense experimentation, industries from financial services to healthcare and the arts have identified more than 100 blockchain use cases. These range from new land registries to KYC applications and smart contracts that enable actions from product processing to share trading. The most impressive results have seen blockchains used to store information, cut out intermediaries, and enable greater coordination between companies, for example in relation to data standards….
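To make the "store information, cut out intermediaries" idea concrete, here is a minimal sketch (not from the report) of the hash-linked structure a blockchain ledger relies on: each block stores a payload plus the hash of the previous block, so any retroactive edit breaks the chain and is detectable without a central gatekeeper. The land-registry entries are hypothetical.

```python
# Minimal hash-linked ledger sketch; the land-registry records are invented.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's contents deterministically (stable key order).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, payload: dict) -> None:
    # Each new block commits to the hash of the block before it.
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "payload": payload, "prev_hash": prev_hash})

def chain_is_valid(chain: list) -> bool:
    # Recompute every link; any edited block breaks the one after it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, {"parcel": "A-17", "owner": "Alice"})
append_block(ledger, {"parcel": "A-17", "owner": "Bob"})

ledger[0]["payload"]["owner"] = "Mallory"   # tamper with history...
print(chain_is_valid(ledger))               # ...and validation now fails: False
```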

There is a clear sense that blockchain is a potential game-changer. However, there are also emerging doubts. A particular concern, given the amount of money and time spent, is that little of substance has been achieved. Of the many use cases, a large number are still at the idea stage, while others are in development but with no output. The bottom line is that despite billions of dollars of investment, and nearly as many headlines, evidence for a practical scalable use for blockchain is thin on the ground.

Infant technology

From an economic theory perspective, the stuttering blockchain development path is not entirely surprising. It is an infant technology that is relatively unstable, expensive, and complex. It is also unregulated and selectively distrusted. Classic lifecycle theory suggests the evolution of any industry or product can be divided into four stages: pioneering, growth, maturity, and decline (exhibit). Stage 1 is when the industry is getting started, or a particular product is brought to market. This is ahead of proven demand and often before the technology has been fully tested. Sales tend to be low and return on investment is negative. Stage 2 is when demand begins to accelerate, the market expands and the industry or product “takes off.”

Exhibit: Blockchain is struggling to emerge from the pioneering stage.

Across its many applications, blockchain arguably remains stuck at stage 1 in the lifecycle (with a few exceptions). The vast majority of proofs of concept (POCs) are in pioneering mode (or being wound up) and many projects have failed to get to Series C funding rounds.

One reason for the lack of progress is the emergence of competing technologies. In payments, for example, it makes sense that a shared ledger could replace the current highly intermediated system. However, blockchains are not the only game in town. Numerous fintechs are disrupting the value chain. Of nearly $12 billion invested in US fintechs last year, 60 percent was focused on payments and lending. SWIFT’s global payments innovation initiative (GPI), meanwhile, is addressing initial pain points through higher transaction speeds and increased transparency, building on bank collaboration….(More)” (See also: Blockchange)

A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework


Report by Karen Yeung: “This study was commissioned by the Council of Europe’s Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). It was prompted by concerns about the potential adverse consequences of advanced digital technologies (including artificial intelligence (‘AI’)), particularly their impact on the enjoyment of human rights and fundamental freedoms. This draft report seeks to examine the implications of these technologies for the concept of responsibility, and this includes investigating where responsibility should lie for their adverse consequences. In so doing, it seeks to understand (a) how human rights and fundamental freedoms protected under the ECHR may be adversely affected by the development of AI technologies and (b) how responsibility for those risks and consequences should be allocated. 

Its methodological approach is interdisciplinary, drawing on concepts and academic scholarship from the humanities, the social sciences and, to a more limited extent, from computer science. It concludes that, if we are to take human rights seriously in a hyperconnected digital age, we cannot allow the power of our advanced digital technologies and systems, and those who develop and implement them, to be accrued and exercised without responsibility. Nations committed to protecting human rights must therefore ensure that those who wield and derive benefits from developing and deploying these technologies are held responsible for their risks and consequences. This includes obligations to ensure that there are effective and legitimate mechanisms that will operate to prevent and forestall violations to human rights which these technologies may threaten, and to attend to the health of the larger collective and shared socio-technical environment in which human rights and the rule of law are anchored….(More)”.

Data Policy in the Fourth Industrial Revolution: Insights on personal data


Report by the World Economic Forum: “Development of comprehensive data policy necessarily involves trade-offs. Cross-border data flows are crucial to the digital economy. The use of data is critical to innovation and technology. However, to engender trust, we need to have appropriate levels of protection in place to ensure privacy, security and safety. Over 120 laws in effect across the globe today provide differing levels of protection for data, but few anticipated…

Data Policy in the Fourth Industrial Revolution: Insights on personal data, a paper by the World Economic Forum in collaboration with the Ministry of Cabinet Affairs and the Future, United Arab Emirates, examines the relationship between risk and benefit, recognizing the impact of culture, values and social norms. This work is a start toward developing a comprehensive data policy toolkit and knowledge repository of case studies for policy makers and data policy leaders globally….(More)”.

Citizen science for environmental policy: Development of an EU-wide inventory and analysis of selected practices


EU Science Hub: “Citizen science is the non-professional involvement of volunteers in the scientific process, whether in the data collection phase or in other phases of the research.

It can be a powerful tool for environmental management that has the potential to inform an increasingly complex environmental policy landscape and to meet the growing demands from society for more participatory decision-making.

While there is growing interest from international bodies and national governments in citizen science, the evidence that it can successfully contribute to environmental policy development, implementation, evaluation or compliance remains scant.

Central to elucidating this question is a better understanding of the benefits delivered by citizen science: that is, determining to what extent these benefits can contribute to environmental policy, and establishing whether projects that provide policy support also co-benefit science and encourage meaningful citizen engagement.

EU-wide inventory 

In order to get an evidence base of citizen science activities that can support environmental policies in the European Union (EU), the European Commission (DG ENV, with the support of DG JRC) contracted Bio Innovation Service (FR), in association with Fundacion Ibercivis (ES) and The Natural History Museum (UK), to perform a “Study on an inventory of citizen science activities for environmental policies”.

The first objective was to develop an inventory of citizen science projects relevant for environmental policy and assess how these projects contribute to the Sustainable Development Goals (SDGs) set by the United Nations (UN) General Assembly.

To this end, desk research and an EU-wide survey were used to identify 503 citizen science projects of relevance to environmental policy.

The study demonstrates the breadth of citizen science that can be of relevance to environmental policy…. Three salient features were found:

  • Government support, not only in funding but also through active participation in the design and implementation of the project, appears to be a key factor for the successful uptake of citizen science in environmental policy.
  • When the engagement process is easy for citizens, that is, when projects require limited effort and few a priori skills, policy uptake is facilitated.
  • Scientific aspects, on the other hand, did not appear to affect the policy uptake of the analysed projects, but they were a strong determinant of how well a project could serve policy: projects with high scientific standards and endorsed by scientists served more phases of the environmental policy cycle.

In conclusion, this study demonstrates that citizen science has the potential to be a cost-effective way to contribute to policy and highlights the importance of fostering a diversity of citizen science activities and their innovativeness….(More)”.

Data scores


Data-scores.org: “Data scores that combine data from a variety of both online and offline activities are becoming a way to categorize citizens, allocate services, and predict future behavior. Yet little is known about the implementation of data-driven systems and algorithmic processes in public services and how citizens are increasingly ‘scored’ based on the collection and combination of data.

As part of our project ‘Data Scores as Governance’ we have developed a tool to map and investigate the uses of data analytics and algorithms in public services in the UK. This tool is designed to facilitate further research and investigation into this topic and to advance public knowledge and understanding.

The tool is made up of a collection of documents from different sources that can be searched and mapped according to different categories. The database consists of more than 5300 unverified documents that have been scraped based on a number of search terms relating to data systems in government. This is an incomplete and on-going data-set. You can read more in our Methodology section….(More)”.
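As a hypothetical illustration of the kind of query such a tool supports (the excerpt does not describe the project's actual schema or API), the sketch below filters a small in-memory document collection by search term and tallies hits per source category; all field names and records are invented.

```python
# Hypothetical sketch: keyword search over a scraped document collection,
# with hits grouped by a source category. Fields and records are invented.
from collections import Counter

documents = [
    {"title": "Council risk-scoring pilot", "source": "local authority",
     "text": "predictive analytics used to allocate family support services"},
    {"title": "Fraud detection procurement notice", "source": "central government",
     "text": "machine learning supplier for benefit fraud risk scores"},
]

def search(docs, term):
    # Return documents whose title or text mentions the search term.
    term = term.lower()
    return [d for d in docs
            if term in d["title"].lower() or term in d["text"].lower()]

hits = search(documents, "risk")
print(Counter(d["source"] for d in hits))   # hits per source category
```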

A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science


Report by Micah Altman and Chris Bourg: “…The overarching question these problems pose is how to create a global scholarly knowledge ecosystem that supports participation; ensures agency, equitable access, trustworthiness, and integrity; and is legally, economically, institutionally, technically, and socially sustainable. The aim of the Grand Challenges Summit and this report is to identify broad research areas and questions to be explored in order to provide an evidence base from which to answer specific aspects of that broad question.

Reaching this future state requires exploring a set of interrelated anthropological, behavioral, computational, economic, legal, policy, organizational, sociological, and technological areas. The extent of these areas of research is illustrated by the following exemplars:

What is necessary to develop coherent, comprehensive, and empirically testable theories of the value of scholarly knowledge to society? What is the best current evidence of this value, and what does it elide? How should the measures of use and utility of scholarly outputs be adapted for different communities of use, disciplines, theories, and cultures? What methods will improve our predictions of the future value of collections of information, or enable the selection and construction of collections that will be likely to be of value in the future?…

What parts of the scholarly knowledge ecosystem promote the values of transparency, individual agency, participation, accountability, and fairness? How can these values be reflected in the algorithms, information architecture, and technological systems supporting the scholarly knowledge ecosystem? What principles of design and governance would be effective for embedding these values?…

The list above provides a partial outline of research areas that will need to be addressed in order to overcome the major barriers to a better future for scholarly communication and information science. As the field progresses in exploring these areas and attempting to address these barriers, new areas are likely to be identified. Even within this initial list of research areas, there are many pressing questions ripe for exploration….

Research on open scholarship solutions is needed to assess the scale and breadth of access,[68] the costs to actors and stakeholders at all levels, and the effects of openness on perceptions of trust and confidence in research and research organizations. Research is also needed in the intersection between open scholarship and participation, new forms of scholarship, information integrity, information durability, and information agency (see section 3.1.). This will require an assessment of the costs and returns of open scholarship at a systemic level, rather than at the level of individual institutions or actors. We also need to assess whether and under what conditions interventions directed at removing reputation and institutional barriers to collaboration promote open scholarship. Research is likewise required to document the conditions under which open scholarship reduces duplication and inefficiency, and promotes equity in the creation and use of knowledge. In addition, research should address the permeability of open scholarship systems to researchers across multiple scientific fields, and whether—and under what conditions—open scholarship enhances interdisciplinary collaboration….(More)”.

Draft Ethics guidelines for trustworthy AI


Working document by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG): “…Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.

Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as a means in itself, but as having the goal to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.

Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

These Guidelines therefore set out a framework for Trustworthy AI:

  • Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
  • From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
  • Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases. …(More)”